diff --git a/pr-631/design/autoware-architecture/index.html b/pr-631/design/autoware-architecture/index.html index 136e69c2a19..1b00e7322ce 100644 --- a/pr-631/design/autoware-architecture/index.html +++ b/pr-631/design/autoware-architecture/index.html @@ -7633,7 +7633,7 @@

Introduction

High-level architecture design#

Overview

-

Autoware's architecture consists of the following six stacks. Each linked page contains a more detailed set of requirements and use cases specific to that stack:

+

Autoware's architecture consists of the following seven stacks. Each linked page contains a more detailed set of requirements and use cases specific to that stack:

  • The following are under consideration.

    Sequence#

    -

    uml diagram

    +

    uml diagram

    -

    Before the velocity input localization interface, module vehicle_velocity_converter converts message type autoware_vehicle_msgs/msg/VelocityReport to geometry_msgs/msg/TwistWithCovarianceStamped.

    +

Before the velocity input of the localization interface, the autoware_vehicle_velocity_converter module converts the message type autoware_vehicle_msgs/msg/VelocityReport to geometry_msgs/msg/TwistWithCovarianceStamped.
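
    As a rough illustration only (not the converter's actual implementation), the conversion can be thought of as copying the reported velocities into a twist message and attaching a configured covariance; the field names below follow the two message definitions, while the covariance value is a placeholder:

    #include <autoware_vehicle_msgs/msg/velocity_report.hpp>
    #include <geometry_msgs/msg/twist_with_covariance_stamped.hpp>

    // Hedged sketch: the covariance value is a placeholder, not the package's real parameter.
    geometry_msgs::msg::TwistWithCovarianceStamped to_twist_with_covariance(
      const autoware_vehicle_msgs::msg::VelocityReport & report)
    {
      geometry_msgs::msg::TwistWithCovarianceStamped twist;
      twist.header = report.header;
      twist.twist.twist.linear.x = report.longitudinal_velocity;  // forward velocity [m/s]
      twist.twist.twist.linear.y = report.lateral_velocity;
      twist.twist.twist.angular.z = report.heading_rate;
      twist.twist.covariance[0] = 0.1;  // placeholder variance for linear.x
      return twist;
    }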

    Outputs#

    Vehicle pose#

    Current pose of ego, calculated from localization interface.

    diff --git a/pr-631/how-to-guides/integrating-autoware/creating-maps/open-source-slam/index.html b/pr-631/how-to-guides/integrating-autoware/creating-maps/open-source-slam/index.html index c5bbd082d8a..bd569bde43e 100644 --- a/pr-631/how-to-guides/integrating-autoware/creating-maps/open-source-slam/index.html +++ b/pr-631/how-to-guides/integrating-autoware/creating-maps/open-source-slam/index.html @@ -7634,10 +7634,19 @@

    List of Third Party +DLIO +Direct LiDAR-Inertial Odometry is a new lightweight LiDAR-inertial odometry algorithm with a novel coarse-to-fine approach in constructing continuous-time trajectories for precise motion correction +github.com/vectr-ucla/direct_lidar_inertial_odometry +✔️ +Lidar
    IMU +ROS 2 +PCL
    Eigen
    OpenMP + + FAST-LIO-LC A computationally efficient and robust LiDAR-inertial odometry package with loop closure module and graph optimization -https://github.com/yanliang-wang/FAST_LIO_LC -✓ +github.com/yanliang-wang/FAST_LIO_LC +✔️ Lidar
    IMU
    GPS [Optional] ROS 1 ROS Melodic
    PCL >= 1.8
    Eigen >= 3.3.4
    GTSAM >= 4.0.0 @@ -7645,8 +7654,8 @@

    List of Third Party FAST_LIO_SLAM FAST_LIO_SLAM is the integration of FAST_LIO and SC-PGO which is scan context based loop detection and GTSAM based pose-graph optimization -https://github.com/gisbi-kim/FAST_LIO_SLAM -✓ +github.com/gisbi-kim/FAST_LIO_SLAM +✔️ Lidar
    IMU
    GPS [Optional] ROS 1 PCL >= 1.8
    Eigen >= 3.3.4 @@ -7654,17 +7663,26 @@

    List of Third Party FD-SLAM FD_SLAM is Feature&Distribution-based 3D LiDAR SLAM method based on Surface Representation Refinement. In this algorithm novel feature-based Lidar odometry used for fast scan-matching, and used a proposed UGICP method for keyframe matching -https://github.com/SLAMWang/FD-SLAM -✓ +github.com/SLAMWang/FD-SLAM +✔️ Lidar
    IMU [Optional]
    GPS ROS 1 PCL
    g2o
    Suitesparse +GenZ-ICP +GenZ-ICP is a Generalizable and Degeneracy-Robust LiDAR Odometry Using an Adaptive Weighting +github.com/cocel-postech/genz-icp +❌ +Lidar +ROS 2 +No extra dependency + + hdl_graph_slam An open source ROS package for real-time 6DOF SLAM using a 3D LIDAR. It is based on 3D Graph SLAM with NDT scan matching-based odometry estimation and loop detection. It also supports several graph constraints, such as GPS, IMU acceleration (gravity vector), IMU orientation (magnetic sensor), and floor plane (detected in a point cloud) -https://github.com/koide3/hdl_graph_slam -✓ +github.com/koide3/hdl_graph_slam +✔️ Lidar
    IMU [Optional]
    GPS [Optional] ROS 1 PCL
    g2o
    OpenMP @@ -7672,8 +7690,8 @@

    List of Third Party IA-LIO-SAM IA_LIO_SLAM is created for data acquisition in unstructured environment and it is a framework for Intensity and Ambient Enhanced Lidar Inertial Odometry via Smoothing and Mapping that achieves highly accurate robot trajectories and mapping -https://github.com/minwoo0611/IA_LIO_SAM -✓ +github.com/minwoo0611/IA_LIO_SAM +✔️ Lidar
    IMU
    GPS ROS 1 GTSAM @@ -7681,35 +7699,62 @@

    List of Third Party ISCLOAM ISCLOAM presents a robust loop closure detection approach by integrating both geometry and intensity information -https://github.com/wh200720041/iscloam -✓ +github.com/wh200720041/iscloam +✔️ Lidar ROS 1 Ubuntu 18.04
    ROS Melodic
    Ceres
    PCL
    GTSAM
    OpenCV +KISS-ICP +A simple and fast ICP algorithm for 3D point cloud registration +github.com/PRBonn/kiss-icp +❌ +Lidar +ROS 2 +No extra dependency + + +Kinematic-ICP +Kinematic-ICP is a LiDAR odometry approach that explicitly incorporates the kinematic constraints of mobile robots into the classic point-to-point ICP algorithm. +github.com/PRBonn/kinematic-icp +❌ +Lidar
    Odometry +ROS 2 +No extra dependency + + LeGO-LOAM-BOR LeGO-LOAM-BOR is improved version of the LeGO-LOAM by improving quality of the code, making it more readable and consistent. Also, performance is improved by converting processes to multi-threaded approach -https://github.com/facontidavide/LeGO-LOAM-BOR -✓ +github.com/facontidavide/LeGO-LOAM-BOR ROS2 fork: /github.com/eperdices/LeGO-LOAM-SR +✔️ Lidar
    IMU -ROS 1 -ROS Melodic
    PCL
    GTSAM +ROS 1
    ROS 2 +ROS 1/2
    PCL
    GTSAM LIO_SAM A framework that achieves highly accurate, real-time mobile robot trajectory estimation and map-building. It formulates lidar-inertial odometry atop a factor graph, allowing a multitude of relative and absolute measurements, including loop closures, to be incorporated from different sources as factors into the system -https://github.com/TixiaoShan/LIO-SAM -✓ +github.com/TixiaoShan/LIO-SAM +✔️ +Lidar
    IMU
    GPS [Optional] +ROS 1
    ROS 2 +PCL
    GTSAM + + +li_slam_ros2 +li_slam package is a combination of lidarslam_ros2 and the LIO-SAM IMU composite method. +github.com/rsasaki0109/li_slam_ros2 +✔️ Lidar
    IMU
    GPS [Optional] -ROS 1
    ROS 2 +ROS 2 PCL
    GTSAM Optimized-SC-F-LOAM An improved version of F-LOAM and uses an adaptive threshold to further judge the loop closure detection results and reducing false loop closure detections. Also it uses feature point-based matching to calculate the constraints between a pair of loop closure frame point clouds and decreases time consumption of constructing loop frame constraints -https://github.com/SlamCabbage/Optimized-SC-F-LOAM -✓ +github.com/SlamCabbage/Optimized-SC-F-LOAM +✔️ Lidar ROS 1 PCL
    GTSAM
    Ceres @@ -7717,8 +7762,8 @@

    List of Third Party SC-A-LOAM A real-time LiDAR SLAM package that integrates A-LOAM and ScanContext. -https://github.com/gisbi-kim/SC-A-LOAM -✓ +github.com/gisbi-kim/SC-A-LOAM +✔️ Lidar ROS 1 GTSAM >= 4.0 @@ -7726,8 +7771,8 @@

    List of Third Party SC-LeGO-LOAM SC-LeGO-LOAM integrated LeGO-LOAM for lidar odometry and 2 different loop closure methods: ScanContext and Radius search based loop closure. While ScanContext is correcting large drifts, radius search based method is good for fine-stitching -https://github.com/irapkaist/SC-LeGO-LOAM -✓ +github.com/gisbi-kim/SC-LeGO-LOAM +✔️ Lidar
    IMU ROS 1 PCL
    GTSAM diff --git a/pr-631/how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-sensor-model/index.html b/pr-631/how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-sensor-model/index.html index 1370c9a43b2..b23a8204a87 100644 --- a/pr-631/how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-sensor-model/index.html +++ b/pr-631/how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-sensor-model/index.html @@ -8037,14 +8037,14 @@

    Sensor launch -

    The sensing.launch.xml also launches vehicle_velocity_converter package +

The sensing.launch.xml also launches the autoware_vehicle_velocity_converter package to convert the autoware_auto_vehicle_msgs::msg::VelocityReport message to geometry_msgs::msg::TwistWithCovarianceStamped for the gyro_odometer node. So, be sure your vehicle_interface publishes the /vehicle/status/velocity_status topic with the autoware_auto_vehicle_msgs::msg::VelocityReport type, or update input_vehicle_velocity_topic in sensing.launch.xml.

        ...
    -    <include file="$(find-pkg-share vehicle_velocity_converter)/launch/vehicle_velocity_converter.launch.xml">
    +    <include file="$(find-pkg-share autoware_vehicle_velocity_converter)/launch/vehicle_velocity_converter.launch.xml">
     -     <arg name="input_vehicle_velocity_topic" value="/vehicle/status/velocity_status"/>
     +     <arg name="input_vehicle_velocity_topic" value="<YOUR-VELOCITY-STATUS-TOPIC>"/>
           <arg name="output_twist_with_covariance" value="/sensing/vehicle_velocity_converter/twist_with_covariance"/>
    @@ -8124,7 +8124,7 @@ 

    Lidar Launchinghesai_PandarQT64.launch.xml as an example.

    The nebula_node_container.py creates the Lidar pipeline for autoware, -the pointcloud preprocessing pipeline is constructed for each lidar please check pointcloud_preprocessor package for filters information as well.

+the pointcloud preprocessing pipeline is constructed for each lidar; please check the autoware_pointcloud_preprocessor package for information about the filters as well.

For example, if you want to change your outlier_filter method, you can modify the pipeline components as follows:

        nodes.append(
    diff --git a/pr-631/how-to-guides/integrating-autoware/creating-vehicle-interface-package/creating-vehicle-interface/index.html b/pr-631/how-to-guides/integrating-autoware/creating-vehicle-interface-package/creating-vehicle-interface/index.html
    index f918c878567..d9576defe44 100644
    --- a/pr-631/how-to-guides/integrating-autoware/creating-vehicle-interface-package/creating-vehicle-interface/index.html
    +++ b/pr-631/how-to-guides/integrating-autoware/creating-vehicle-interface-package/creating-vehicle-interface/index.html
    @@ -7836,7 +7836,7 @@ 

    2. Install or impleme /vehicle/status/turn_indicators_status autoware_auto_vehicle_msgs/msg/TurnIndicatorsReport -This topic reports the steering status of the vehicle. Please check SteeringReport message type for detailed information. +This topic reports the turn indicators status of the vehicle. Please check TurnIndicatorsReport message type for detailed information. /vehicle/status/steering_status @@ -7844,7 +7844,7 @@

    2. Install or impleme This topic reports the steering status of the vehicle. Please check SteeringReport message type for detailed information. -/vehicle/status/velocity_Status +/vehicle/status/velocity_status autoware_auto_vehicle_msgs/msg/VelocityReport This topic gives us the velocity status of the vehicle. Please check VelocityReport message type for detailed information. diff --git a/pr-631/how-to-guides/integrating-autoware/launch-autoware/localization/index.html b/pr-631/how-to-guides/integrating-autoware/launch-autoware/localization/index.html index 25981210889..b4ea6317e8a 100644 --- a/pr-631/how-to-guides/integrating-autoware/launch-autoware/localization/index.html +++ b/pr-631/how-to-guides/integrating-autoware/launch-autoware/localization/index.html @@ -7691,7 +7691,7 @@

    tier4_localization_component.laun
    + <arg name="input_vehicle_twist_with_covariance_topic" value="<YOUR-VEHICLE-TWIST-TOPIC-NAME>"/>
     

    Note: Gyro odometer input topic provided from velocity converter package. This package will be launched at sensor_kit. For more information, -please check velocity converter package.

+please check the velocity converter package.

    Note when using non NDT pose estimator#

    Note

    diff --git a/pr-631/search/search_index.json b/pr-631/search/search_index.json index 9aa0bece061..6c0311772ac 100644 --- a/pr-631/search/search_index.json +++ b/pr-631/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Introduction","text":""},{"location":"#autoware-documentation","title":"Autoware Documentation","text":"

    Welcome to the Autoware Documentation! Here, you'll find comprehensive information about Autoware, the forefront of open-source autonomous driving software.

    "},{"location":"#about-autoware","title":"About Autoware","text":"

    Autoware is the world\u2019s leading open-source project dedicated to autonomous driving technology. Built on the Robot Operating System (ROS 2), Autoware facilitates the commercial deployment of autonomous vehicles across various platforms and applications. Discover more about our innovative project at the Autoware Foundation website.

    "},{"location":"#getting-started","title":"Getting started","text":""},{"location":"#step-into-the-world-of-autoware","title":"Step into the World of Autoware","text":"

    Discover the Concept: Start your Autoware journey by exploring the Concept pages. Here, you'll delve into the core ideas behind Autoware, understand its unique features and how it stands out in the field of autonomous driving technology. Learn about the principles guiding its development and the innovative solutions it offers.

    "},{"location":"#set-up-your-environment","title":"Set Up Your Environment","text":"

    Install Autoware: Once you have a grasp of the Autoware philosophy, it's time to bring it into your world. Visit our Installation section for detailed, step-by-step instructions to seamlessly install Autoware and related tools in your environment. Whether you're a beginner or an expert, these pages are designed to guide you through a smooth setup process.

    "},{"location":"#dive-deeper","title":"Dive Deeper","text":"

    Hands-on Experience: After installation, head over to the Tutorials pages. These tutorials are designed to provide you with hands-on experience with some simulations, allowing you to apply what you've learned and gain practical skills. From basic operations to advanced applications, these tutorials cover a wide range of topics to enhance your understanding of Autoware.

    Advanced Topics: As you become more familiar with Autoware, the How-to Guides will be your next destination. These pages delve into more advanced topics, offering deeper insights and more complex scenarios. They're perfect for those looking to expand their knowledge and expertise in specific areas of Autoware. Additionally, this section also covers methods for integrating Autoware with real vehicles, providing practical guidance for those seeking to apply Autoware in real-world settings.

    "},{"location":"#understand-the-design","title":"Understand the Design","text":"

    Design Concepts: For those interested in the architecture and design philosophy of Autoware, the Design pages are a treasure trove. Learn about the architectural decisions, design patterns, and the rationale behind the development of Autoware's various components.

    "},{"location":"#more-information","title":"More Information","text":"

    Datasets: The Datasets pages are a crucial resource for anyone working with Autoware. Here, you'll find a collection of datasets that are compatible with Autoware, providing real-world scenarios and data to test and refine your autonomous driving solutions.

    Support: As you dive deeper into Autoware, you might encounter challenges or have questions. The Support section is designed to assist you in these moments. This area includes FAQs, community forums, and contact information for getting help with Autoware. It's also a great place to connect with other Autoware users and contributors, sharing experiences and solutions.

    Competitions: The Competitions pages are where excitement meets innovation. Autoware regularly hosts or participates in various competitions and challenges, providing a platform for users to test their skills, showcase their work, and contribute to the community. These competitions range from local to global scale, offering opportunities for both beginners and experts to engage and excel. Stay updated with upcoming events and take part in the advancement of autonomous driving technologies.

    "},{"location":"#contribute-and-collaborate","title":"Contribute and Collaborate","text":"

    Become a Part of the Autoware Story: Your journey with Autoware isn't just about learning and using; it's also about contributing and shaping the future of autonomous driving. The Contributing section is more than just a guide; it's an invitation to be part of something bigger. Whether you're a coder, a writer, or an enthusiast, your contributions can propel Autoware forward. Join us, and let's build the future together.

    "},{"location":"#related-documentations","title":"Related Documentations","text":"

    In addition to this page, there are several related documentations to further your knowledge and understanding of Autoware:

• Autoware Universe Documentation contains technical documentation for each component/function such as localization, planning, etc.
• Autoware Tools Documentation contains technical documentation for each tool for autonomous driving such as performance analysis, calibration, etc.
    "},{"location":"autoware-competitions/","title":"Autoware Competitions","text":""},{"location":"autoware-competitions/#autoware-competitions","title":"Autoware Competitions","text":"

    This page is a collection of the links to the competitions that are related to the Autoware Foundation.

    Title Status Description Ongoing Autoware / TIER IV Challenge 2023 Date: May 15, 2023 - Nov. 1st, 2023 As one of the main contributors of Autoware, TIER IV has been facing many difficult challenges through development, and TIER IV would like to sponsor a challenge to solve such engineering challenges. Any researchers, students, individuals or organizations are welcome to participate and submit their solution to any of the challenges we propose. Finished Japan Automotive AI Challenge 2023 (Integration) Registration: June 5, 2023 - July 14, 2023 Qualifiers: July 3, 2023 - Aug. 31, 2023 Finals: Nov. 12, 2023 In this competition, we focus on challenging tasks posed by autonomous driving in factory environments and aim to develop Autoware-based AD software that can overcome them. The qualifiers use the digital twin autonomous driving simulator AWSIM to complete specific tasks within a virtual environment. Teams that make it to the finals have the opportunity to run their software on actual vehicles in a test course in Japan. Ongoing Japan Automotive AI Challenge 2023 (Simulation) Registration: Nov 6, 2023 - Dec 28, 2023 Date: Dec 4, 2023 - Jan. 31, 2024 This contest is a software development contest with a motorsports theme. Participants will develop autonomous driving software based on Autoware.Universe, and integrate it into a racing car that runs in the End to End simulation space (AWSIM). The goal is simple, win the race while driving safely!"},{"location":"autoware-competitions/#proposing-new-competition","title":"Proposing New Competition","text":"

If you want to add a new competition to this page, please propose it in a TSC meeting and get confirmation from the AWF.

    "},{"location":"contributing/","title":"Contributing","text":""},{"location":"contributing/#contributing","title":"Contributing","text":"

    Thank you for your interest in contributing! Autoware is supported by people like you, and all types and sizes of contribution are welcome.

    As a contributor, here are the guidelines that we would like you to follow for Autoware and its associated repositories.

    • Code of Conduct
    • What should I know before I get started?
      • Autoware concepts
      • Contributing to open source projects
    • How can I get help?
    • How can I contribute?
      • Participate in discussions
      • Join a working group
      • Report bugs
      • Make a pull request

    Like Autoware itself, these guidelines are being actively developed and suggestions for improvement are always welcome! Guideline changes can be proposed by creating a discussion in the Ideas category.

    "},{"location":"contributing/#code-of-conduct","title":"Code of Conduct","text":"

    To ensure the Autoware community stays open and inclusive, please follow the Code of Conduct.

    If you believe that someone in the community has violated the Code of Conduct, please make a report by emailing conduct@autoware.org.

    "},{"location":"contributing/#what-should-i-know-before-i-get-started","title":"What should I know before I get started?","text":""},{"location":"contributing/#autoware-concepts","title":"Autoware concepts","text":"

    To gain a high-level understanding of Autoware's architecture and design, the following pages provide a brief overview:

    • Autoware architecture
    • Autoware concepts

    For experienced developers, the Autoware interfaces and individual component pages should also be reviewed to understand the inputs and outputs for each component or module at a more detailed level.

    "},{"location":"contributing/#contributing-to-open-source-projects","title":"Contributing to open source projects","text":"

    If you are new to open source projects, we recommend reading GitHub's How to Contribute to Open Source guide for an overview of why people contribute to open source projects, what it means to contribute and much more besides.

    "},{"location":"contributing/#how-can-i-get-help","title":"How can I get help?","text":"

    Do not open issues for general support questions as we want to keep GitHub issues for confirmed bug reports. Instead, open a discussion in the Q&A category. For more details on the support mechanisms for Autoware, refer to the Support guidelines.

    Note

    Issues created for questions or unconfirmed bugs will be moved to GitHub discussions by the maintainers.

    "},{"location":"contributing/#how-can-i-contribute","title":"How can I contribute?","text":""},{"location":"contributing/#discussions","title":"Discussions","text":"

    You can contribute to Autoware by facilitating and participating in discussions, such as:

    • Proposing a new feature to enhance Autoware
    • Joining an existing discussion and expressing your opinion
    • Organizing discussions for other contributors
    • Answering questions and supporting other contributors
    "},{"location":"contributing/#working-groups","title":"Working groups","text":"

    The various working groups within the Autoware Foundation are responsible for accomplishing goals set by the Technical Steering Committee. These working groups are open to everyone, and joining a particular working group will allow you to gain an understanding of current projects, see how those projects are managed within each group and to contribute to issues that will help progress a particular project.

    To see the schedule for upcoming working group meetings, refer to the Autoware Foundation events calendar.

    "},{"location":"contributing/#bug-reports","title":"Bug reports","text":"

    Before you report a bug, please search the issue tracker for the appropriate repository. It is possible that someone has already reported the same issue and that workarounds exist. If you can't determine the appropriate repository, ask the maintainers for help by creating a new discussion in the Q&A category.

    When reporting a bug, you should provide a minimal set of instructions to reproduce the issue. Doing so allows us to quickly confirm and focus on the right problem.

If you want to fix the bug yourself, that will be appreciated, but you should discuss possible approaches with the maintainers in the issue before submitting a pull request.

    Creating an issue is straightforward, but if you happen to experience any problems then create a Q&A discussion to ask for help.

    "},{"location":"contributing/#pull-requests","title":"Pull requests","text":"

    You can submit pull requests for small changes such as:

    • Minor documentation updates
    • Fixing spelling mistakes
    • Fixing CI failures
    • Fixing warnings detected by compilers or analysis tools
    • Making small changes to a single package

    If your pull request is a large change, the following process should be followed:

    1. Create a GitHub Discussion to propose the change. Doing so allows you to get feedback from other members and the Autoware maintainers and to ensure that the proposed change is in line with Autoware's design philosophy and current development plans. If you're not sure where to have that conversation, then create a new Q&A discussion.

    2. Create an issue following consensus in the discussions

    3. Create a pull request to implement the changes that references the Issue created in step 2

    4. Create documentation for the new addition (if relevant)

    Examples of large changes include:

    • Adding a new feature to Autoware
    • Adding a new documentation page or section

    For more information on how to submit a good pull request, have a read of the pull request guidelines and don't forget to review the required license notations!

    "},{"location":"contributing/license/","title":"License","text":""},{"location":"contributing/license/#license","title":"License","text":"

    Autoware is licensed under Apache License 2.0. Thus all contributions will be licensed as such as per clause 5 of the Apache License 2.0:

    5. Submission of Contributions. Unless You explicitly state otherwise,\n   any Contribution intentionally submitted for inclusion in the Work\n   by You to the Licensor shall be under the terms and conditions of\n   this License, without any additional terms or conditions.\n   Notwithstanding the above, nothing herein shall supersede or modify\n   the terms of any separate license agreement you may have executed\n   with Licensor regarding such Contributions.\n

    Here is an example copyright header to add to the top of a new file:

    Copyright [first year of contribution] The Autoware Contributors\nSPDX-License-Identifier: Apache-2.0\n

    We don't write copyright notations of each contributor here. Instead, we place them in the NOTICE file like the following.

    This product includes code developed by [company name].\nCopyright [first year of contribution] [company name]\n

    Let us know if your legal department has a special request for the copyright notation.

    Currently, the old formats explained here are also acceptable. Those old formats can be replaced by this new format if the original authors agree. Note that we won't write their copyrights to the NOTICE file unless they agree with the new format.

    References:

    • https://opensource.google/docs/copyright/#the-year
    • https://www.linuxfoundation.org/blog/copyright-notices-in-open-source-software-projects/
    • https://www.apache.org/licenses/LICENSE-2.0
    • https://www.apache.org/legal/src-headers.html
    • https://www.apache.org/foundation/license-faq.html
    • https://infra.apache.org/licensing-howto.html
    "},{"location":"contributing/coding-guidelines/","title":"Coding guidelines","text":""},{"location":"contributing/coding-guidelines/#coding-guidelines","title":"Coding guidelines","text":"

    Warning

    Under Construction

    "},{"location":"contributing/coding-guidelines/#common-guidelines","title":"Common guidelines","text":"

    Refer to the following links for now:

    • https://docs.ros.org/en/humble/Contributing/Developer-Guide.html

    Also, keep in mind the following concepts.

    • Keep things consistent.
    • Automate where possible, using simple checks for formatting, syntax, etc.
    • Write comments and documentation in English.
• Functions that are too complex (low cohesion) should be appropriately split into smaller functions (e.g. fewer than 40 lines per function is recommended in the Google style guide).
    • Try to minimize the use of member variables or global variables that have a large scope.
    • Whenever possible, break large pull requests into smaller, manageable PRs (less than 200 lines of change is recommended in some research e.g. here).
    • When it comes to code reviews, don't spend too much time on trivial disagreements. For details see:
      • https://en.wikipedia.org/wiki/Law_of_triviality
      • https://steemit.com/programming/@emrebeyler/code-reviews-and-parkinson-s-law-of-triviality
    • Please follow the guidelines for each language.
      • C++
      • Python
      • Shell script
    "},{"location":"contributing/coding-guidelines/#autoware-style-guide","title":"Autoware Style Guide","text":"

    For Autoware-specific styles, refer to the following:

    • Use the autoware_ prefix for package names.
      • cf. Directory structure guideline, Package name
      • cf. Prefix packages with autoware_
    • Add implementations within the autoware namespace.
      • cf. Class design guideline, Namespaces
      • cf. Prefix packages with autoware_, Option 3:
    • The header files to be exported must be placed in the PACKAGE_NAME/include/autoware/ directory.
      • cf. Directory structure guideline, Exporting headers
    • In CMakeLists.txt, use autoware_package().
      • cf. autoware_cmake README
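
A minimal sketch tying these rules together, using a hypothetical package autoware_foo_filter (not a real package):

// Hypothetical package autoware_foo_filter, shown only to illustrate the rules above.\n// Exported header path: autoware_foo_filter/include/autoware/foo_filter/foo_filter_node.hpp\n#include <rclcpp/rclcpp.hpp>\n\nnamespace autoware::foo_filter\n{\nclass FooFilter : public rclcpp::Node\n{\npublic:\n  explicit FooFilter(const rclcpp::NodeOptions & options) : Node(\"foo_filter\", options) {}\n};\n}  // namespace autoware::foo_filter\n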
    "},{"location":"contributing/coding-guidelines/languages/cmake/","title":"CMake","text":""},{"location":"contributing/coding-guidelines/languages/cmake/#cmake","title":"CMake","text":"

    Warning

    Under Construction

    Refer to the following links for now:

    • https://docs.ros.org/en/humble/Contributing/Code-Style-Language-Versions.html#cmake
    "},{"location":"contributing/coding-guidelines/languages/cmake/#use-the-autoware_package-macro","title":"Use the autoware_package macro","text":"

    To reduce duplications in CMakeLists.txt, there is the autoware_package() macro. See the README and use it in your package.

    "},{"location":"contributing/coding-guidelines/languages/cpp/","title":"C++","text":""},{"location":"contributing/coding-guidelines/languages/cpp/#c","title":"C++","text":"

    Warning

    Under Construction

    "},{"location":"contributing/coding-guidelines/languages/cpp/#references","title":"References","text":"

    Follow the guidelines below if a rule is not defined on this page.

    1. https://docs.ros.org/en/humble/Contributing/Code-Style-Language-Versions.html
    2. https://www.autosar.org/fileadmin/standards/adaptive/22-11/AUTOSAR_RS_CPP14Guidelines.pdf
    3. https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines

    Also, it is encouraged to apply Clang-Tidy to each file. For the usage, see Applying Clang-Tidy to ROS packages.

    Note that not all rules are covered by Clang-Tidy.

    "},{"location":"contributing/coding-guidelines/languages/cpp/#style-rules","title":"Style rules","text":""},{"location":"contributing/coding-guidelines/languages/cpp/#include-header-files-in-the-defined-order-required-partially-automated","title":"Include header files in the defined order (required, partially automated)","text":""},{"location":"contributing/coding-guidelines/languages/cpp/#rationale","title":"Rationale","text":"
• Due to indirect dependencies, the C++ include system can behave differently depending on the header order.
    • To reduce unintended bugs, local header files should come first.
    "},{"location":"contributing/coding-guidelines/languages/cpp/#reference","title":"Reference","text":"
    • https://llvm.org/docs/CodingStandards.html#include-style
    "},{"location":"contributing/coding-guidelines/languages/cpp/#example","title":"Example","text":"

    Include the headers in the following order:

    • Main module header
    • Local package headers
    • Other package headers
    • Message headers
    • Boost headers
    • C system headers
    • C++ system headers
    // Compliant\n#include \"my_header.hpp\"\n\n#include \"my_package/foo.hpp\"\n\n#include <package1/foo.hpp>\n#include <package2/bar.hpp>\n\n#include <std_msgs/msg/header.hpp>\n\n#include <iostream>\n#include <vector>\n

    If you use \"\" and <> properly, ClangFormat in pre-commit sorts headers automatically.

    Do not define macros between #include lines because it prevents automatic sorting.

    // Non-compliant\n#include <package1/foo.hpp>\n#include <package2/bar.hpp>\n\n#define EIGEN_MPL2_ONLY\n#include \"my_header.hpp\"\n#include \"my_package/foo.hpp\"\n\n#include <Eigen/Core>\n\n#include <std_msgs/msg/header.hpp>\n\n#include <iostream>\n#include <vector>\n

    Instead, define macros before #include lines.

    // Compliant\n#define EIGEN_MPL2_ONLY\n\n#include \"my_header.hpp\"\n\n#include \"my_package/foo.hpp\"\n\n#include <Eigen/Core>\n#include <package1/foo.hpp>\n#include <package2/bar.hpp>\n\n#include <std_msgs/msg/header.hpp>\n\n#include <iostream>\n#include <vector>\n

    If there are any reasons for defining macros at a specific position, write a comment before the macro.

    // Compliant\n#include \"my_header.hpp\"\n\n#include \"my_package/foo.hpp\"\n\n#include <package1/foo.hpp>\n#include <package2/bar.hpp>\n\n#include <std_msgs/msg/header.hpp>\n\n#include <iostream>\n#include <vector>\n\n// For the foo bar reason, the FOO_MACRO must be defined here.\n#define FOO_MACRO\n#include <foo/bar.hpp>\n
    "},{"location":"contributing/coding-guidelines/languages/cpp/#use-lower-snake-case-for-function-names-required-partially-automated","title":"Use lower snake case for function names (required, partially automated)","text":""},{"location":"contributing/coding-guidelines/languages/cpp/#rationale_1","title":"Rationale","text":"
    • It is consistent with the C++ standard library.
    • It is consistent with other programming languages such as Python and Rust.
    "},{"location":"contributing/coding-guidelines/languages/cpp/#exception","title":"Exception","text":"
    • For member functions of classes inherited from external project classes such as Qt, follow that naming convention.
    "},{"location":"contributing/coding-guidelines/languages/cpp/#reference_1","title":"Reference","text":"
    • https://docs.ros.org/en/humble/The-ROS2-Project/Contributing/Code-Style-Language-Versions.html#function-and-method-naming
    "},{"location":"contributing/coding-guidelines/languages/cpp/#example_1","title":"Example","text":"
    void function_name()\n{\n}\n
    "},{"location":"contributing/coding-guidelines/languages/cpp/#use-upper-camel-case-for-enum-names-required-partially-automated","title":"Use upper camel case for enum names (required, partially automated)","text":""},{"location":"contributing/coding-guidelines/languages/cpp/#rationale_2","title":"Rationale","text":"
    • It is consistent with ROS 2 core packages.
    "},{"location":"contributing/coding-guidelines/languages/cpp/#exception_1","title":"Exception","text":"
    • Enums defined in the rosidl file can use other naming conventions.
    "},{"location":"contributing/coding-guidelines/languages/cpp/#reference_2","title":"Reference","text":"
    • http://wiki.ros.org/CppStyleGuide (Refer to \"15. Enumerations\")
    "},{"location":"contributing/coding-guidelines/languages/cpp/#example_2","title":"Example","text":"
    enum class Color\n{\nRed, Green, Blue\n}\n
    "},{"location":"contributing/coding-guidelines/languages/cpp/#use-lower-snake-case-for-constant-names-required-partially-automated","title":"Use lower snake case for constant names (required, partially automated)","text":""},{"location":"contributing/coding-guidelines/languages/cpp/#rationale_3","title":"Rationale","text":"
    • It is consistent with ROS 2 core packages.
    • It is consistent with std::numbers.
    "},{"location":"contributing/coding-guidelines/languages/cpp/#exception_2","title":"Exception","text":"
    • Constants defined in the rosidl file can use other naming conventions.
    "},{"location":"contributing/coding-guidelines/languages/cpp/#reference_3","title":"Reference","text":"
    • https://en.cppreference.com/w/cpp/numeric/constants
    "},{"location":"contributing/coding-guidelines/languages/cpp/#example_3","title":"Example","text":"
    constexpr double gravity = 9.80665;\n
    "},{"location":"contributing/coding-guidelines/languages/cpp/#count-acronyms-and-contractions-of-compound-words-as-one-word-required-partially-automated","title":"Count acronyms and contractions of compound words as one word (required, partially automated)","text":""},{"location":"contributing/coding-guidelines/languages/cpp/#rationale_4","title":"Rationale","text":"
    • To clarify the boundaries of words when acronyms are consecutive.
    "},{"location":"contributing/coding-guidelines/languages/cpp/#reference_4","title":"Reference","text":"
    • https://rust-lang.github.io/api-guidelines/naming.html#casing-conforms-to-rfc-430-c-case
    "},{"location":"contributing/coding-guidelines/languages/cpp/#example_4","title":"Example","text":"
    class RosApi;\nRosApi ros_api;\n
    "},{"location":"contributing/coding-guidelines/languages/cpp/#do-not-use-the-auto-keywords-with-eigens-expressions-required","title":"Do not use the auto keywords with Eigen's expressions (required)","text":""},{"location":"contributing/coding-guidelines/languages/cpp/#rationale_5","title":"Rationale","text":"
    • Avoid using auto with Eigen::Matrix or Eigen::Vector variables, as it can lead to bugs.
    "},{"location":"contributing/coding-guidelines/languages/cpp/#reference_5","title":"Reference","text":"
    • Eigen, C++11 and the auto keyword
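
A brief illustration of the pitfall (sketch only):

#include <Eigen/Core>\n\nEigen::MatrixXd multiply(const Eigen::MatrixXd & a, const Eigen::MatrixXd & b)\n{\n  // Non-compliant: auto would deduce a lazy expression template rather than a concrete\n  // matrix, which can reference temporaries and lead to subtle bugs.\n  // auto product = a * b;\n\n  // Compliant: write the concrete type so the product is evaluated immediately.\n  Eigen::MatrixXd product = a * b;\n  return product;\n}\n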
    "},{"location":"contributing/coding-guidelines/languages/cpp/#use-rclcpp_-eg-rclcpp_info-macros-instead-of-printf-or-stdcout-for-logging-required","title":"Use RCLCPP_* (e.g. RCLCPP_INFO) macros instead of printf or std::cout for logging (required)","text":""},{"location":"contributing/coding-guidelines/languages/cpp/#rationale_6","title":"Rationale","text":"
    • Reasons include the following:
  • It allows for consistent log level management. For instance, with RCLCPP_* macros, you can simply set --log-level to adjust the log level uniformly across the application.
      • You can standardize the format using RCUTILS_CONSOLE_OUTPUT_FORMAT.
      • With RCLCPP_* macros, logs are automatically recorded to /rosout. These logs can be saved to a rosbag, which can then be replayed to review the log data.
    "},{"location":"contributing/coding-guidelines/languages/cpp/#reference_6","title":"Reference","text":"
    • Autoware Documentation for Console logging in ROS Node
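
For example (illustrative fragment):

// Non-compliant\nvoid FooNode::on_timer() {\n  std::cout << \"velocity: \" << velocity_ << std::endl;  // not filterable, not recorded to /rosout\n}\n\n// Compliant\nvoid FooNode::on_timer() {\n  RCLCPP_INFO(get_logger(), \"velocity: %f\", velocity_);  // respects --log-level and is recorded to /rosout\n}\n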
    "},{"location":"contributing/coding-guidelines/languages/docker/","title":"Docker","text":""},{"location":"contributing/coding-guidelines/languages/docker/#docker","title":"Docker","text":"

    Warning

    Under Construction

    Refer to the following links for now:

    • https://github.com/hadolint/hadolint
    "},{"location":"contributing/coding-guidelines/languages/github-actions/","title":"GitHub Actions","text":""},{"location":"contributing/coding-guidelines/languages/github-actions/#github-actions","title":"GitHub Actions","text":"

    Warning

    Under Construction

    Refer to the following links for now:

    • https://docs.github.com/en/actions/guides
    "},{"location":"contributing/coding-guidelines/languages/markdown/","title":"Markdown","text":""},{"location":"contributing/coding-guidelines/languages/markdown/#markdown","title":"Markdown","text":"

    Warning

    Under Construction

    Refer to the following links for now:

    • https://docs.ros.org/en/foxy/Contributing/Code-Style-Language-Versions.html#markdown-restructured-text-docblocks
    • https://github.com/DavidAnson/markdownlint
    "},{"location":"contributing/coding-guidelines/languages/package-xml/","title":"package.xml","text":""},{"location":"contributing/coding-guidelines/languages/package-xml/#packagexml","title":"package.xml","text":"

    Warning

    Under Construction

    Refer to the following links for now:

    • https://ros.org/reps/rep-0149.html
    • https://github.com/tier4/pre-commit-hooks-ros#prettier-package-xml
    "},{"location":"contributing/coding-guidelines/languages/python/","title":"Python","text":""},{"location":"contributing/coding-guidelines/languages/python/#python","title":"Python","text":"

    Warning

    Under Construction

    Refer to the following links for now:

    • https://docs.ros.org/en/foxy/Contributing/Code-Style-Language-Versions.html#python
    • https://github.com/psf/black
    • https://github.com/PyCQA/isort
    "},{"location":"contributing/coding-guidelines/languages/shell-scripts/","title":"Shell scripts","text":""},{"location":"contributing/coding-guidelines/languages/shell-scripts/#shell-scripts","title":"Shell scripts","text":"

    Warning

    Under Construction

    Refer to the following links for now:

    • https://google.github.io/styleguide/shellguide.html
    • https://github.com/koalaman/shellcheck
    • https://github.com/mvdan/sh
    "},{"location":"contributing/coding-guidelines/ros-nodes/class-design/","title":"Class design","text":""},{"location":"contributing/coding-guidelines/ros-nodes/class-design/#class-design","title":"Class design","text":"

    We'll use the autoware_gnss_poser package as an example.

    "},{"location":"contributing/coding-guidelines/ros-nodes/class-design/#namespaces","title":"Namespaces","text":"
    namespace autoware::gnss_poser\n{\n...\n} // namespace autoware::gnss_poser\n
    • Everything should be under autoware::gnss_poser namespace.
    • Closing braces must contain a comment with the namespace name. (Automated with cpplint)
    "},{"location":"contributing/coding-guidelines/ros-nodes/class-design/#classes","title":"Classes","text":""},{"location":"contributing/coding-guidelines/ros-nodes/class-design/#nodes","title":"Nodes","text":""},{"location":"contributing/coding-guidelines/ros-nodes/class-design/#gnss_poser_nodehpp","title":"gnss_poser_node.hpp","text":"
    class GNSSPoser : public rclcpp::Node\n{\npublic:\nexplicit GNSSPoser(const rclcpp::NodeOptions & node_options);\n...\n}\n
    "},{"location":"contributing/coding-guidelines/ros-nodes/class-design/#gnss_poser_nodecpp","title":"gnss_poser_node.cpp","text":"
    GNSSPoser::GNSSPoser(const rclcpp::NodeOptions & node_options)\n: Node(\"gnss_poser\", node_options)\n{\n...\n}\n
    • The class name should be in CamelCase.
    • Node classes should inherit from rclcpp::Node.
• The constructor must be marked explicit.
    • The constructor must take rclcpp::NodeOptions as an argument.
    • Default node name:
      • should not have autoware_ prefix.
      • should NOT have _node suffix.
    • Rationale: Node names are used at runtime, and the output of ros2 node list will show only nodes anyway, so having _node is redundant.
      • Example: gnss_poser.
    "},{"location":"contributing/coding-guidelines/ros-nodes/class-design/#component-registration","title":"Component registration","text":"
    ...\n} // namespace autoware::gnss_poser\n\n#include <rclcpp_components/register_node_macro.hpp>\nRCLCPP_COMPONENTS_REGISTER_NODE(autoware::gnss_poser::GNSSPoser)\n
    • The component should be registered at the end of the gnss_poser_node.cpp file, outside the namespaces.
    "},{"location":"contributing/coding-guidelines/ros-nodes/class-design/#libraries","title":"Libraries","text":"

    Warning

    Under Construction

    "},{"location":"contributing/coding-guidelines/ros-nodes/console-logging/","title":"Console logging","text":""},{"location":"contributing/coding-guidelines/ros-nodes/console-logging/#console-logging","title":"Console logging","text":"

    ROS 2 logging is a powerful tool for understanding and debugging ROS nodes.

    This page focuses on how to design console logging in Autoware and shows several practical examples. To comprehensively understand how ROS 2 logging works, refer to the logging documentation.

    "},{"location":"contributing/coding-guidelines/ros-nodes/console-logging/#logging-use-cases-in-autoware","title":"Logging use cases in Autoware","text":"
    • Developers debug code by seeing the console logs.
    • Vehicle operators take appropriate risk-avoiding actions depending on the console logs.
    • Log analysts analyze the console logs that are recorded in rosbag files.

    To efficiently support these use cases, clean and highly visible logs are required. For that, several rules are defined below.

    "},{"location":"contributing/coding-guidelines/ros-nodes/console-logging/#rules","title":"Rules","text":""},{"location":"contributing/coding-guidelines/ros-nodes/console-logging/#choose-appropriate-severity-levels-required-non-automated","title":"Choose appropriate severity levels (required, non-automated)","text":""},{"location":"contributing/coding-guidelines/ros-nodes/console-logging/#rationale","title":"Rationale","text":"

    It's confusing if severity levels are inappropriate as follows:

    • Useless messages are marked as FATAL.
    • Very important error messages are marked as INFO.
    "},{"location":"contributing/coding-guidelines/ros-nodes/console-logging/#example","title":"Example","text":"

    Use the following criteria as a reference:

• DEBUG: Use this level to show debug information for developers. Note that logs with this level are hidden by default.
    • INFO: Use this level to notify events (cyclic notifications during initialization, state changes, service responses, etc.) to operators.
    • WARN: Use this level when a node can continue working correctly, but unintended behaviors might happen.
      • For example, \"path optimization failed but the previous data can be used\", \"the localization score is low\", etc.
    • ERROR: Use this level when a node can't continue working correctly, and unintended behaviors would happen.
      • For example, \"path optimization failed and the path is empty\", \"the vehicle will trigger an emergency stop\", etc.
    • FATAL: Use this level when the entire system can't continue working correctly, and the system must be stopped.
      • For example, \"the vehicle control ECU doesn't respond\", \"the system storage crashed\", etc.
    "},{"location":"contributing/coding-guidelines/ros-nodes/console-logging/#filter-out-unnecessary-logs-by-setting-logging-options-required-non-automated","title":"Filter out unnecessary logs by setting logging options (required, non-automated)","text":""},{"location":"contributing/coding-guidelines/ros-nodes/console-logging/#rationale_1","title":"Rationale","text":"

Some third-party nodes such as drivers may not follow Autoware's guidelines. If the logs are noisy, unnecessary logs should be filtered out.

    "},{"location":"contributing/coding-guidelines/ros-nodes/console-logging/#example_1","title":"Example","text":"

    Use the --log-level {level} option to change the minimum level of logs to be displayed:

    <launch>\n<!-- This outputs only FATAL level logs. -->\n<node pkg=\"demo_nodes_cpp\" exec=\"talker\" ros_args=\"--log-level fatal\" />\n</launch>\n

    If you want to disable only specific output targets, use the --disable-stdout-logs, --disable-rosout-logs, and/or --disable-external-lib-logs options:

    <launch>\n<!-- This outputs to rosout and disk. -->\n<node pkg=\"demo_nodes_cpp\" exec=\"talker\" ros_args=\"--disable-stdout-logs\" />\n</launch>\n
    <launch>\n<!-- This outputs to stdout. -->\n<node pkg=\"demo_nodes_cpp\" exec=\"talker\" ros_args=\"--disable-rosout-logs --disable-external-lib-logs\" />\n</launch>\n
    "},{"location":"contributing/coding-guidelines/ros-nodes/console-logging/#use-throttled-logging-when-the-log-is-unnecessarily-shown-repeatedly-required-non-automated","title":"Use throttled logging when the log is unnecessarily shown repeatedly (required, non-automated)","text":""},{"location":"contributing/coding-guidelines/ros-nodes/console-logging/#rationale_2","title":"Rationale","text":"

If tons of logs are shown on the console, people miss important messages.

    "},{"location":"contributing/coding-guidelines/ros-nodes/console-logging/#example_2","title":"Example","text":"

While waiting for some messages, throttled logs are usually enough. In such cases, a throttle interval of about 5 seconds is a good reference value.

    // Compliant\nvoid FooNode::on_timer() {\nif (!current_pose_) {\nRCLCPP_ERROR_THROTTLE(get_logger(), *get_clock(), 5000, \"Waiting for current_pose_.\");\nreturn;\n}\n}\n\n// Non-compliant\nvoid FooNode::on_timer() {\nif (!current_pose_) {\nRCLCPP_ERROR(get_logger(), \"Waiting for current_pose_.\");\nreturn;\n}\n}\n
    "},{"location":"contributing/coding-guidelines/ros-nodes/console-logging/#exception","title":"Exception","text":"

The following cases are acceptable even if the log is not throttled.

    • The message is really worth displaying every time.
    • The message level is DEBUG.
    "},{"location":"contributing/coding-guidelines/ros-nodes/console-logging/#do-not-depend-on-rclcppnode-in-core-library-classes-but-depend-only-on-rclcpplogginghpp-advisory-non-automated","title":"Do not depend on rclcpp::Node in core library classes but depend only on rclcpp/logging.hpp (advisory, non-automated)","text":""},{"location":"contributing/coding-guidelines/ros-nodes/console-logging/#rationale_3","title":"Rationale","text":"

    Core library classes, which contain reusable algorithms, may also be used for non-ROS platforms. When porting libraries to other platforms, fewer dependencies are preferred.

    "},{"location":"contributing/coding-guidelines/ros-nodes/console-logging/#example_3","title":"Example","text":"
    // Compliant\n#include <rclcpp/logging.hpp>\n\nclass FooCore {\npublic:\nexplicit FooCore(const rclcpp::Logger & logger) : logger_(logger) {}\n\nvoid process() {\nRCLCPP_INFO(logger_, \"message\");\n}\n\nprivate:\nrclcpp::Logger logger_;\n};\n\n// Compliant\n// Note that logs aren't published to `/rosout` if the logger name is different from the node name.\n#include <rclcpp/logging.hpp>\n\nclass FooCore {\nvoid process() {\nRCLCPP_INFO(rclcpp::get_logger(\"foo_core_logger\"), \"message\");\n}\n};\n\n\n// Non-compliant\n#include <rclcpp/node.hpp>\n\nclass FooCore {\npublic:\nexplicit FooCore(const rclcpp::NodeOptions & node_options) : node_(\"foo_core_node\", node_options) {}\n\nvoid process() {\nRCLCPP_INFO(node_.get_logger(), \"message\");\n}\n\nprivate:\nrclcpp::Node node_;\n};\n
    "},{"location":"contributing/coding-guidelines/ros-nodes/console-logging/#tips","title":"Tips","text":""},{"location":"contributing/coding-guidelines/ros-nodes/console-logging/#use-rqt_console-to-filter-logs","title":"Use rqt_console to filter logs","text":"

    To filter logs, using rqt_console is useful:

    ros2 run rqt_console rqt_console\n

    For more details, refer to ROS 2 Documentation.

    "},{"location":"contributing/coding-guidelines/ros-nodes/console-logging/#useful-marco-expressions","title":"Useful marco expressions","text":"

To debug a program, sometimes you need to see which functions and lines of code are executed. In that case, you can use the __FILE__, __LINE__ and __FUNCTION__ macros:

void FooNode::on_timer() {\nRCLCPP_DEBUG(get_logger(), \"file: %s, line: %d, function: %s\", __FILE__, __LINE__, __FUNCTION__);\n}\n

    The example output is as follows:

    [DEBUG] [1671720414.395456931] [foo]: file: /path/to/file.cpp, line: 100, function: on_timer

    "},{"location":"contributing/coding-guidelines/ros-nodes/coordinate-system/","title":"Coordinate system","text":""},{"location":"contributing/coding-guidelines/ros-nodes/coordinate-system/#coordinate-system","title":"Coordinate system","text":""},{"location":"contributing/coding-guidelines/ros-nodes/coordinate-system/#overview","title":"Overview","text":"

    The commonly used coordinate systems include the world coordinate system, the vehicle coordinate system, and the sensor coordinate system.

    • The world coordinate system is a fixed coordinate system that defines the physical space in the environment where the vehicle is located.
    • The vehicle coordinate system is the vehicle's own coordinate system, which defines the vehicle's position and orientation in the world coordinate system.
    • The sensor coordinate system is the sensor's own coordinate system, which is used to define the sensor's position and orientation in the vehicle coordinate system.
    "},{"location":"contributing/coding-guidelines/ros-nodes/coordinate-system/#how-coordinates-are-used-in-autoware","title":"How coordinates are used in Autoware","text":"

In Autoware, coordinate systems are typically used to represent the position and movement of vehicles and obstacles in space. Coordinate systems are commonly used for path planning, perception and control, and can help the vehicle decide how to avoid obstacles and plan a safe and efficient path of travel.

    1. Transformation of sensor data

  In Autoware, each sensor has a unique coordinate system and its data is expressed in terms of those coordinates. In order to correlate the independent data between different sensors, we need to find the positional relationship between each sensor and the vehicle body. Once the installation position of a sensor on the vehicle body is determined, it remains fixed during operation, so offline calibration can be used to determine the precise position of each sensor relative to the vehicle body.

    2. ROS TF2

  The TF2 system maintains a tree of coordinate transformations to represent the relationships between different coordinate systems. Each coordinate system is given a unique name and they are connected by coordinate transformations. For details on how to use TF2, refer to the TF2 tutorial.
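
  As a minimal illustration (a hedged sketch, not Autoware code), a node can query the transform between two frames through a tf2 buffer:

#include <geometry_msgs/msg/transform_stamped.hpp>\n#include <tf2_ros/buffer.h>\n#include <tf2_ros/transform_listener.h>\n\n// Hedged sketch (assumes an existing rclcpp::Node::SharedPtr named node):\n// look up the latest transform from the map frame to the base_link frame.\ntf2_ros::Buffer tf_buffer(node->get_clock());\ntf2_ros::TransformListener tf_listener(tf_buffer);\nconst geometry_msgs::msg::TransformStamped t =\n  tf_buffer.lookupTransform(\"map\", \"base_link\", tf2::TimePointZero);\n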

    "},{"location":"contributing/coding-guidelines/ros-nodes/coordinate-system/#tf-tree","title":"TF tree","text":"

    In Autoware, a common coordinate system structure is shown below:

    graph TD\n    /earth --> /map\n    /map --> /base_link\n    /base_link --> /imu\n    /base_link --> /lidar\n    /base_link --> /gnss\n    /base_link --> /radar\n    /base_link --> /camera_link\n    /camera_link --> /camera_optical_link
• earth: the earth coordinate system describes the position of any point on the earth in terms of geodetic longitude, latitude, and altitude. In Autoware, the earth frame is only used in the GnssInsPositionStamped message.
• map: the map coordinate system is used to represent the location of points on a local map. Geographic coordinates are mapped into a plane rectangular coordinate system using UTM or MGRS. The map frame's axes point to the East, North, Up directions as explained in Coordinate Axes Conventions.
• base_link: the vehicle coordinate system; the origin of the coordinate system is the center of the rear axle of the vehicle.
• imu, lidar, gnss, radar: these are sensor frames, related to the vehicle coordinate system through their mounting positions.
• camera_link: camera_link is the ROS standard camera coordinate system.
• camera_optical_link: camera_optical_link is the image standard camera coordinate system.
    "},{"location":"contributing/coding-guidelines/ros-nodes/coordinate-system/#estimating-the-base_link-frame-by-using-the-other-sensors","title":"Estimating the base_link frame by using the other sensors","text":"

Generally we don't have the localization sensors physically at the base_link frame, so the various sensors localize with respect to their own frames; let's call such a frame the sensor frame.

    We introduce a new frame naming convention: x_by_y:

    x: estimated frame name\ny: localization method/source\n

We cannot directly get the sensor frame, because we would need the EKF module to estimate the base_link frame first.

    Without the EKF module the best we can do is to estimate Map[map] --> sensor_by_sensor --> base_link_by_sensor using this sensor.

    "},{"location":"contributing/coding-guidelines/ros-nodes/coordinate-system/#example-by-the-gnssins-sensor","title":"Example by the GNSS/INS sensor","text":"

    For the integrated GNSS/INS we use the following frames:

    flowchart LR\n    earth --> Map[map] --> gnss_ins_by_gnss_ins --> base_link_by_gnss_ins

The gnss_ins_by_gnss_ins frame is obtained from the coordinates reported by the GNSS/INS sensor. The coordinates are converted to the map frame using the gnss_poser node.

    Finally gnss_ins_by_gnss_ins frame represents the position of the gnss_ins estimated by the gnss_ins sensor in the map.

Then, by using the static transformation between gnss_ins and the base_link frame, we can obtain the base_link_by_gnss_ins frame, which represents the base_link estimated by the gnss_ins sensor.
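
A hedged sketch of this composition using tf2 data types (the function and variable names are illustrative):

#include <tf2/LinearMath/Transform.h>\n\n// Illustrative only: compose map->gnss_ins_by_gnss_ins with the static\n// gnss_ins->base_link transform to obtain map->base_link_by_gnss_ins.\ntf2::Transform estimate_base_link_by_gnss_ins(\n  const tf2::Transform & map_to_gnss_ins,        // from the GNSS/INS measurement via gnss_poser\n  const tf2::Transform & gnss_ins_to_base_link)  // static transform from sensor calibration\n{\n  return map_to_gnss_ins * gnss_ins_to_base_link;\n}\n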

    References:

    • https://www.ros.org/reps/rep-0105.html#earth
    "},{"location":"contributing/coding-guidelines/ros-nodes/coordinate-system/#coordinate-axes-conventions","title":"Coordinate Axes Conventions","text":"

    We use the East, North, Up (ENU) coordinate axes convention by default throughout the stack.

    X+: East\nY+: North\nZ+: Up\n

    The position, orientation, velocity, acceleration are all defined in the same axis convention.

    Position reported by the GNSS/INS sensor is expected to be in the earth frame.

    Orientation, velocity, and acceleration reported by the GNSS/INS sensor are expected to be in the sensor frame, with axes parallel to the map frame.

    If roll, pitch, and yaw are provided, they correspond to rotations around the X, Y, and Z axes, respectively.

    Rotation around:\nX+: roll\nY+: pitch\nZ+: yaw\n
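
    For example, such roll/pitch/yaw values can be converted to a quaternion with tf2 (a minimal sketch; the function name is illustrative):

    #include <tf2/LinearMath/Quaternion.h>

    // Minimal sketch: convert roll, pitch, yaw (radians, about the X+, Y+, Z+ axes)
    // into a quaternion.
    tf2::Quaternion rpy_to_quaternion(double roll, double pitch, double yaw)
    {
      tf2::Quaternion q;
      q.setRPY(roll, pitch, yaw);
      return q;
    }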

    References:

    • https://www.ros.org/reps/rep-0103.html#axis-orientation
    "},{"location":"contributing/coding-guidelines/ros-nodes/coordinate-system/#how-they-can-be-created","title":"How they can be created","text":"
    1. Calibration of sensor

      The transformation between each sensor coordinate system and base_link can be obtained through sensor calibration. Please consult calibrating your sensors for instructions on how to calibrate your sensors.

    2. Localization

      The relationship between the base_link coordinate system and the map coordinate system is determined by the position and orientation of the vehicle, and can be obtained from the vehicle localization result.

    3. Geo-referencing of map data

      The geo-referencing information provides the transformation from the earth coordinate system to the local map coordinate system.

    "},{"location":"contributing/coding-guidelines/ros-nodes/directory-structure/","title":"Directory structure","text":""},{"location":"contributing/coding-guidelines/ros-nodes/directory-structure/#directory-structure","title":"Directory structure","text":"

    This document describes the directory structure of ROS nodes within Autoware.

    We'll use the package autoware_gnss_poser as an example.

    Note that this example does not reflect the actual autoware_gnss_poser and includes extra files and directories to demonstrate all possible package structures.

    "},{"location":"contributing/coding-guidelines/ros-nodes/directory-structure/#c-package","title":"C++ package","text":""},{"location":"contributing/coding-guidelines/ros-nodes/directory-structure/#entire-structure","title":"Entire structure","text":"
    • This is a reference on how the entire package might be structured.
    • A package may not have all the directories shown here.
    autoware_gnss_poser\n\u251c\u2500 package.xml\n\u251c\u2500 CMakeLists.txt\n\u251c\u2500 README.md\n\u2502\n\u251c\u2500 config\n\u2502   \u251c\u2500 gnss_poser.param.yaml\n\u2502   \u2514\u2500 another_non_ros_config.yaml\n\u2502\n\u251c\u2500 schema\n\u2502   \u2514\u2500 gnss_poser.schema.json\n\u2502\n\u251c\u2500 doc\n\u2502   \u251c\u2500 foo_document.md\n\u2502   \u2514\u2500 foo_diagram.svg\n\u2502\n\u251c\u2500 include  # for exporting headers\n\u2502   \u2514\u2500 autoware\n\u2502       \u2514\u2500 gnss_poser\n\u2502           \u2514\u2500 exported_header.hpp\n\u2502\n\u251c\u2500 src\n\u2502   \u251c\u2500 include\n\u2502   \u2502   \u251c\u2500 gnss_poser_node.hpp\n\u2502   \u2502   \u2514\u2500 foo.hpp\n\u2502   \u251c\u2500 gnss_poser_node.cpp\n\u2502   \u2514\u2500 bar.cpp\n\u2502\n\u251c\u2500 launch\n\u2502   \u251c\u2500 gnss_poser.launch.xml\n\u2502   \u2514\u2500 gnss_poser.launch.py\n\u2502\n\u2514\u2500 test\n    \u251c\u2500 test_foo.hpp  # or place under an `include` folder here\n    \u2514\u2500 test_foo.cpp\n
    "},{"location":"contributing/coding-guidelines/ros-nodes/directory-structure/#package-name","title":"Package name","text":"
    • All the packages in Autoware should be prefixed with autoware_.
    • Even if the package exports a node, the package name should NOT have the _node suffix.
    • The package name should be in snake_case.
    | Package Name | OK | Alternative |
    | --- | --- | --- |
    | path_smoother | ❌ | autoware_path_smoother |
    | autoware_trajectory_follower_node | ❌ | autoware_trajectory_follower |
    | autoware_geography_utils | ✅ | - |
    "},{"location":"contributing/coding-guidelines/ros-nodes/directory-structure/#package-folder","title":"Package folder","text":"
    autoware_gnss_poser\n\u251c\u2500 package.xml\n\u251c\u2500 CMakeLists.txt\n\u2514\u2500 README.md\n

    The package folder name should be the same as the package name.

    "},{"location":"contributing/coding-guidelines/ros-nodes/directory-structure/#packagexml","title":"package.xml","text":"
    • The package name should be entered within the <name> tag.
      • <name>autoware_gnss_poser</name>
    "},{"location":"contributing/coding-guidelines/ros-nodes/directory-structure/#cmakeliststxt","title":"CMakeLists.txt","text":"
    • The project() command should call the package name.
      • Example: project(autoware_gnss_poser)
    "},{"location":"contributing/coding-guidelines/ros-nodes/directory-structure/#exporting-a-composable-node-component-executables","title":"Exporting a composable node component executables","text":"

    For best practices and system efficiency, it is recommended to primarily use composable node components.

    This method facilitates easier deployment and maintenance within ROS environments.

    ament_auto_add_library(${PROJECT_NAME} SHARED\nsrc/gnss_poser_node.cpp\n)\n\nrclcpp_components_register_node(${PROJECT_NAME}\nPLUGIN \"autoware::gnss_poser::GNSSPoser\"\nEXECUTABLE ${PROJECT_NAME}_node\n)\n
    • If you are building:
      • only a single composable node component, the executable name should start with ${PROJECT_NAME}
      • multiple composable node components, the executable name is up to the developer.
    • All composable node component executables should have the _node suffix.
    "},{"location":"contributing/coding-guidelines/ros-nodes/directory-structure/#exporting-a-standalone-node-executable-without-composition-discouraged-for-most-cases","title":"Exporting a standalone node executable without composition (discouraged for most cases)","text":"

    Use of standalone executables should be limited to cases where specific needs such as debugging or tooling are required.

    Exporting composable node component executables is generally preferred for standard operational use due to its flexibility and scalability within the ROS ecosystem.

    Assuming:

    • src/gnss_poser.cpp has the GNSSPoser class.
    • src/gnss_poser_node.cpp has the main function.
    • There is no composable node component registration.
    ament_auto_add_library(${PROJECT_NAME} SHARED\nsrc/gnss_poser.cpp\n)\n\nament_auto_add_executable(${PROJECT_NAME}_node src/gnss_poser_node.cpp)\n
    • The node executable:
      • should have _node suffix.
      • should start with ${PROJECT_NAME}.
    "},{"location":"contributing/coding-guidelines/ros-nodes/directory-structure/#config-and-schema","title":"config and schema","text":"
    autoware_gnss_poser\n\u2502\u2500 config\n\u2502   \u251c\u2500 gnss_poser.param.yaml\n\u2502   \u2514\u2500 another_non_ros_config.yaml\n\u2514\u2500 schema\n    \u2514\u2500 gnss_poser.schema.json\n
    "},{"location":"contributing/coding-guidelines/ros-nodes/directory-structure/#config","title":"config","text":"
    • ROS parameter files use the extension .param.yaml.
    • Non-ROS parameter files use the extension .yaml.

    Rationale: Different linting rules are used for ROS parameters and non-ROS parameters.

    "},{"location":"contributing/coding-guidelines/ros-nodes/directory-structure/#schema","title":"schema","text":"

    Place parameter definition files. See Parameters for details.

    "},{"location":"contributing/coding-guidelines/ros-nodes/directory-structure/#doc","title":"doc","text":"
    autoware_gnss_poser\n\u2514\u2500 doc\n    \u251c\u2500 foo_document.md\n    \u2514\u2500 foo_diagram.svg\n

    Place documentation files and link them from the README file.

    "},{"location":"contributing/coding-guidelines/ros-nodes/directory-structure/#include-and-src","title":"include and src","text":"
    • Unless you specifically need to export headers, you shouldn't have an include directory under the package directory.
    • For most cases, follow Not exporting headers.
    • Library packages that export headers may follow Exporting headers.
    "},{"location":"contributing/coding-guidelines/ros-nodes/directory-structure/#not-exporting-headers","title":"Not exporting headers","text":"
    autoware_gnss_poser\n\u2514\u2500 src\n    \u251c\u2500 include\n    \u2502   \u251c\u2500 gnss_poser_node.hpp\n    \u2502   \u2514\u2500 foo.hpp\n    \u2502\u2500 gnss_poser_node.cpp\n    \u2514\u2500 bar.cpp\n\nOR\n\nautoware_gnss_poser\n\u2514\u2500 src\n    \u251c\u2500 gnss_poser_node.hpp\n    \u251c\u2500 gnss_poser_node.cpp\n    \u251c\u2500 foo.hpp\n    \u2514\u2500 bar.cpp\n
    • The source file exporting the node should:
      • have _node suffix.
        • Rationale: To distinguish from other source files.
      • NOT have autoware_ prefix.
        • Rationale: To avoid verbosity.
    • See Class design for more details on how to construct gnss_poser_node.hpp and gnss_poser_node.cpp files.
    • It is up to the developer how to organize the source files under src.
      • Note: The include folder under src is optional.
    "},{"location":"contributing/coding-guidelines/ros-nodes/directory-structure/#exporting-headers","title":"Exporting headers","text":"
    autoware_gnss_poser\n\u2514\u2500 include\n    \u2514\u2500 autoware\n        \u2514\u2500 gnss_poser\n            \u2514\u2500 exported_header.hpp\n
    • autoware_gnss_poser/include folder should contain ONLY the autoware folder.
      • Rationale: When installing ROS debian packages, the headers are copied to the /opt/ros/$ROS_DISTRO/include/ directory. This structure is used to avoid conflicts with non-Autoware packages.
    • autoware_gnss_poser/include/autoware folder should contain ONLY the gnss_poser folder.
      • Rationale: Similarly, this structure is used to avoid conflicts with other packages.
    • autoware_gnss_poser/include/autoware/gnss_poser folder should contain the header files to be exported.

    Note: If the ament_auto_package() command is used in the CMakeLists.txt file and the autoware_gnss_poser/include folder exists, this include folder will be exported to the install folder as part of ament_auto_package.cmake

    Reference: https://docs.ros.org/en/humble/How-To-Guides/Ament-CMake-Documentation.html#adding-targets

    "},{"location":"contributing/coding-guidelines/ros-nodes/directory-structure/#launch","title":"launch","text":"
    autoware_gnss_poser\n\u2514\u2500 launch\n    \u251c\u2500 gnss_poser.launch.xml\n    \u2514\u2500 gnss_poser.launch.py\n
    • You may have multiple launch files here.
    • Unless you have a specific reason, use the .launch.xml extension.
      • Rationale: While the .launch.py extension is more flexible, it comes with a readability cost.
    • Avoid autoware_ prefix in the launch file names.
      • Rationale: To avoid verbosity.
    "},{"location":"contributing/coding-guidelines/ros-nodes/directory-structure/#test","title":"test","text":"
    autoware_gnss_poser\n\u2514\u2500 test\n    \u251c\u2500 test_foo.hpp  # or place under an `include` folder here\n    \u2514\u2500 test_foo.cpp\n

    Place source files for testing. See unit testing for details.

    "},{"location":"contributing/coding-guidelines/ros-nodes/directory-structure/#python-package","title":"Python package","text":"

    Warning

    Under Construction

    "},{"location":"contributing/coding-guidelines/ros-nodes/launch-files/","title":"Launch files","text":""},{"location":"contributing/coding-guidelines/ros-nodes/launch-files/#launch-files","title":"Launch files","text":""},{"location":"contributing/coding-guidelines/ros-nodes/launch-files/#overview","title":"Overview","text":"

    Autoware uses the ROS 2 launch system to start up the software. Please see the official documentation to get a basic understanding of the ROS 2 launch system if you are not familiar with it.

    "},{"location":"contributing/coding-guidelines/ros-nodes/launch-files/#guideline","title":"Guideline","text":""},{"location":"contributing/coding-guidelines/ros-nodes/launch-files/#the-organization-of-launch-files-in-autoware","title":"The organization of launch files in Autoware","text":"

    Autoware mainly has two repositories related to launch file organization: autoware.universe and autoware_launch.

    "},{"location":"contributing/coding-guidelines/ros-nodes/launch-files/#autowareuniverse","title":"autoware.universe","text":"

    The autoware.universe repository contains the code of the main Autoware modules, and its launch directory is responsible for launching the nodes of each module. The Autoware software stack is organized based on the architecture, so the launch structure is kept similar to the architecture (splitting of files, namespaces). For example, the tier4_map_launch subdirectory corresponds to the map module, as do the other tier4_*_launch subdirectories.

    "},{"location":"contributing/coding-guidelines/ros-nodes/launch-files/#autoware_launch","title":"autoware_launch","text":"

    The autoware_launch repository refers to autoware.universe. The main purpose of introducing this repository is to provide a general entry point to start the Autoware software stack, i.e., calling the launch file of each module.

    • The autoware.launch.xml is the basic launch file for road driving scenarios.

      As can be seen from the content, the entire launch file is divided into several different modules, including Vehicle, System, Map, Sensing, Localization, Perception, Planning, Control, etc. By setting each launch_* argument to true or false, we can determine which modules are loaded.

    • The logging_simulator.launch.xml is often used together with a recorded ROS bag to check whether the target module (e.g., Sensing, Localization, or Perception) functions normally.
    • The planning_simulator.launch.xml is based on the Planning Simulator tool and is mainly used for testing/validation of the Planning module by simulating traffic rules, interactions with dynamic objects, and control commands to the ego vehicle.
    • The e2e_simulator.launch.xml is the launcher for the digital twin simulation environment.
    graph LR\nA11[logging_simulator.launch.xml]-.->A10[autoware.launch.xml]\nA12[planning_simulator.launch.xml]-.->A10[autoware.launch.xml]\nA13[e2e_simulator.launch.xml]-.->A10[autoware.launch.xml]\n\nA10-->A21[tier4_map_component.launch.xml]\nA10-->A22[xxx.launch.py]\nA10-->A23[tier4_localization_component.launch.xml]\nA10-->A24[xxx.launch.xml]\nA10-->A25[tier4_sensing_component.launch.xml]\n\nA23-->A30[localization.launch.xml]\nA30-->A31[pose_estimator.launch.xml]\nA30-->A32[util.launch.xml]\nA30-->A33[pose_twist_fusion_filter.launch.xml]\nA30-->A34[xxx.launch.xml]\nA30-->A35[twist_estimator.launch.xml]\n\nA33-->A41[stop_filter.launch.xml]\nA33-->A42[ekf_localizer.launch.xml]\nA33-->A43[twist2accel.launch.xml]
    "},{"location":"contributing/coding-guidelines/ros-nodes/launch-files/#add-a-new-package-in-autoware","title":"Add a new package in Autoware","text":"

    If a newly created package has an executable node, we expect a sample launch file and configuration within the package, just like the recommended structure shown in the directory structure page.

    In order to automatically load the newly added package when starting Autoware, you need to make some necessary changes to the corresponding launch file. For example, if using ICP instead of NDT as the pointcloud registration algorithm, you can modify the autoware.universe/launch/tier4_localization_launch/launch/pose_estimator/pose_estimator.launch.xml file to load the newly added ICP package.

    "},{"location":"contributing/coding-guidelines/ros-nodes/launch-files/#parameter-management","title":"Parameter management","text":"

    Another purpose of introducing the autoware_launch repository is to facilitate parameter management in Autoware. Consider this situation: if we want to integrate Autoware into a specific vehicle and modify parameters, we have to fork autoware.universe, which contains a lot of code other than parameters and is frequently updated by developers. By consolidating these parameters in autoware_launch, we can customize the Autoware parameters just by forking the autoware_launch repository. Taking the localization module as an example:

    1. All the "launch parameters" for the localization component are listed in the files under autoware_launch/autoware_launch/config/localization.
    2. The "launch parameters" file paths are set in the autoware_launch/autoware_launch/launch/components/tier4_localization_component.launch.xml file.
    3. In autoware.universe/launch/tier4_localization_launch/launch, the launch files load the "launch parameters" if the argument is given in the parameter configuration file. You can still use the default parameters in each package to launch tier4_localization_launch within autoware.universe.
    "},{"location":"contributing/coding-guidelines/ros-nodes/message-guidelines/","title":"Message guidelines","text":""},{"location":"contributing/coding-guidelines/ros-nodes/message-guidelines/#message-guidelines","title":"Message guidelines","text":""},{"location":"contributing/coding-guidelines/ros-nodes/message-guidelines/#format","title":"Format","text":"

    All messages should follow ROS message description specification.

    The accepted formats are:

    • .msg
    • .srv
    • .action
    "},{"location":"contributing/coding-guidelines/ros-nodes/message-guidelines/#naming","title":"Naming","text":"

    Under Construction

    Use Array as a suffix when creating a plural type of a message. This suffix is commonly used in common_interfaces.

    "},{"location":"contributing/coding-guidelines/ros-nodes/message-guidelines/#default-units","title":"Default units","text":"

    All the fields by default have the following units depending on their types:

    | type | default unit |
    | --- | --- |
    | distance | meter (m) |
    | angle | radians (rad) |
    | time | second (s) |
    | speed | m/s |
    | velocity | m/s |
    | acceleration | m/s² |
    | angular vel. | rad/s |
    | angular accel. | rad/s² |

    If a field in a message uses one of these default units, don't add any suffix or prefix denoting the unit.

    "},{"location":"contributing/coding-guidelines/ros-nodes/message-guidelines/#non-default-units","title":"Non-default units","text":"

    For non-default units, use the following suffixes:

    | type | non-default unit | suffix |
    | --- | --- | --- |
    | distance | nanometer | _nm |
    | distance | micrometer | _um |
    | distance | millimeter | _mm |
    | distance | kilometer | _km |
    | angle | degree (deg) | _deg |
    | time | nanosecond | _ns |
    | time | microsecond | _us |
    | time | millisecond | _ms |
    | time | minute | _min |
    | time | hour (h) | _hour |
    | velocity | km/h | _kmph |

    If a unit that you'd like to use doesn't exist here, create an issue/PR to add it to this list.

    "},{"location":"contributing/coding-guidelines/ros-nodes/message-guidelines/#message-field-types","title":"Message field types","text":"

    For list of types supported by the ROS interfaces see here.

    Also copied here for convenience:

    | Message Field Type | C++ equivalent |
    | --- | --- |
    | bool | bool |
    | byte | uint8_t |
    | char | char |
    | float32 | float |
    | float64 | double |
    | int8 | int8_t |
    | uint8 | uint8_t |
    | int16 | int16_t |
    | uint16 | uint16_t |
    | int32 | int32_t |
    | uint32 | uint32_t |
    | int64 | int64_t |
    | uint64 | uint64_t |
    | string | std::string |
    | wstring | std::u16string |
    "},{"location":"contributing/coding-guidelines/ros-nodes/message-guidelines/#arrays","title":"Arrays","text":"

    For arrays, use unbounded dynamic array type.

    Example:

    int32[] unbounded_integer_array\n
    "},{"location":"contributing/coding-guidelines/ros-nodes/message-guidelines/#enumerations","title":"Enumerations","text":"

    ROS 2 interfaces don't support enumerations directly.

    It is possible to define integer constants and assign them to a non-constant integer field.

    Constants are written in CONSTANT_CASE.

    Assign a different value to each constant.

    Example from shape_msgs/msg/SolidPrimitive.msg

    uint8 BOX=1\nuint8 SPHERE=2\nuint8 CYLINDER=3\nuint8 CONE=4\nuint8 PRISM=5\n\n# The type of the shape\nuint8 type\n
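
    On the receiving side, such constants can then be compared against the field, for example (a minimal C++ sketch):

    #include <shape_msgs/msg/solid_primitive.hpp>

    // Minimal sketch: branch on the enumeration-like constant of a received message.
    bool is_box(const shape_msgs::msg::SolidPrimitive & primitive)
    {
      return primitive.type == shape_msgs::msg::SolidPrimitive::BOX;
    }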
    "},{"location":"contributing/coding-guidelines/ros-nodes/message-guidelines/#comments","title":"Comments","text":"

    At the top of the message, briefly explain what the message contains and/or what it is used for. For an example, see sensor_msgs/msg/Imu.msg.

    If necessary, add line comments before the fields that explain the context and/or meaning.

    For simple fields like x, y, z, w you might not need to add comments.

    Even though it is not strictly checked, try not to exceed 100 characters per line.

    Example:

    # Number of times the vehicle performed an emergency brake\nuint32 count_emergency_brake\n\n# Seconds passed since the last emergency brake\nuint64 duration\n
    "},{"location":"contributing/coding-guidelines/ros-nodes/message-guidelines/#example-usages","title":"Example usages","text":"
    • Don't use unit suffixes for default types:
      • Bad: float32 path_length_m
      • Good: float32 path_length
    • Don't prefix the units:
      • Bad: float32 kmph_velocity_vehicle
      • Good: float32 velocity_vehicle_kmph
    • Use recommended suffixes if they are available in the table:
      • Bad: float32 velocity_vehicle_km_h
      • Good: float32 velocity_vehicle_kmph
    "},{"location":"contributing/coding-guidelines/ros-nodes/parameters/","title":"Parameters","text":""},{"location":"contributing/coding-guidelines/ros-nodes/parameters/#parameters","title":"Parameters","text":"

    Autoware ROS nodes declare parameters whose values are provided during node startup in the form of a parameter file. All the expected parameters with corresponding values should exist in the parameter file. Depending on the application, the parameter values might need to be modified.

    Find more information on parameters from the official ROS documentation:

    • Understanding ROS 2 Parameters
    • About ROS 2 Parameters
    "},{"location":"contributing/coding-guidelines/ros-nodes/parameters/#workflow","title":"Workflow","text":"

    A ROS package which uses the declare_parameter(...) function should:

    • use the declare_parameter(...) without a default value
    • create a parameter file
    • create a schema file

    The rationale behind this workflow is to have a verified single source of truth to pass to the ROS node and to be used in the web documentation. The approach reduces the risk of using invalid parameter values and makes maintenance of documentation easier. This is achieved by:

    • declare_parameter(...) throws an exception if an expected parameter is missing in the parameter file
    • the schema validates the parameter file in the CI and renders a parameter table, as depicted in the graphics below

      flowchart TD\n    NodeSchema[Schema file: *.schema.json]\n    ParameterFile[Parameter file: *.param.yaml]\n    WebDocumentation[Web documentation table]\n\n    NodeSchema -->|Validation| ParameterFile\n    NodeSchema -->|Generate| WebDocumentation

    Note: a parameter value can still be modified and bypass the validation, as there is no validation during runtime.

    "},{"location":"contributing/coding-guidelines/ros-nodes/parameters/#declare-parameter-function","title":"Declare Parameter Function","text":"

    It is the declare_parameter(...) function that sets the parameter values during node startup.

    declare_parameter<INSERT_TYPE>(\"INSERT_PARAMETER_1_NAME\"),\ndeclare_parameter<INSERT_TYPE>(\"INSERT_PARAMETER_N_NAME\")\n

    As no default_value is provided, the function throws an exception if a parameter is missing from the provided *.param.yaml file. Use a type from the C++ Type column in the table below for the declare_parameter(...) function, replacing INSERT_TYPE.

    | ParameterType Enum | C++ Type |
    | --- | --- |
    | PARAMETER_BOOL | bool |
    | PARAMETER_INTEGER | int64_t |
    | PARAMETER_DOUBLE | double |
    | PARAMETER_STRING | std::string |
    | PARAMETER_BYTE_ARRAY | std::vector<uint8_t> |
    | PARAMETER_BOOL_ARRAY | std::vector<bool> |
    | PARAMETER_INTEGER_ARRAY | std::vector<int64_t> |
    | PARAMETER_DOUBLE_ARRAY | std::vector<double> |
    | PARAMETER_STRING_ARRAY | std::vector<std::string> |

    The table has been derived from Parameter Type and Parameter Value.

    See example: Lidar Apollo Segmentation TVM Nodes declare function
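
    As a concrete illustration (a minimal sketch; the node and parameter names are hypothetical), a node constructor could declare its parameters without default values as follows:

    #include <rclcpp/rclcpp.hpp>
    #include <string>

    // Minimal sketch with hypothetical parameter names. Because no default value is given,
    // declare_parameter() throws an exception if the parameter is missing from the
    // provided *.param.yaml file.
    class FooNode : public rclcpp::Node
    {
    public:
      explicit FooNode(const rclcpp::NodeOptions & options) : Node("foo_node", options)
      {
        update_rate_hz_ = declare_parameter<double>("update_rate_hz");
        frame_id_ = declare_parameter<std::string>("frame_id");
      }

    private:
      double update_rate_hz_{0.0};
      std::string frame_id_;
    };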

    "},{"location":"contributing/coding-guidelines/ros-nodes/parameters/#parameter-file","title":"Parameter File","text":"

    The parameter file is minimal as there is no need to provide the user with additional information, e.g., description or type. This is because the associated schema file provides the additional information. Use the template below as a starting point for a ROS node.

    /**:\nros__parameters:\nINSERT_PARAMETER_1_NAME: INSERT_PARAMETER_1_VALUE\nINSERT_PARAMETER_N_NAME: INSERT_PARAMETER_N_VALUE\n

    Note: /** is used instead of the explicit node namespace; this allows the parameter file to be passed to a ROS node that has been remapped.

    To adapt the template to the ROS node, replace each INSERT_PARAMETER_..._NAME and INSERT_PARAMETER_..._VALUE for all parameters. Each declare_parameter(...) takes one parameter as input. All the parameter files should have the .param.yaml suffix so that the auto-format can be applied properly.

    Autoware has the following two types of parameter files for ROS packages:

    • Node parameter file
      • Node parameter files store the default parameters provided for each package in Autoware.
        • For example, the parameter of behavior_path_planner
      • All nodes in Autoware must have a parameter file if ROS parameters are declared in the node.
      • For FOO_package, the parameter file is expected to be stored in FOO_package/config.
      • The launch file of an individual package must load its node parameter file by default:
    <launch>\n<arg name=\"foo_node_param_path\" default=\"$(find-pkg-share FOO_package)/config/foo_node.param.yaml\" />\n\n<node pkg=\"FOO_package\" exec=\"foo_node\">\n...\n    <param from=\"$(var foo_node_param_path)\" />\n</node>\n</launch>\n
    • Launch parameter file
      • When a user creates a launch package for their vehicle, they should copy the node parameter files of the nodes called in the launch file as "launch parameter files".
      • Launch parameter files are then customized specifically for the user's vehicle.
        • For example, the customized parameter of behavior_path_planner stored under autoware_launch
      • The examples for launch parameter files are stored under autoware_launch.
    "},{"location":"contributing/coding-guidelines/ros-nodes/parameters/#json-schema","title":"JSON Schema","text":"

    JSON Schema is used to validate the parameter file(s), ensuring that they have the correct structure and content. Using JSON Schema for this purpose is considered best practice for cloud-native development. The schema template below shall be used as a starting point when defining the schema for a ROS node.

    {\n\"$schema\": \"http://json-schema.org/draft-07/schema#\",\n\"title\": \"INSERT_TITLE\",\n\"type\": \"object\",\n\"definitions\": {\n\"INSERT_ROS_NODE_NAME\": {\n\"type\": \"object\",\n\"properties\": {\n\"INSERT_PARAMETER_1_NAME\": {\n\"type\": \"INSERT_TYPE\",\n\"description\": \"INSERT_DESCRIPTION\",\n\"default\": \"INSERT_DEFAULT\",\n\"INSERT_BOUND_CONDITION(S)\": INSERT_BOUND_VALUE(S)\n},\n\"INSERT_PARAMETER_N_NAME\": {\n\"type\": \"INSERT_TYPE\",\n\"description\": \"INSERT_DESCRIPTION\",\n\"default\": \"INSERT_DEFAULT\",\n\"INSERT_BOUND_CONDITION(S)\": INSERT_BOUND_VALUE(S)\n}\n},\n\"required\": [\"INSERT_PARAMETER_1_NAME\", \"INSERT_PARAMETER_N_NAME\"],\n\"additionalProperties\": false\n}\n},\n\"properties\": {\n\"/**\": {\n\"type\": \"object\",\n\"properties\": {\n\"ros__parameters\": {\n\"$ref\": \"#/definitions/INSERT_ROS_NODE_NAME\"\n}\n},\n\"required\": [\"ros__parameters\"],\n\"additionalProperties\": false\n}\n},\n\"required\": [\"/**\"],\n\"additionalProperties\": false\n}\n

    The schema file path is INSERT_PATH_TO_PACKAGE/schema/ and the schema file name is INSERT_NODE_NAME.schema.json. To adapt the template to the ROS node, replace each INSERT_... and add all parameters 1..N.

    See example: Image Projection Based Fusion - Pointpainting schema

    "},{"location":"contributing/coding-guidelines/ros-nodes/parameters/#attributes","title":"Attributes","text":"

    Parameters have several attributes, some of which are required and some optional. The optional attributes are highly encouraged when applicable, as they provide useful information about a parameter and can help ensure its value stays within bounds.

    "},{"location":"contributing/coding-guidelines/ros-nodes/parameters/#required","title":"Required","text":"
    • name
    • type
      • see JSON Schema types
    • description
    "},{"location":"contributing/coding-guidelines/ros-nodes/parameters/#optional","title":"Optional","text":"
    • default
      • a tested and verified value, see JSON Schema default
    • bound(s)
      • type dependent, e.g., integer, range and size
    "},{"location":"contributing/coding-guidelines/ros-nodes/parameters/#tips-and-tricks","title":"Tips and Tricks","text":"

    Using well-established standards enables the use of conventional tooling. Below is an example of how to link a schema to the parameter file(s) using VS Code. This gives a developer convenient features such as auto-completion and parameter bound validation.

    In the root directory of the project, create a .vscode folder with two files: extensions.json containing

    {\n\"recommendations\": [\"redhat.vscode-yaml\"]\n}\n

    and settings.json containing

    {\n\"yaml.schemas\": {\n\"./INSERT_PATH_TO_PACKAGE/schema/INSERT_NODE_NAME.schema.json\": \"**/INSERT_NODE_NAME/config/*.param.yaml\"\n}\n}\n

    The RedHat YAML extension enables validation of YAML files using JSON Schema and the \"yaml.schemas\" setting associates the *.schema.json file with all *.param.yaml files in the config/ folder.

    "},{"location":"contributing/coding-guidelines/ros-nodes/task-scheduling/","title":"Task scheduling","text":""},{"location":"contributing/coding-guidelines/ros-nodes/task-scheduling/#task-scheduling","title":"Task scheduling","text":"

    Warning

    Under Construction

    "},{"location":"contributing/coding-guidelines/ros-nodes/topic-namespaces/","title":"Topic namespaces","text":""},{"location":"contributing/coding-guidelines/ros-nodes/topic-namespaces/#topic-namespaces","title":"Topic namespaces","text":""},{"location":"contributing/coding-guidelines/ros-nodes/topic-namespaces/#overview","title":"Overview","text":"

    ROS allows topics, parameters, and nodes to be namespaced, which provides the following benefits:

    • Multiple instances of the same node type will not cause naming clashes.
    • Topics published by a node can be automatically namespaced with the node's namespace providing a meaningful and easily-visible connection.
    • Keeps from cluttering the root namespace.
    • Helps to maintain separation-of-concerns.

    This page focuses on how to use namespaces in Autoware and shows some useful examples. For basic information on topic namespaces, refer to this tutorial.

    "},{"location":"contributing/coding-guidelines/ros-nodes/topic-namespaces/#how-topics-should-be-named-in-node","title":"How topics should be named in node","text":"

    Autoware divides nodes into the following functional categories, and adds a top-level namespace to each node according to its category.

    • localization
    • perception
    • planning
    • control
    • sensing
    • vehicle
    • map
    • system

    When a node is run in a namespace, all topics which that node publishes are given that same namespace. All nodes in the Autoware stack must support namespaces by avoiding practices such as publishing topics in the global namespace.

    In general, topics should be namespaced based on the function of the node which produces them and not the node (or nodes) which consume them.

    Classify topics as input or output topics depending on whether they are subscribed to or published by the node. In the node, an input topic is named input/topic_name and an output topic is named output/topic_name.

    Configure the topics in the node's launch file. Taking the joy_controller node as an example, the input and output topics are set and remapped in the joy_controller.launch.xml file as follows.

    <launch>\n<arg name=\"input_joy\" default=\"/joy\"/>\n<arg name=\"input_odometry\" default=\"/localization/kinematic_state\"/>\n\n<arg name=\"output_control_command\" default=\"/external/$(var external_cmd_source)/joy/control_cmd\"/>\n<arg name=\"output_external_control_command\" default=\"/api/external/set/command/$(var external_cmd_source)/control\"/>\n<arg name=\"output_shift\" default=\"/api/external/set/command/$(var external_cmd_source)/shift\"/>\n<arg name=\"output_turn_signal\" default=\"/api/external/set/command/$(var external_cmd_source)/turn_signal\"/>\n<arg name=\"output_heartbeat\" default=\"/api/external/set/command/$(var external_cmd_source)/heartbeat\"/>\n<arg name=\"output_gate_mode\" default=\"/control/gate_mode_cmd\"/>\n<arg name=\"output_vehicle_engage\" default=\"/vehicle/engage\"/>\n\n<node pkg=\"joy_controller\" exec=\"joy_controller\" name=\"joy_controller\" output=\"screen\">\n<remap from=\"input/joy\" to=\"$(var input_joy)\"/>\n<remap from=\"input/odometry\" to=\"$(var input_odometry)\"/>\n\n<remap from=\"output/control_command\" to=\"$(var output_control_command)\"/>\n<remap from=\"output/external_control_command\" to=\"$(var output_external_control_command)\"/>\n<remap from=\"output/shift\" to=\"$(var output_shift)\"/>\n<remap from=\"output/turn_signal\" to=\"$(var output_turn_signal)\"/>\n<remap from=\"output/gate_mode\" to=\"$(var output_gate_mode)\"/>\n<remap from=\"output/heartbeat\" to=\"$(var output_heartbeat)\"/>\n<remap from=\"output/vehicle_engage\" to=\"$(var output_vehicle_engage)\"/>\n</node>\n</launch>\n
    "},{"location":"contributing/coding-guidelines/ros-nodes/topic-namespaces/#topic-names-in-the-code","title":"Topic names in the code","text":"
    1. Have ~ so that the namespace from the launch configuration is applied (the topic name should not start from the root /).

    2. Have the ~/input or ~/output namespace before the topic name used to communicate with other nodes.

      e.g., in the obstacle_avoidance_planner node, topic names of the form ~/input/topic_name are used to subscribe to topics.

      objects_sub_ = create_subscription<PredictedObjects>(\n\"~/input/objects\", rclcpp::QoS{10},\nstd::bind(&ObstacleAvoidancePlanner::onObjects, this, std::placeholders::_1));\n

      e.g., in the obstacle_avoidance_planner node, topic names of the form ~/output/topic_name are used to publish topics.

      traj_pub_ = create_publisher<Trajectory>(\"~/output/path\", 1);\n
    3. Topics for visualization or debugging purposes should have the ~/debug/ namespace.

      e.g., in the obstacle_avoidance_planner node, topic names of the form ~/debug/topic_name are used to publish debug or visualization information.

      debug_markers_pub_ =\ncreate_publisher<visualization_msgs::msg::MarkerArray>(\"~/debug/marker\", durable_qos);\n\ndebug_msg_pub_ =\ncreate_publisher<tier4_debug_msgs::msg::StringStamped>(\"~/debug/calculation_time\", 1);\n

      The namespace configured in the launch file is prepended to these topic names, so the resulting topic names are as follows:

      /planning/scenario_planning/lane_driving/motion_planning/obstacle_avoidance_planner/debug/marker /planning/scenario_planning/lane_driving/motion_planning/obstacle_avoidance_planner/debug/calculation_time

    4. Rationale: we want topic names to be remappable and configurable from launch files.

    "},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/","title":"Topic message handling guideline","text":""},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/#topic-message-handling-guideline","title":"Topic message handling guideline","text":""},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/#introduction","title":"Introduction","text":"

    This is the coding guideline for topic message handling in Autoware. It introduces a recommended manner that differs from the conventional one, which is roughly explained in the Discussions page. Refer to that page to understand the basic concept of the recommended manner. You can find sample source code in ros2_subscription_examples referred to from this document.

    "},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/#conventional-message-handling-manner","title":"Conventional message handling manner","text":"

    First, let us look at the conventional manner of handling messages that is commonly used. The ROS 2 Tutorials are one of the most cited references for ROS 2 applications, including Autoware. They implicitly recommend that each message received by a subscription should be referred to and processed by a dedicated callback function. Autoware follows that manner thoroughly.

      steer_sub_ = create_subscription<SteeringReport>(\n\"input/steering\", 1,\n[this](SteeringReport::SharedPtr msg) { current_steer_ = msg->steering_tire_angle; });\n

    In the code above, when a topic message named input/steering is received, the anonymous function {current_steer_ = msg->steering_tire_angle;} is executed as a callback in a thread. The callback function is always executed when a message is received, which wastes computing resources if the message is not always needed. Besides, waking up a thread incurs computational overhead.

    "},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/#recommended-manner","title":"Recommended manner","text":"

    This section introduces a recommended manner of taking a message with the Subscription->take() method only when the message is needed. In the sample code below, the Subscription->take() method is called during the execution of some callback function (for example, a timer callback); in most cases it is called right before a received message is consumed by the main logic. In this manner, a topic message is retrieved from the subscription queue, the queue embedded in the subscription object, instead of being delivered through a callback function. To be precise, you have to program your code so that the subscription callback function is not called automatically.

      SteeringReport msg;\nrclcpp::MessageInfo msg_info;\nif (sub_->take(msg, msg_info)) {\n// processing and publishing after this\n

    Using this manner has the following benefits.

    • It can reduce the number of calls to subscription callback functions
    • There is no need to take a topic message from a subscription that the main logic does not consume
    • There is no mandatory thread wake-up for the callback function, which would otherwise lead to multi-threaded programming, data races, and exclusive locking
    "},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/#manners-to-handle-topic-message-data","title":"Manners to handle topic message data","text":"

    This section introduces four manners, including the recommended ones.

    "},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/#1-obtain-data-by-calling-subscription-take","title":"1. Obtain data by calling Subscription->take()","text":"

    To use the recommended manner based on Subscription->take(), you basically need to do the two things below.

    1. Prevent a callback function from being called when a topic message is received
    2. Call take() method of a subscription object when a topic message is needed

    You can see an example of the typical use of take() method in ros2_subscription_examples/simple_examples/src/timer_listener.cpp.

    "},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/#prevent-calling-a-callback-function","title":"Prevent calling a callback function","text":"

    To prevent a callback function from being called automatically, the callback function has to belong to a callback group whose callback functions are not added to any executor. According to the API specification of create_subscription, registering a callback function to an rclcpp::Subscription based object is mandatory even if the callback function performs no operation. Here is a sample code snippet from ros2_subscription_examples/simple_examples/src/timer_listener.cpp.

        rclcpp::CallbackGroup::SharedPtr cb_group_not_executed = this->create_callback_group(\nrclcpp::CallbackGroupType::MutuallyExclusive, false);\nauto subscription_options = rclcpp::SubscriptionOptions();\nsubscription_options.callback_group = cb_group_not_executed;\n\nrclcpp::QoS qos(rclcpp::KeepLast(10));\nif (use_transient_local) {\nqos = qos.transient_local();\n}\n\nsub_ = create_subscription<std_msgs::msg::String>(\"chatter\", qos, not_executed_callback, subscription_options);\n

    In the code above, cb_group_not_executed is created by calling create_callback_group with the second argument false. Any callback function which belongs to the callback group will not be called by an executor. If the callback_group member of subscription_options is set to cb_group_not_executed, then not_executed_callback will not be called when a corresponding topic message chatter is received. The second argument to create_callback_group is defined as follows.

    rclcpp::CallbackGroup::SharedPtr create_callback_group(rclcpp::CallbackGroupType group_type, \\\n                                  bool automatically_add_to_executor_with_node = true)\n

    When automatically_add_to_executor_with_node is set to true, callback functions included in a node that is added to an executor will be automatically called by the executor.

    "},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/#call-take-method-of-subscription-object","title":"Call take() method of Subscription object","text":"

    To take a topic message from a Subscription based object, call the take() method at the desired time. Here is a sample code snippet from ros2_subscription_examples/simple_examples/src/timer_listener.cpp using the take() method.

      std_msgs::msg::String msg;\nrclcpp::MessageInfo msg_info;\nif (sub_->take(msg, msg_info)) {\nRCLCPP_INFO(this->get_logger(), \"Catch message\");\nRCLCPP_INFO(this->get_logger(), \"I heard: [%s]\", msg.data.c_str());\n

    In the code above, take(msg, msg_info) is called on the sub_ object, which is instantiated from the rclcpp::Subscription class, inside a timer-driven callback function. msg and msg_info hold the message body and its metadata, respectively. If there is a message in the subscription queue when take(msg, msg_info) is called, the message is copied to msg and take(msg, msg_info) returns true; in this case, the code above prints the string data of the message via RCLCPP_INFO. If no message is taken from the subscription, take(msg, msg_info) returns false. If the size of the subscription queue is greater than one and there are two or more messages in the queue when take(msg, msg_info) is called, the oldest message is copied to msg. If the size of the queue is one, the latest message is always obtained.

    Note

    You can check the presence of an incoming message with the return value of the take() method. However, be aware of the destructive nature of take(): it modifies the subscription queue, it is irreversible, and there is no undo operation. Checking for an incoming message with take() alone therefore always changes the subscription queue. If you want to check without changing the subscription queue, rclcpp::WaitSet is recommended. Refer to [supplement] Use rclcpp::WaitSet for more detail.

    Note

    The take() method only obtains a message that is passed through DDS as inter-process communication. You must not use it for intra-process communication, because intra-process communication is based on a different software stack within rclcpp. Refer to [supplement] Obtain a received message through intra-process communication in case of intra-process communication.

    "},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/#11-obtain-serialized-message-from-subscription","title":"1.1 Obtain Serialized Message from Subscription","text":"

    ROS 2 provides the serialized message functionality, which supports communication with arbitrary message types, as described in Class SerializedMessage. It is used by topic_state_monitor in Autoware. You have to use the take_serialized() method instead of the take() method to obtain an rclcpp::SerializedMessage based message from a subscription.

    Here is a sample code snippet from ros2_subscription_examples/simple_examples/src/timer_listener_serialized_message.cpp.

          // receive the serialized message.\nrclcpp::MessageInfo msg_info;\nauto msg = sub_->create_serialized_message();\n\nif (sub_->take_serialized(*msg, msg_info) == false) {\nreturn;\n}\n

    In the code above, msg is created by create_serialized_message() to store a received message, and its type is std::shared_ptr<rclcpp::SerializedMessage>. You can obtain a message of type rclcpp::SerializedMessage using the take_serialized() method. Note that the take_serialized() method needs reference type data as its first argument. Since msg is a pointer, *msg should be passed as the first argument to take_serialized().
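
    If the concrete message type is known on the receiving side, the obtained rclcpp::SerializedMessage can afterwards be deserialized, for example (a minimal sketch using std_msgs/msg/String; the function name is illustrative):

    #include <rclcpp/serialization.hpp>
    #include <rclcpp/serialized_message.hpp>
    #include <std_msgs/msg/string.hpp>

    // Minimal sketch: deserialize a received rclcpp::SerializedMessage into a typed message.
    std_msgs::msg::String deserialize_string(const rclcpp::SerializedMessage & serialized_msg)
    {
      static rclcpp::Serialization<std_msgs::msg::String> serializer;
      std_msgs::msg::String msg;
      serializer.deserialize_message(&serialized_msg, &msg);
      return msg;
    }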

    Note

    ROS 2's rclcpp supports both rclcpp::LoanedMessage and rclcpp::SerializedMessage. If zero-copy communication via loaned messages is introduced to Autoware, the take_loaned() method should be used instead for communication via loaned messages. In this document, the explanation of the take_loaned() method is omitted because it is not used in Autoware at this time (May 2024).

    "},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/#2-obtain-multiple-data-stored-in-subscription-queue","title":"2. Obtain multiple data stored in Subscription Queue","text":"

    A subscription object can hold multiple messages in its queue if a queue size larger than one is configured in the QoS setting. The conventional manner using a callback function forces the callback to be executed once per message; in other words, a single cycle of the callback function processes a single message. Note that with the conventional manner, if there are one or more messages in the subscription queue, the oldest one is taken and a thread is assigned to execute the callback function, and this continues until the queue is empty. The take() method alleviates this limitation: it can be called in multiple iterations, so that a single cycle of a callback function processes multiple messages taken by take().

    Here is a sample code snippet, taken from ros2_subscription_examples/simple_examples/src/timer_batch_listener.cpp, which calls the take() method repeatedly in a single cycle of a callback function.

          std_msgs::msg::String msg;\nrclcpp::MessageInfo msg_info;\nwhile (sub_->take(msg, msg_info))\n{\nRCLCPP_INFO(this->get_logger(), \"Catch message\");\nRCLCPP_INFO(this->get_logger(), \"I heard: [%s]\", msg.data.c_str());\n

    In the code above, while(sub->take(msg, msg_info)) continues to take messages from the subscription queue until the queue is empty, and each taken message is processed per iteration. Note that you must determine the size of the subscription queue by considering both the invocation frequency of the callback function and the reception frequency of the messages. For example, if the callback function is invoked at 10 Hz and topic messages are received at 50 Hz, the size of the subscription queue must be at least 5 to avoid losing received messages.

    Assigning a thread to execute a callback function per message causes performance overhead. You can use the manner introduced in this section to avoid that overhead. It is effective when there is a large difference between the reception frequency and the consumption frequency, for example when a message such as a CAN message is received at more than 100 Hz while the user logic consumes messages at a slower frequency such as 10 Hz. In such a case, the user logic should retrieve the required number of messages with the take() method to avoid the unexpected overhead.
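
    For instance, under the rates mentioned above (reception at 50 Hz, consumption at 10 Hz), the required queue depth can be derived and used for the subscription QoS as in the minimal sketch below (the function name is illustrative):

    #include <cmath>
    #include <cstddef>
    #include <rclcpp/rclcpp.hpp>

    // Minimal sketch (illustrative): if messages arrive at reception_hz but are consumed by a
    // timer running at consumption_hz, the subscription queue needs a depth of at least
    // reception_hz / consumption_hz (e.g., 50 / 10 = 5) to avoid losing messages between
    // two timer callbacks.
    rclcpp::QoS make_batch_qos(double reception_hz, double consumption_hz)
    {
      const auto depth = static_cast<std::size_t>(std::ceil(reception_hz / consumption_hz));
      return rclcpp::QoS(rclcpp::KeepLast(depth));
    }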

    "},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/#3-obtain-data-by-calling-subscription-take-and-then-call-a-callback-function","title":"3. Obtain data by calling Subscription->take and then call a callback function","text":"

    You can combine the take() (strictly take_type_erased()) method and the callback function to process received messages in a consistent way. Using this combination does not require waking up a thread. Here is a sample code snippet from ros2_subscription_examples/simple_examples/src/timer_listener_using_callback.cpp.

          auto msg = sub_->create_message();\nrclcpp::MessageInfo msg_info;\nif (sub_->take_type_erased(msg.get(), msg_info)) {\nsub_->handle_message(msg, msg_info);\n

    In the code above, a message is taken by the take_type_erased() method before the registered callback function is called via the handle_message() method. Note that you must use take_type_erased() instead of take(). take_type_erased() needs a void pointer as its first argument, so the get() method is used to obtain the raw pointer from msg, whose type is std::shared_ptr<void>. Then the handle_message() method is called with the obtained message, and the registered callback function is invoked inside handle_message(). You don't need to care about the concrete message type passed to take_type_erased() and handle_message(); you can simply define the message variable as auto msg = sub_->create_message();. You can also refer to the API documentation for create_message(), take_type_erased() and handle_message().

    "},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/#4-obtain-data-by-a-callback-function","title":"4. Obtain data by a callback function","text":"

    The conventional manner typically used in ROS 2 applications, referencing a message through a callback function, is also available. If you don't use a callback group with automatically_add_to_executor_with_node = false, a registered callback function will be called automatically by an executor when a topic message is received. One of the advantages of this manner is that you don't have to care whether a topic message is passed through inter-process or intra-process communication. Remember that take() can only be used for inter-process communication via DDS, while a different mechanism provided by rclcpp is used for intra-process communication.

    "},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/#appendix","title":"Appendix","text":"

    A callback function is used to obtain a topic message in most ROS 2 applications; it is almost a rule or a custom. As this page explains, you can use the Subscription->take() method to obtain a topic message without calling a subscription callback function. This manner is also documented in Template Class Subscription \u2014 rclcpp 16.0.8 documentation.

    Many ROS 2 users may hesitate to use the take() method because they are not familiar with it and documentation about it is scarce, but it is widely used in the rclcpp::Executor implementation, as shown in the excerpt from rclcpp/executor.cpp below. So it turns out that you are indirectly using the take() method, whether you know it or not.

        std::shared_ptr<void> message = subscription->create_message();\ntake_and_do_error_handling(\n\"taking a message from topic\",\nsubscription->get_topic_name(),\n[&]() {return subscription->take_type_erased(message.get(), message_info);},\n[&]() {subscription->handle_message(message, message_info);});\n

    Note

    Strictly speaking, the take_type_erased() method is called in the executor, but not the take() method.

    However, take_type_erased() is the type-erased core of take(); take() internally calls take_type_erased().

    When an rclcpp::Executor based object, an executor, is programmed to call a callback function, the executor itself determines when to do so. Because the executor calls the callback function on a best-effort basis, a message is not guaranteed to be referenced or processed even though it has been received. Therefore it is desirable to call the take() method directly to ensure that a message is referenced or processed at the intended time.

    As of May 2024, the recommended manners are beginning to be used in Autoware Universe. See the following PR if you want an example in Autoware Universe.

    feat(tier4_autoware_utils, obstacle_cruise): change to read topic by polling #6702

    "},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/supp-intra-process-comm/","title":"[supplement] Obtain a received message through intra-process communication","text":""},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/supp-intra-process-comm/#supplement-obtain-a-received-message-through-intra-process-communication","title":"[supplement] Obtain a received message through intra-process communication","text":""},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/supp-intra-process-comm/#topic-message-handling-in-intra-process-communication","title":"Topic message handling in intra-process communication","text":"

    rclcpp supports intra-process communication. As explained in the Topic message handling guideline, the take() method cannot be used for intra-process communication; it cannot return a topic message received through intra-process communication. However, methods for intra-process communication are provided, similar to the methods for inter-process communication described in obtain data by calling Subscription->take and then call a callback function. The take_data() method is provided to obtain received data in the case of intra-process communication, and the received data must then be processed through the execute() method. Because the return value of take_data() is based on a complicated data structure, the execute() method should be used along with the take_data() method. Refer to Template Class SubscriptionIntraProcess \u2014 rclcpp 16.0.8 documentation for more detail on take_data() and execute().

    "},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/supp-intra-process-comm/#coding-manner","title":"Coding manner","text":"

    To handle messages via intra-process communication, call the take_data() method and then the execute() method, as below.

    // Execute any entities of the Waitable that may be ready\nstd::shared_ptr<void> data = waitable.take_data();\nwaitable.execute(data);\n

    Here is a sample program in ros2_subscription_examples/intra_process_talker_listener/src/timer_listener_intra_process.cpp at main \u00b7 takam5f2/ros2_subscription_examples. You can run the program as below. If you set use_intra_process_comms to true, intra-process communication is performed; if you set it to false, inter-process communication is performed.

    ros2 launch intra_process_talker_listener talker_listener_intra_process.launch.py use_intra_process_comms:=true\n

    Here is a snippet of ros2_subscription_examples/intra_process_talker_listener/src/timer_listener_intra_process.cpp at main \u00b7 takam5f2/ros2_subscription_examples.

          // check if intra-process communication is enabled.\nif (this->get_node_options().use_intra_process_comms()){\n\n// get the intra-process subscription's waitable.\nauto intra_process_sub = sub_->get_intra_process_waitable();\n\n// check if the waitable has data.\nif (intra_process_sub->is_ready(nullptr) == true) {\n\n// take the data and execute the callback.\nstd::shared_ptr<void> data = intra_process_sub->take_data();\n\nRCLCPP_INFO(this->get_logger(), \" Intra-process communication is performed.\");\n\n// execute the callback.\nintra_process_sub->execute(data);\n

    Below is a line-by-line explanation of the above code.

    • if (this->get_node_options().use_intra_process_comms()){

      • The statement checks whether intra-process communication is enabled by using NodeOptions
    • auto intra_process_sub = sub_->get_intra_process_waitable();

      • The statement obtains the underlying object that performs intra-process communication
    • if (intra_process_sub->is_ready(nullptr) == true) {

      • The statement checks if a message has already been received through intra-process communication
      • The argument of is_ready() is of type rcl_wait_set_t, but because the argument is not used within is_ready(), nullptr is passed for the moment.
        • Using nullptr here is currently a workaround and carries no particular intent.
    • std::shared_ptr<void> data = intra_process_sub->take_data();

      • This statement obtains a topic message from the subscription used for intra-process communication.
      • intra_process_sub->take_data() does not return a boolean value indicating whether a message is received successfully or not, so it is necessary to check this by calling is_ready() beforehand
    • intra_process_sub->execute(data);
      • A callback function corresponding to the received message is called within execute()
      • The callback function is executed by the thread that calls execute() without a context switch
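
    For reference, the following is a minimal sketch, not taken from the sample program, of how intra-process communication can be enabled through NodeOptions when a node is constructed; the node name my_node is a placeholder.

    // Hypothetical sketch: enable intra-process communication via NodeOptions.\nrclcpp::NodeOptions options;\noptions.use_intra_process_comms(true);\n// Subscriptions created by this node can then receive messages through\n// the intra-process mechanism described above.\nauto node = std::make_shared<rclcpp::Node>(\"my_node\", options);\n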
    "},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/supp-wait_set/","title":"[supplement] Use rclcpp::WaitSet","text":""},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/supp-wait_set/#supplement-use-rclcppwaitset","title":"[supplement] Use rclcpp::WaitSet","text":""},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/supp-wait_set/#what-is-rclcppwaitset","title":"What is rclcpp::WaitSet","text":"

    As explained in call take() method of Subscription object, the take() method is irreversible. Once the take() method is executed, the state of the subscription object changes. Because there is no undo operation for the take() method, the subscription object can not be restored to its previous state. You can use rclcpp::WaitSet before calling take() to check for the arrival of an incoming message in the subscription queue. The following sample code shows how wait_set_.wait() tells you that a message has already been received and can be obtained by take().

          auto wait_result = wait_set_.wait(std::chrono::milliseconds(0));\nif (wait_result.kind() == rclcpp::WaitResultKind::Ready &&\nwait_result.get_wait_set().get_rcl_wait_set().subscriptions[0]) {\nsub_->take(msg, msg_info);\nRCLCPP_INFO(this->get_logger(), \"Catch message\");\nRCLCPP_INFO(this->get_logger(), \"I heard: [%s]\", msg.data.c_str());\n

    A single rclcpp::WaitSet object is able to observe multiple subscription objects. If there are multiple subscriptions for different topics, you can check the arrival of incoming messages per subscription. Algorithms used in the field of autonomous robots require multiple incoming messages, such as sensor data or actuation state. By using rclcpp::WaitSet with multiple subscriptions, such algorithms can check whether all required messages have arrived without taking any message.

          auto wait_result = wait_set_.wait(std::chrono::milliseconds(0));\nbool received_all_messages = false;\nif (wait_result.kind() == rclcpp::WaitResultKind::Ready) {\nfor (auto wait_set_subs : wait_result.get_wait_set().get_rcl_wait_set().subscriptions) {\nif (!wait_set_subs) {\nRCLCPP_INFO_THROTTLE(get_logger(), clock, 5000, \"Waiting for data...\");\nreturn {};\n}\n}\nreceived_all_messages = true;\n}\n

    In the code above, unless rclcpp::WaitSet is used, it is impossible to verify the arrival of all needed messages without changing the state of the subscription objects.

    "},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/supp-wait_set/#coding-manner","title":"Coding manner","text":"

    This section explains how to write code using rclcpp::WaitSet, with the sample code below.

    • ros2_subscription_examples/waitset_examples/src/talker_triple.cpp at main \u00b7 takam5f2/ros2_subscription_examples
      • It periodically publishes /chatter every second, /slower_chatter every two seconds, and /slowest_chatter every three seconds.
    • ros2_subscription_examples/waitset_examples/src/timer_listener_triple_async.cpp at main \u00b7 takam5f2/ros2_subscription_examples
      • It queries the WaitSet once per second, and if a message is available, it obtains the message with take()
      • It has three subscriptions, for /chatter, /slower_chatter, and /slowest_chatter

    The following three steps are required to use rclcpp::WaitSet.

    "},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/supp-wait_set/#1-declare-and-initialize-waitset","title":"1. Declare and initialize WaitSet","text":"

    You must first instantiate a rclcpp::WaitSet based object. Below is a snippet from ros2_subscription_examples/waitset_examples/src/timer_listener_triple_async.cpp at main \u00b7 takam5f2/ros2_subscription_examples.

    rclcpp::WaitSet wait_set_;\n
    Note

    There are several classes similar to rclcpp::WaitSet. The rclcpp::WaitSet object can be configured during runtime, but it is not thread-safe, as explained in the API specification of rclcpp::WaitSet. Thread-safe classes that can replace rclcpp::WaitSet are provided by the rclcpp package, as listed below. A minimal usage sketch follows the list.

    • Typedef rclcpp::ThreadSafeWaitSet
      • Subscription, timer, etc. can be registered to ThreadSafeWaitSet only in a thread-safe state
      • Sample code is here: examples/rclcpp/wait_set/src/thread_safe_wait_set.cpp at rolling \u00b7 ros2/examples
    • Typedef rclcpp::StaticWaitSet
      • Subscription, timer, etc. can be registered to rclcpp::StaticWaitSet only at initialization
      • Sample code can be found here:
        • ros2_subscription_examples/waitset_examples/src/timer_listener_twin_static.cpp at main \u00b7 takam5f2/ros2_subscription_examples
        • examples/rclcpp/wait_set/src/static_wait_set.cpp at rolling \u00b7 ros2/examples
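
    For reference, rclcpp::ThreadSafeWaitSet is declared and used in much the same way as rclcpp::WaitSet. The following is a minimal sketch under that assumption and is not taken from the sample programs; sub_ stands for an existing subscription member variable.

    // Hypothetical sketch: a thread-safe wait set declared as a node member.\nrclcpp::ThreadSafeWaitSet thread_safe_wait_set_;\n// Registration is synchronized, so it can be done safely even while\n// another thread is waiting on the wait set.\nthread_safe_wait_set_.add_subscription(sub_);\n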
    "},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/supp-wait_set/#2-register-trigger-subscription-timer-and-so-on-to-waitset","title":"2. Register trigger (Subscription, Timer, and so on) to WaitSet","text":"

    You need to register a trigger to the rclcpp::WaitSet based object. The following is a snippet from ros2_subscription_examples/waitset_examples/src/timer_listener_triple_async.cpp at main \u00b7 takam5f2/ros2_subscription_examples

        subscriptions_array_[0] = create_subscription<std_msgs::msg::String>(\"chatter\", qos, not_executed_callback, subscription_options);\nsubscriptions_array_[1] = create_subscription<std_msgs::msg::String>(\"slower_chatter\", qos, not_executed_callback, subscription_options);\nsubscriptions_array_[2] = create_subscription<std_msgs::msg::String>(\"slowest_chatter\", qos, not_executed_callback, subscription_options);\n\n// Add subscription to waitset\nfor (auto & subscription : subscriptions_array_) {\nwait_set_.add_subscription(subscription);\n}\n

    In the code above, the add_subscription() method registers the created subscriptions with the wait_set_ object. A rclcpp::WaitSet-based object basically handles objects each of which has a corresponding callback function. Not only Subscription-based objects, but also Timer-, Service-, or Action-based objects can be observed by a rclcpp::WaitSet-based object. A single rclcpp::WaitSet object accepts a mixture of different types of objects. A sample code for registering timer triggers can be found here.

    wait_set_.add_timer(much_slower_timer_);\n
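
    Other trigger types can be registered in the same way. The following is a minimal sketch, not taken from the sample program, assuming that service_ and guard_condition_ are member variables of the node.

    // Hypothetical sketch: a single wait set observing different trigger types.\nwait_set_.add_subscription(subscriptions_array_[0]);\nwait_set_.add_timer(much_slower_timer_);\nwait_set_.add_service(service_);\nwait_set_.add_guard_condition(guard_condition_);\n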

    A trigger can be registered at declaration and initialization as described in wait_set_topics_and_timer.cpp from the examples.

    "},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/supp-wait_set/#3-verify-waitset-result","title":"3. Verify WaitSet result","text":"

    The data structure of the result returned from rclcpp::WaitSet is nested. You can check the WaitSet result with the following two steps:

    1. Verify if any trigger has been invoked
    2. Verify if a specified trigger has been triggered

    For step 1, here is a sample code taken from ros2_subscription_examples/waitset_examples/src/timer_listener_triple_async.cpp at main \u00b7 takam5f2/ros2_subscription_examples.

          auto wait_result = wait_set_.wait(std::chrono::milliseconds(0));\nif (wait_result.kind() == rclcpp::WaitResultKind::Ready) {\nRCLCPP_INFO(this->get_logger(), \"wait_set tells that some subscription is ready\");\n} else {\nRCLCPP_INFO(this->get_logger(), \"wait_set tells that any subscription is not ready and return\");\nreturn;\n}\n

    In the code above, auto wait_result = wait_set_.wait(std::chrono::milliseconds(0)) tests whether any trigger in wait_set_ has been invoked. The argument to the wait() method is the timeout duration. If it is greater than 0 milliseconds or seconds, this method will wait for a message to be received until the timeout expires. If wait_result.kind() == rclcpp::WaitResultKind::Ready is true, then at least one trigger has been invoked.
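
    As an illustration of a non-zero timeout, the following minimal sketch, which is not part of the sample program, blocks for up to 100 milliseconds and then checks whether the wait timed out.

    // Hypothetical sketch: wait up to 100 ms for any registered trigger.\nauto blocking_result = wait_set_.wait(std::chrono::milliseconds(100));\nif (blocking_result.kind() == rclcpp::WaitResultKind::Timeout) {\nRCLCPP_INFO(this->get_logger(), \"no trigger became ready within 100 ms\");\n}\n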

    For step 2, here is a sample code taken from ros2_subscription_examples/waitset_examples/src/timer_listener_triple_async.cpp at main \u00b7 takam5f2/ros2_subscription_examples.

          for (size_t i = 0; i < subscriptions_num; i++) {\nif (wait_result.get_wait_set().get_rcl_wait_set().subscriptions[i]) {\nstd_msgs::msg::String msg;\nrclcpp::MessageInfo msg_info;\nif (subscriptions_array_[i]->take(msg, msg_info)) {\nRCLCPP_INFO(this->get_logger(), \"Catch message via subscription[%ld]\", i);\nRCLCPP_INFO(this->get_logger(), \"I heard: [%s]\", msg.data.c_str());\n

    In the code above, wait_result.get_wait_set().get_rcl_wait_set().subscriptions[i] indicates whether each individual trigger has been invoked or not. The result is stored in the subscriptions array. The order in the subscriptions array is the same as the order in which the triggers are registered.

    "},{"location":"contributing/discussion-guidelines/","title":"Discussion guidelines","text":""},{"location":"contributing/discussion-guidelines/#discussion-guidelines","title":"Discussion guidelines","text":"

    Warning

    Under Construction

    Refer to the following links for now:

    • https://docs.github.com/en/discussions/guides/best-practices-for-community-conversations-on-github
    • https://opensource.guide/how-to-contribute/#communicating-effectively
    "},{"location":"contributing/documentation-guidelines/","title":"Documentation guidelines","text":""},{"location":"contributing/documentation-guidelines/#documentation-guidelines","title":"Documentation guidelines","text":""},{"location":"contributing/documentation-guidelines/#workflow","title":"Workflow","text":"

    Contributions to Autoware's documentation are welcome, and the same principles described in the contribution guidelines should be followed. Small, limited changes can be made by forking this repository and submitting a pull request, but larger changes should be discussed with the community and Autoware maintainers via GitHub Discussion first.

    Examples of small changes include:

    • Fixing spelling or grammatical mistakes
    • Fixing broken links
    • Making an addition to an existing, well-defined page, such as the Troubleshooting guide.

    Examples of larger changes include:

    • Adding new pages with a large amount of detail, such as a tutorial
    • Re-organization of the existing documentation structure
    "},{"location":"contributing/documentation-guidelines/#style-guide","title":"Style guide","text":"

    You should refer to the Google developer documentation style guide as much as possible. Reading the Highlights page of that guide is recommended, but if you do not have time, at least note the key points below.

    • Use standard American English spelling and punctuation.
    • Use sentence case for document titles and section headings.
    • Use descriptive link text.
    • Write short sentences that are easy to understand and translate.
    "},{"location":"contributing/documentation-guidelines/#tips","title":"Tips","text":""},{"location":"contributing/documentation-guidelines/#how-to-preview-your-modification","title":"How to preview your modification","text":"

    There are two ways to preview your modification on a documentation website.

    "},{"location":"contributing/documentation-guidelines/#1-using-github-actions-workflow","title":"1. Using GitHub Actions workflow","text":"

    Follow the steps below.

    1. Create a pull request to the repository.
    2. Add the deploy-docs label from the sidebar (See below figure).
    3. Wait for a couple of minutes, and the github-actions bot will post the URL of the pull request's preview site.

    "},{"location":"contributing/documentation-guidelines/#2-running-an-mkdocs-server-in-your-local-environment","title":"2. Running an MkDocs server in your local environment","text":"

    Instead of creating a PR, you can use the mkdocs command to build Autoware's documentation websites on your local computer. Assuming that you are using Ubuntu OS, run the following to install the required libraries.

    python3 -m pip install -U $(curl -fsSL https://raw.githubusercontent.com/autowarefoundation/autoware-github-actions/main/deploy-docs/mkdocs-requirements.txt)\n

    Then, run mkdocs serve in your documentation directory.

    cd /PATH/TO/YOUR-autoware-documentation\nmkdocs serve\n

    It will launch the MkDocs server. Access http://127.0.0.1:8000/ to see the preview of the website.

    "},{"location":"contributing/pull-request-guidelines/","title":"Pull request guidelines","text":""},{"location":"contributing/pull-request-guidelines/#pull-request-guidelines","title":"Pull request guidelines","text":""},{"location":"contributing/pull-request-guidelines/#general-pull-request-workflow","title":"General pull request workflow","text":"

    Autoware uses the fork-and-pull model. For more details about the model, refer to GitHub Docs.

    The following is a general example of the pull request workflow based on the fork-and-pull model. Use this workflow as a reference when you contribute to Autoware.

    1. Create an issue.
      • Discuss the approaches to the issue with maintainers.
      • Confirm the support guidelines before creating an issue.
      • Follow the discussion guidelines when you discuss with other contributors.
    2. Create a fork repository. (for the first time only)
    3. Write code in your fork repository according to the approach agreed upon in the issue.
      • Write the tests and documentation as appropriate.
      • Follow the coding guidelines when you write code.
      • Follow the testing guidelines when you write tests.
      • Follow the documentation guidelines when you write documentation.
      • Follow the commit guidelines when you commit your changes.
    4. Test the code.
      • It is recommended that you summarize the test results, because you will need to explain the test results in the later review process.
      • If you are not sure what tests should be done, discuss them with maintainers.
    5. Create a pull request.
      • Follow the pull request rules when you create a pull request.
    6. Wait for the pull request to be reviewed.
      • The reviewers will review your code following the review guidelines.
        • Not only the reviewers, but also the author is encouraged to understand the review guidelines.
      • If CI checks have failed, fix the errors.
      • Learn about code ownership from the code owners guidelines.
        • Check the code owners FAQ section if your pull request is not being reviewed.
    7. Address the review comments pointed out by the reviewers.
      • If you don't understand the meaning of a review comment, ask the reviewers until you understand it.
        • Fixing without understanding the reason is not recommended because the author should be responsible for the final content of their own pull request.
      • If you don't agree with a review comment, ask the reviewers for a rational reason.
        • The reviewers are obligated to make the author understand the meanings of each comment.
      • After you have addressed the review comments, re-request a review from the reviewers and go back to step 6.
        • Avoid force pushing as much as possible so that reviewers only need to see the differences. More precisely, at least keep the commit history up to the point of the review, because GitHub Web UI features such as suggested changes may require a rebase to pass the DCO CI check.
      • If there are no more new review comments, the reviewers will approve the pull request and proceed to step 8.
    8. Merge the pull request.
      • Anyone with write access can merge the pull request if there is no special request from maintainers.
        • The author is encouraged to merge the pull request to feel responsible for their own pull request.
        • If the author does not have write access, ask the reviewers or maintainers.
    "},{"location":"contributing/pull-request-guidelines/#pull-request-rules","title":"Pull request rules","text":""},{"location":"contributing/pull-request-guidelines/#use-an-appropriate-pull-request-template-required-non-automated","title":"Use an appropriate pull request template (required, non-automated)","text":""},{"location":"contributing/pull-request-guidelines/#rationale","title":"Rationale","text":"
    • The unified style of descriptions by templates can make reviews efficient.
    "},{"location":"contributing/pull-request-guidelines/#example","title":"Example","text":"

    There are two types of templates. Select one based on the following condition.

    1. Standard change:
      • Complexity:
        • New features or significant updates.
        • Requires deeper understanding of the codebase.
      • Impact:
        • Affects multiple parts of the system.
        • Basically includes minor features, bug fixes, and performance improvements.
        • Needs testing before merging.
    2. Small change:
      • Complexity:
        • Documentation, simple refactoring, or style adjustments.
        • Easy to understand and review.
      • Impact:
        • Minimal effect on the system.
        • Quicker merge with less testing needed.
    "},{"location":"contributing/pull-request-guidelines/#steps-to-use-an-appropriate-pull-request-template","title":"Steps to use an appropriate pull request template","text":"
    1. Select the appropriate template, as shown in this video.
    2. Read the selected template carefully and fill the required content.
    3. Check the checkboxes during a review.
      • There are pre-review checklist and post-review checklist for the author.
    "},{"location":"contributing/pull-request-guidelines/#set-appropriate-reviewers-after-creating-a-pull-request-required-partially-automated","title":"Set appropriate reviewers after creating a pull request (required, partially automated)","text":""},{"location":"contributing/pull-request-guidelines/#rationale_1","title":"Rationale","text":"
    • Pull requests must be reviewed by appropriate reviewers to keep the quality of the codebase.
    "},{"location":"contributing/pull-request-guidelines/#example_1","title":"Example","text":"
    • For most ROS packages, reviewers will be automatically assigned based on the maintainer information in package.xml.
    • If no reviewer is assigned automatically, assign reviewers manually following the instructions in GitHub Docs.
      • You can find the reviewers by seeing the .github/CODEOWNERS file of the repository.
    • If you are not sure who the appropriate reviewers are, ask @autoware-maintainers.
    • If you have no rights to assign reviewers, mention reviewers instead.
    "},{"location":"contributing/pull-request-guidelines/#apply-conventional-commits-to-the-pull-request-title-required-automated","title":"Apply Conventional Commits to the pull request title (required, automated)","text":""},{"location":"contributing/pull-request-guidelines/#rationale_2","title":"Rationale","text":"
    • Conventional Commits can generate categorized changelogs, for example using git-cliff.
    "},{"location":"contributing/pull-request-guidelines/#example_2","title":"Example","text":"
    feat(trajectory_follower): add an awesome feature\n

    Note

    You have to start the description part (here add an awesome feature) with a lowercase letter.

    If your change breaks some interfaces, use the ! (breaking changes) mark as follows:

    feat(trajectory_follower)!: remove package\nfeat(trajectory_follower)!: change parameter names\nfeat(planning)!: change topic names\nfeat(autoware_utils)!: change function names\n

    For the repositories that contain code (most repositories), use the definition of conventional-commit-types for the type.

    For documentation repositories such as autoware-documentation, use the following definition:

    • feat
      • Add new pages.
      • Add contents to the existing pages.
    • fix
      • Fix the contents in the existing pages.
    • refactor
      • Move contents to different pages.
    • docs
      • Update documentation for the documentation repository itself.
    • build
      • Update the settings of the documentation site builder.
    • ! (breaking changes)
      • Remove pages.
      • Change the URL of pages.

    perf and test are generally unused. Other types have the same meaning as the code repositories.

    "},{"location":"contributing/pull-request-guidelines/#add-the-related-component-names-to-the-scope-of-conventional-commits-advisory-non-automated","title":"Add the related component names to the scope of Conventional Commits (advisory, non-automated)","text":""},{"location":"contributing/pull-request-guidelines/#rationale_3","title":"Rationale","text":"
    • It helps contributors find pull requests that are relevant to them.
    • It makes the changelog clearer.
    "},{"location":"contributing/pull-request-guidelines/#example_3","title":"Example","text":"

    For ROS packages, adding the package name or component name is good.

    feat(trajectory_follower): add an awesome feature\nrefactor(planning, control): use common utils\n
    "},{"location":"contributing/pull-request-guidelines/#keep-a-pull-request-small-advisory-non-automated","title":"Keep a pull request small (advisory, non-automated)","text":""},{"location":"contributing/pull-request-guidelines/#rationale_4","title":"Rationale","text":"
    • Small pull requests are easy to understand for reviewers.
    • Small pull requests are easy to revert for maintainers.
    "},{"location":"contributing/pull-request-guidelines/#exception","title":"Exception","text":"

    It is acceptable if it is agreed with maintainers that there is no other way but to submit a big pull request.

    "},{"location":"contributing/pull-request-guidelines/#example_4","title":"Example","text":"
    • Avoid developing two features in one pull request.
    • Avoid mixing different types (feat, fix, refactor, etc.) of changes in the same commit.
    "},{"location":"contributing/pull-request-guidelines/#remind-reviewers-if-there-is-no-response-for-more-than-a-week-advisory-non-automated","title":"Remind reviewers if there is no response for more than a week (advisory, non-automated)","text":""},{"location":"contributing/pull-request-guidelines/#rationale_5","title":"Rationale","text":"
    • It is the author's responsibility to care about their own pull request until it is merged.
    "},{"location":"contributing/pull-request-guidelines/#example_5","title":"Example","text":"
    @{some-of-developers} Would it be possible for you to review this PR?\n@autoware-maintainers friendly ping.\n
    "},{"location":"contributing/pull-request-guidelines/ai-pr-review/","title":"AI PR Review","text":""},{"location":"contributing/pull-request-guidelines/ai-pr-review/#ai-pr-review","title":"AI PR Review","text":"

    We have Codium-ai/pr-agent enabled for the Autoware Universe repository.

    "},{"location":"contributing/pull-request-guidelines/ai-pr-review/#the-workflow","title":"The workflow","text":"

    Workflow: pr-agent.yaml

    "},{"location":"contributing/pull-request-guidelines/ai-pr-review/#additional-links-for-the-workflow-maintainers","title":"Additional links for the workflow maintainers","text":"
    • Available models list
    "},{"location":"contributing/pull-request-guidelines/ai-pr-review/#how-to-use","title":"How to use","text":"

    When you create the PR, or at any point within the PR, add the label tag:pr-agent.

    Wait until both PR-Agent jobs are completed successfully:

    • prevent-no-label-execution-pr-agent / prevent-no-label-execution
    • Run pr agent on every pull request, respond to user comments

    Warning

    If you add multiple labels at the same time, prevent-no-label-execution can get confused.

    For example, first add tag:pr-agent, wait until it is ready, then add tag:run-build-and-test-differential if you need it.

    Then you can pick one of the following commands:

    /review: Request a review of your Pull Request.\n/describe: Update the PR description based on the contents of the PR.\n/improve: Suggest code improvements.\n/ask Could you propose a better name for this parameter?: Ask a question about the PR or anything really.\n/update_changelog: Update the changelog based on the PR's contents.\n/add_docs: Generate docstring for new components introduced in the PR.\n/help: Get a list of all available PR-Agent tools and their descriptions.\n
    • Here is the official documentation.
    • Usage Guide

    To use it, drop a comment post within your PR like this.

    Within a minute, you should see \ud83d\udc40 reaction under your comment post.

    Then the bot will drop a response with reviews, description or an answer.

    Info

    Please drop a single PR-Agent related comment at a time.

    "},{"location":"contributing/pull-request-guidelines/ci-checks/","title":"CI checks","text":""},{"location":"contributing/pull-request-guidelines/ci-checks/#ci-checks","title":"CI checks","text":"

    Autoware has several checks for a pull request. The results are shown at the bottom of the pull request page as below.

    If the \u274c mark is shown, click the Details button and investigate the failure reason.

    If the Required mark is shown, you cannot merge the pull request unless you resolve the error. Otherwise, the check is optional, but it should preferably be fixed.

    The following sections explain the common CI checks in Autoware. Note that some repositories may have different settings.

    "},{"location":"contributing/pull-request-guidelines/ci-checks/#dco","title":"DCO","text":"

    The Developer Certificate of Origin (DCO) is a lightweight way for contributors to certify that they wrote or otherwise have the right to submit the code they are contributing to the project.

    This workflow checks whether the pull request fulfills DCO. You need to confirm the required items and commit with git commit -s.

    For more information, refer to the GitHub App page.

    "},{"location":"contributing/pull-request-guidelines/ci-checks/#semantic-pull-request","title":"semantic-pull-request","text":"

    This workflow checks whether the pull request follows Conventional Commits.

    For the detailed rules, see the pull request rules.

    "},{"location":"contributing/pull-request-guidelines/ci-checks/#pre-commit","title":"pre-commit","text":"

    pre-commit is a tool to run formatters or linters when you commit.

    This workflow checks whether the pull request has no error with pre-commit.

    If the workflow pre-commit.ci - pr is enabled in the repository, pre-commit.ci will automatically fix as many errors as possible. If some errors remain, fix them manually.

    You can run pre-commit in your local environment by the following command:

    pre-commit run -a\n

    Or you can install pre-commit to the repository and automatically run it before committing:

    pre-commit install\n

    Since it is difficult to detect errors with no false positives, some jobs are split into another config file and marked as optional. To check them, use the --config option:

    pre-commit run -a --config .pre-commit-config-optional.yaml\n
    "},{"location":"contributing/pull-request-guidelines/ci-checks/#spell-check-differential","title":"spell-check-differential","text":"

    This workflow detects spelling mistakes using CSpell with our dictionary file. Since it is difficult to detect errors with no false positives, it is an optional workflow, but it is preferable to remove as many spelling mistakes as possible.

    You have the following options if you need to use a word that is not registered in the dictionary.

    • If the word is only used in a few files, you can use inline document settings \"cspell:ignore\" to suppress the check.
    • If the word is widely used in the repository, you can create a local cspell json and pass it to the spell-check action.
    • If the word is common and may be used in many repositories, you can submit pull requests to tier4/autoware-spell-check-dict or tier4/cspell-dicts to update the dictionary.
    "},{"location":"contributing/pull-request-guidelines/ci-checks/#build-and-test-differential","title":"build-and-test-differential","text":"

    This workflow checks colcon build and colcon test for the pull request. To make the CI faster, it doesn't check all packages, but only the modified packages and their dependencies.

    "},{"location":"contributing/pull-request-guidelines/ci-checks/#build-and-test-differential-self-hosted","title":"build-and-test-differential-self-hosted","text":"

    This workflow is the ARM64 version of build-and-test-differential. You need to add the ARM64 label to run this workflow.

    For reference information, since ARM machines are not supported by GitHub-hosted runners, we use self-hosted runners prepared by the AWF. For the details about self-hosted runners, refer to GitHub Docs.

    "},{"location":"contributing/pull-request-guidelines/ci-checks/#deploy-docs","title":"deploy-docs","text":"

    This workflow deploys the preview documentation site for the pull request. You need to add the deploy-docs label to run this workflow.

    "},{"location":"contributing/pull-request-guidelines/code-owners/","title":"Code owners","text":""},{"location":"contributing/pull-request-guidelines/code-owners/#code-owners","title":"Code owners","text":"

    The Autoware project uses multiple CODEOWNERS files to specify owners throughout the repository. For a detailed understanding of code owners, visit the GitHub documentation on code owners.

    "},{"location":"contributing/pull-request-guidelines/code-owners/#purpose-and-function-of-the-codeowners-file","title":"Purpose and function of the CODEOWNERS file","text":"

    The CODEOWNERS file plays a vital role in managing pull requests (PRs) by:

    • Automating Review Requests: Automatically assigns PRs to responsible individuals or teams.
    • Enforcing Merge Approval: Prevents PR merging without approvals from designated code owners or repository maintainers, ensuring thorough review.
    • Maintaining Quality Control: Helps sustain code quality and consistency by requiring review from knowledgeable individuals or teams.
    "},{"location":"contributing/pull-request-guidelines/code-owners/#locating-codeowners-files","title":"Locating CODEOWNERS files","text":"

    CODEOWNERS files are found in the .github directory across multiple repositories of the Autoware project. The autoware.repos file lists these repositories and their directories.

    "},{"location":"contributing/pull-request-guidelines/code-owners/#maintenance-of-codeowners","title":"Maintenance of CODEOWNERS","text":"

    Generally, repository maintainers handle the updates to CODEOWNERS files. To propose changes, submit a PR to modify the file.

    "},{"location":"contributing/pull-request-guidelines/code-owners/#special-case-for-the-autoware-universe-repository","title":"Special case for the Autoware Universe repository","text":"

    In the autoware.universe repository, maintenance of the CODEOWNERS file is automated by the CI.

    This workflow updates the CODEOWNERS file based on the maintainer information in the package.xml files of the packages in the repository.

    In order to change the code owners for a package in the autoware.universe repository:

    1. Modify the maintainer information in the package.xml file via a PR.
    2. Once merged, the CI workflow runs at midnight UTC (or can be triggered manually by a maintainer) to update the CODEOWNERS file and create a PR.
    3. A maintainer then needs to merge the CI-generated PR to finalize the update.
      • Example Automated PR: chore: update CODEOWNERS #6866
    "},{"location":"contributing/pull-request-guidelines/code-owners/#responsibilities-of-code-owners","title":"Responsibilities of code owners","text":"

    Code owners should review assigned PRs promptly. If a PR remains unreviewed for over a week, maintainers may intervene to review and possibly merge it.

    "},{"location":"contributing/pull-request-guidelines/code-owners/#faq","title":"FAQ","text":""},{"location":"contributing/pull-request-guidelines/code-owners/#unreviewed-pull-requests","title":"Unreviewed pull requests","text":"

    If your PR hasn't been reviewed:

    • \ud83c\udff9 Directly Address Code Owners: Comment on the PR to alert the owners.
    • \u23f3 Follow Up After a Week: If unreviewed after a week, add a comment under the PR and tag the @autoware-maintainers.
    • \ud83d\udce2 Escalate if Necessary: If your requests continue to go unanswered, you may escalate the issue by posting a message in the Autoware Discord channel \ud83d\udea8. Remember, maintainers often juggle numerous responsibilities, so patience is appreciated\ud83d\ude47.
    "},{"location":"contributing/pull-request-guidelines/code-owners/#pr-author-is-the-only-code-owner","title":"PR author is the only code owner","text":"

    If you, as the only code owner, authored a PR:

    • Request a review by tagging @autoware-maintainers.
    • The maintainers will consider appointing additional maintainers to avoid such conflicts.
    "},{"location":"contributing/pull-request-guidelines/code-owners/#non-code-owners-reviewing-prs","title":"Non-code owners reviewing PRs","text":"

    Anyone can review a PR:

    • You can review any pull request and provide your feedback.
    • Your review might not be enough to merge the pull request, but it will help the code owners and maintainers to make a decision.
    • If you think the pull request is ready to merge, you can mention the code owners and maintainers in a comment on the pull request.
    "},{"location":"contributing/pull-request-guidelines/commit-guidelines/","title":"Commit guidelines","text":""},{"location":"contributing/pull-request-guidelines/commit-guidelines/#commit-guidelines","title":"Commit guidelines","text":""},{"location":"contributing/pull-request-guidelines/commit-guidelines/#branch-rules","title":"Branch rules","text":""},{"location":"contributing/pull-request-guidelines/commit-guidelines/#start-branch-names-with-the-corresponding-issue-numbers-advisory-non-automated","title":"Start branch names with the corresponding issue numbers (advisory, non-automated)","text":""},{"location":"contributing/pull-request-guidelines/commit-guidelines/#rationale","title":"Rationale","text":"
    • Developers can quickly find the corresponding issues.
    • It is helpful for tools.
    • It is consistent with GitHub's default behavior.
    "},{"location":"contributing/pull-request-guidelines/commit-guidelines/#exception","title":"Exception","text":"

    If there are no corresponding issues, you can ignore this rule.

    "},{"location":"contributing/pull-request-guidelines/commit-guidelines/#example","title":"Example","text":"
    123-add-feature\n
    "},{"location":"contributing/pull-request-guidelines/commit-guidelines/#reference","title":"Reference","text":"
    • GitHub Docs
    "},{"location":"contributing/pull-request-guidelines/commit-guidelines/#use-dash-case-for-the-separator-of-branch-names-advisory-non-automated","title":"Use dash-case for the separator of branch names (advisory, non-automated)","text":""},{"location":"contributing/pull-request-guidelines/commit-guidelines/#rationale_1","title":"Rationale","text":"
    • It is consistent with GitHub's default behavior.
    "},{"location":"contributing/pull-request-guidelines/commit-guidelines/#example_1","title":"Example","text":"
    123-add-feature\n
    "},{"location":"contributing/pull-request-guidelines/commit-guidelines/#reference_1","title":"Reference","text":"
    • GitHub Docs
    "},{"location":"contributing/pull-request-guidelines/commit-guidelines/#make-branch-names-descriptive-advisory-non-automated","title":"Make branch names descriptive (advisory, non-automated)","text":""},{"location":"contributing/pull-request-guidelines/commit-guidelines/#rationale_2","title":"Rationale","text":"
    • It can avoid conflicts of names.
    • Developers can understand the purpose of the branch.
    "},{"location":"contributing/pull-request-guidelines/commit-guidelines/#exception_1","title":"Exception","text":"

    If you have already submitted a pull request, you do not have to change the branch name, because doing so would require re-creating the pull request, which is noisy and a waste of time. Be careful from next time.

    "},{"location":"contributing/pull-request-guidelines/commit-guidelines/#example_2","title":"Example","text":"

    Usually it is good to start with a verb.

    123-fix-memory-leak-of-trajectory-follower\n
    "},{"location":"contributing/pull-request-guidelines/commit-guidelines/#commit-rules","title":"Commit rules","text":""},{"location":"contributing/pull-request-guidelines/commit-guidelines/#sign-off-your-commits-required-automated","title":"Sign-off your commits (required, automated)","text":"

    Developers must certify that they wrote or otherwise have the right to submit the code they are contributing to the project.

    "},{"location":"contributing/pull-request-guidelines/commit-guidelines/#rationale_3","title":"Rationale","text":"

    If not, it will lead to complex license problems.

    "},{"location":"contributing/pull-request-guidelines/commit-guidelines/#example_3","title":"Example","text":"
    git commit -s\n
    feat: add a feature\n\nSigned-off-by: Autoware <autoware@example.com>\n
    "},{"location":"contributing/pull-request-guidelines/commit-guidelines/#reference_2","title":"Reference","text":"
    • GitHub Apps - DCO
    "},{"location":"contributing/pull-request-guidelines/review-guidelines/","title":"Review guidelines","text":""},{"location":"contributing/pull-request-guidelines/review-guidelines/#review-guidelines","title":"Review guidelines","text":"

    Warning

    Under Construction

    Refer to the following links for now:

    • https://google.github.io/eng-practices/review/
    • https://docs.gitlab.com/ee/development/code_review.html
    • https://www.swarmia.com/blog/a-complete-guide-to-code-reviews/
    • https://rewind.com/blog/best-practices-for-reviewing-pull-requests-in-github/
    "},{"location":"contributing/pull-request-guidelines/review-tips/","title":"Review tips","text":""},{"location":"contributing/pull-request-guidelines/review-tips/#review-tips","title":"Review tips","text":""},{"location":"contributing/pull-request-guidelines/review-tips/#toggle-annotations-or-review-comments-in-the-diff-view","title":"Toggle annotations or review comments in the diff view","text":"

    There might be some annotations or review comments in the diff view during your review.

    To toggle annotations, press the A key.

    Before:

    After:

    To toggle review comments, press the I key.

    For other keyboard shortcuts, refer to GitHub Docs.

    "},{"location":"contributing/pull-request-guidelines/review-tips/#view-code-in-the-web-based-visual-studio-code","title":"View code in the web-based Visual Studio Code","text":"

    You can open Visual Studio Code from your browser to view code in a rich UI. To use it, press the . key on any repository or pull request.

    For more detailed usage, refer to github/dev.

    "},{"location":"contributing/pull-request-guidelines/review-tips/#check-out-the-branch-of-a-pull-request-quickly","title":"Check out the branch of a pull request quickly","text":"

    If you want to check out the branch of a pull request, it's generally troublesome with the fork-and-pull model.

    # Copy the user name and the fork URL.\ngit remote add {user-name} {fork-url}\ngit checkout {user-name}/{branch-name}\ngit remote rm {user-name} # To clean up\n

    Instead, you can use GitHub CLI to simplify the steps, just run gh pr checkout {pr-number}.

    You can copy the command from the top right of the pull request page.

    "},{"location":"contributing/testing-guidelines/","title":"Testing guidelines","text":""},{"location":"contributing/testing-guidelines/#testing-guidelines","title":"Testing guidelines","text":""},{"location":"contributing/testing-guidelines/#unit-testing","title":"Unit testing","text":"

    Unit testing is a software testing method that tests individual units of source code to determine whether they satisfy the specification.

    For details, see the Unit testing guidelines.

    "},{"location":"contributing/testing-guidelines/#integration-testing","title":"Integration testing","text":"

    Integration testing combines and tests the individual software modules as a group, and is done after unit testing.

    While performing integration testing, the following subtypes of tests are written:

    1. Fault injection testing
    2. Back-to-back comparison between a model and code
    3. Requirements-based testing
    4. Anomaly detection during integration testing
    5. Random input testing

    For details, see the Integration testing guidelines.

    "},{"location":"contributing/testing-guidelines/integration-testing/","title":"Integration testing","text":""},{"location":"contributing/testing-guidelines/integration-testing/#integration-testing","title":"Integration testing","text":"

    An integration test is defined as the phase in software testing where individual software modules are combined and tested as a group. Integration tests occur after unit tests, and before validation tests.

    The input to an integration test is a set of independent modules that have been unit tested. The set of modules is tested against the defined integration test plan, and the output is a set of properly integrated software modules that is ready for system testing.

    "},{"location":"contributing/testing-guidelines/integration-testing/#value-of-integration-testing","title":"Value of integration testing","text":"

    Integration tests determine if independently developed software modules work correctly when the modules are connected to each other. In ROS 2, the software modules are called nodes. Testing a single node is a special type of integration test that is commonly referred to as component testing.

    Integration tests help to find the following types of errors:

    • Incompatible interactions between nodes, such as non-matching topics, different message types, or incompatible QoS settings.
    • Edge cases that were not touched by unit testing, such as a critical timing issue, network communication delays, disk I/O failures, and other such problems that can occur in production environments.
    • Issues that can occur while the system is under high CPU/memory load, such as malloc failures. This can be tested using tools like stress and udpreplay to test the performance of nodes with real data.

    With ROS 2, it is possible to program complex autonomous-driving applications with a large number of nodes. Therefore, a lot of effort has been made to provide an integration-test framework that helps developers test the interaction of ROS 2 nodes.

    "},{"location":"contributing/testing-guidelines/integration-testing/#integration-test-framework","title":"Integration-test framework","text":"

    A typical integration-test framework has three parts:

    1. A series of executables with arguments that work together and generate outputs.
    2. A series of expected outputs that should match the output of the executables.
    3. A launcher that starts the tests, compares the outputs to the expected outputs, and determines if the test passes.

    In Autoware, we use the launch_testing framework.

    "},{"location":"contributing/testing-guidelines/integration-testing/#smoke-tests","title":"Smoke tests","text":"

    Autoware has a dedicated API for smoke testing. To use this framework, in package.xml add:

    <test_depend>autoware_testing</test_depend>\n

    And in CMakeLists.txt add:

    if(BUILD_TESTING)\nfind_package(autoware_testing REQUIRED)\nadd_smoke_test(${PROJECT_NAME} ${NODE_NAME})\nendif()\n

    Doing so adds smoke tests that ensure that a node can be:

    1. Launched with a default parameter file.
    2. Terminated with a standard SIGTERM signal.

    For the full API documentation, refer to the package design page.

    Note

    This API is not suitable for all smoke test cases. It cannot be used when a specific file location (e.g., for a map) needs to be passed to the node, or if some preparation needs to be conducted before node launch. In such cases, use the manual solution from the component test section below.

    "},{"location":"contributing/testing-guidelines/integration-testing/#integration-test-with-a-single-node-component-test","title":"Integration test with a single node: component test","text":"

    The simplest scenario is a single node. In this case, the integration test is commonly referred to as a component test.

    To add a component test to an existing node, you can follow the example of the lanelet2_map_loader in the autoware_map_loader package (added in this PR).

    In package.xml, add:

    <test_depend>ros_testing</test_depend>\n

    In CMakeLists.txt, add or modify the BUILD_TESTING section:

    if(BUILD_TESTING)\nadd_ros_test(\ntest/lanelet2_map_loader_launch.test.py\nTIMEOUT \"30\"\n)\ninstall(DIRECTORY\ntest/data/\nDESTINATION share/${PROJECT_NAME}/test/data/\n)\nendif()\n

    In addition to the command add_ros_test, we also install any data that is required by the test using the install command.

    Note

    • The TIMEOUT argument is given in seconds; see the add_ros_test.cmake file for details.
    • The add_ros_test command will run the test in a unique ROS_DOMAIN_ID which avoids interference between tests running in parallel.

    To create a test, either read the launch_testing quick-start example, or follow the steps below.

    Taking test/lanelet2_map_loader_launch.test.py as an example, first dependencies are imported:

    import os\nimport unittest\n\nfrom ament_index_python import get_package_share_directory\nimport launch\nfrom launch import LaunchDescription\nfrom launch_ros.actions import Node\nimport launch_testing\nimport pytest\n

    Then a launch description is created to launch the node under test. Note that the test_map.osm file path is found and passed to the node, something that cannot be done with the smoke testing API:

    @pytest.mark.launch_test\ndef generate_test_description():\n\n    lanelet2_map_path = os.path.join(\n        get_package_share_directory(\"autoware_map_loader\"), \"test/data/test_map.osm\"\n    )\n\n    lanelet2_map_loader = Node(\n        package=\"autoware_map_loader\",\n        executable=\"autoware_lanelet2_map_loader\",\n        parameters=[{\"lanelet2_map_path\": lanelet2_map_path}],\n    )\n\n    context = {}\n\n    return (\n        LaunchDescription(\n            [\n                lanelet2_map_loader,\n                # Start test after 1s - gives time for the map_loader to finish initialization\n                launch.actions.TimerAction(\n                    period=1.0, actions=[launch_testing.actions.ReadyToTest()]\n                ),\n            ]\n        ),\n        context,\n    )\n

    Note

    • Since the node needs time to process the input lanelet2 map, we use a TimerAction to delay the start of the test by 1 s.
    • In the example above, the context is empty but it can be used to pass objects to the test cases.
    • You can find an example of using the context in the ROS 2 context_launch_test.py test example.

    Finally, a test is executed after the node executable has been shut down (post_shutdown_test). Here we ensure that the node was launched without error and exited cleanly.

    @launch_testing.post_shutdown_test()\nclass TestProcessOutput(unittest.TestCase):\n    def test_exit_code(self, proc_info):\n        # Check that process exits with code 0: no error\n        launch_testing.asserts.assertExitCodes(proc_info)\n
    "},{"location":"contributing/testing-guidelines/integration-testing/#running-the-test","title":"Running the test","text":"

    Continuing the example from above, first build your package:

    colcon build --packages-up-to autoware_map_loader\nsource install/setup.bash\n

    Then either execute the component test manually:

    ros2 test src/universe/autoware.universe/map/autoware_map_loader/test/lanelet2_map_loader_launch.test.py\n

    Or as part of testing the entire package:

    colcon test --packages-select autoware_map_loader\n

    Verify that the test is executed; e.g.

    $ colcon test-result --all --verbose\n...\nbuild/autoware_map_loader/test_results/autoware_map_loader/test_lanelet2_map_loader_launch.test.py.xunit.xml: 1 test, 0 errors, 0 failures, 0 skipped\n
    "},{"location":"contributing/testing-guidelines/integration-testing/#next-steps","title":"Next steps","text":"

    The simple test described in Integration test with a single node: component test can be extended in numerous directions, such as testing a node's output.

    "},{"location":"contributing/testing-guidelines/integration-testing/#testing-the-output-of-a-node","title":"Testing the output of a node","text":"

    To test while the node is running, create an active test by adding a subclass of Python's unittest.TestCase to *launch.test.py. Some boilerplate code is required to access output by creating a node and a subscription to a particular topic, e.g.

    import time\nimport unittest\n\nimport rclpy\nfrom rclpy.context import Context\n\nclass TestRunningDataPublisher(unittest.TestCase):\n\n    @classmethod\n    def setUpClass(cls):\n        cls.context = Context()\n        rclpy.init(context=cls.context)\n        cls.node = rclpy.create_node(\"test_node\", context=cls.context)\n\n    @classmethod\n    def tearDownClass(cls):\n        rclpy.shutdown(context=cls.context)\n\n    def setUp(self):\n        self.msgs = []\n        sub = self.node.create_subscription(\n            msg_type=my_msg_type,\n            topic=\"/info_test\",\n            callback=self._msg_received,\n            qos_profile=10\n        )\n        self.addCleanup(self.node.destroy_subscription, sub)\n\n    def _msg_received(self, msg):\n        # Callback for ROS 2 subscriber used in the test\n        self.msgs.append(msg)\n\n    def get_message(self):\n        startlen = len(self.msgs)\n\n        executor = rclpy.executors.SingleThreadedExecutor(context=self.context)\n        executor.add_node(self.node)\n\n        try:\n            # Try up to 60 s to receive messages\n            end_time = time.time() + 60.0\n            while time.time() < end_time:\n                executor.spin_once(timeout_sec=0.1)\n                if startlen != len(self.msgs):\n                    break\n\n            self.assertNotEqual(startlen, len(self.msgs))\n            return self.msgs[-1]\n        finally:\n            executor.remove_node(self.node)\n\n    def test_message_content(self):\n        msg = self.get_message()\n        self.assertEqual(msg, \"Hello, world\")\n
    "},{"location":"contributing/testing-guidelines/integration-testing/#references","title":"References","text":"
    • colcon is used to build and run tests.
    • launch testing launches nodes and runs tests.
    • Testing guidelines describes the different types of tests performed in Autoware and links to the corresponding guidelines.
    "},{"location":"contributing/testing-guidelines/unit-testing/","title":"Unit testing","text":""},{"location":"contributing/testing-guidelines/unit-testing/#unit-testing","title":"Unit testing","text":"

    Unit testing is the first phase of testing and is used to validate units of source code such as classes and functions. Typically, a unit of code is tested by validating its output for various inputs. Unit testing helps ensure that the code behaves as intended and prevents accidental changes of behavior.

    Autoware uses the ament_cmake framework to build and run tests. The same framework is also used to analyze the test results.

    ament_cmake provides several convenience functions to make it easy to register tests in a CMake-based package and to ensure that JUnit-compatible result files are generated. It currently supports a few different testing frameworks like pytest, gtest, and gmock.

    In order to prevent tests running in parallel from interfering with each other when publishing and subscribing to ROS topics, it is recommended to use commands from ament_cmake_ros to run tests in isolation.

    See below for an example of using ament_add_ros_isolated_gtest with colcon test. All other tests follow a similar pattern.

    "},{"location":"contributing/testing-guidelines/unit-testing/#create-a-unit-test-with-gtest","title":"Create a unit test with gtest","text":"

    In my_cool_pkg/test, create the gtest code file test_my_cool_pkg.cpp:

    #include \"gtest/gtest.h\"\n#include \"my_cool_pkg/my_cool_pkg.hpp\"\nTEST(TestMyCoolPkg, TestHello) {\nEXPECT_EQ(my_cool_pkg::print_hello(), 0);\n}\n

    In package.xml, add the following line:

    <test_depend>ament_cmake_ros</test_depend>\n

    Next add an entry under BUILD_TESTING in the CMakeLists.txt to compile the test source files:

    if(BUILD_TESTING)\n\nament_add_ros_isolated_gtest(test_my_cool_pkg test/test_my_cool_pkg.cpp)\ntarget_link_libraries(test_my_cool_pkg ${PROJECT_NAME})\ntarget_include_directories(test_my_cool_pkg PRIVATE src)  # For private headers.\n...\nendif()\n

    This automatically links the test with the default main function provided by gtest. The code under test is usually in a different CMake target (${PROJECT_NAME} in the example) and its shared object for linking needs to be added. If the test source files include private headers from the src directory, the directory needs to be added to the include path using target_include_directories() function.

    To register a new gtest item, wrap the test code with the macro TEST(). TEST() is a predefined macro that helps generate the final test code and also registers a gtest item to be available for execution. The test case name should be in CamelCase, since gtest inserts an underscore between the fixture name and the test case name when creating the test executable.

    gtest/gtest.h also contains predefined macros of gtest like ASSERT_TRUE(condition), ASSERT_FALSE(condition), ASSERT_EQ(val1,val2), ASSERT_STREQ(str1,str2), EXPECT_EQ(), etc. ASSERT_* will abort the test if the condition is not satisfied, while EXPECT_* will mark the test as failed but continue on to the next test condition.
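
    To illustrate the difference, here is a minimal sketch, not part of my_cool_pkg, that mixes both macro families.

    #include <vector>\n\n#include \"gtest/gtest.h\"\n\nTEST(TestMyCoolPkg, TestAssertVsExpect) {\nstd::vector<int> values{1, 2, 3};\n// ASSERT_* aborts this test immediately if the condition fails.\nASSERT_EQ(values.size(), 3U);\n// EXPECT_* records a failure but continues with the remaining checks.\nEXPECT_EQ(values[0], 1);\nEXPECT_EQ(values[2], 3);\n}\n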

    Info

    More information about gtest and its features can be found in the gtest repo.

    In the demo CMakeLists.txt, ament_add_ros_isolated_gtest is a predefined macro in ament_cmake_ros that helps simplify adding gtest code. Details can be viewed in ament_add_gtest.cmake.

    "},{"location":"contributing/testing-guidelines/unit-testing/#build-test","title":"Build test","text":"

    By default, all necessary test files (ELF, CTestTestfile.cmake, etc.) are compiled by colcon:

    cd ~/workspace/\ncolcon build --packages-select my_cool_pkg\n

    Test files are generated under ~/workspace/build/my_cool_pkg.

    "},{"location":"contributing/testing-guidelines/unit-testing/#run-test","title":"Run test","text":"

    To run all tests for a specific package, call:

    $ colcon test --packages-select my_cool_pkg\n\nStarting >>> my_cool_pkg\nFinished <<< my_cool_pkg [7.80s]\n\nSummary: 1 package finished [9.27s]\n

    The test command output contains a brief report of all the test results.

    To get job-wise information of all executed tests, call:

    $ colcon test-result --all\n\nbuild/my_cool_pkg/test_results/my_cool_pkg/copyright.xunit.xml: 8 tests, 0 errors, 0 failures, 0 skipped\nbuild/my_cool_pkg/test_results/my_cool_pkg/cppcheck.xunit.xml: 6 tests, 0 errors, 0 failures, 0 skipped\nbuild/my_cool_pkg/test_results/my_cool_pkg/lint_cmake.xunit.xml: 1 test, 0 errors, 0 failures, 0 skipped\nbuild/my_cool_pkg/test_results/my_cool_pkg/my_cool_pkg_exe_integration_test.xunit.xml: 1 test, 0 errors, 0 failures, 0 skipped\nbuild/my_cool_pkg/test_results/my_cool_pkg/test_my_cool_pkg.gtest.xml: 1 test, 0 errors, 0 failures, 0 skipped\nbuild/my_cool_pkg/test_results/my_cool_pkg/xmllint.xunit.xml: 1 test, 0 errors, 0 failures, 0 skipped\n\nSummary: 18 tests, 0 errors, 0 failures, 0 skipped\n

    Look in the ~/workspace/log/test_<date>/<package_name> directory for all the raw test commands, std_out, and std_err. There is also the ~/workspace/log/latest_*/ directory containing symbolic links to the most recent package-level build and test output.

    To print the tests' details while the tests are being run, use the --event-handlers console_cohesion+ option to print the details directly to the console:

    $ colcon test --event-handlers console_cohesion+ --packages-select my_cool_pkg\n\n...\ntest 1\n    Start 1: test_my_cool_pkg\n\n1: Test command: /usr/bin/python3 \"-u\" \"~/workspace/install/share/ament_cmake_test/cmake/run_test.py\" \"~/workspace/build/my_cool_pkg/test_results/my_cool_pkg/test_my_cool_pkg.gtest.xml\" \"--package-name\" \"my_cool_pkg\" \"--output-file\" \"~/workspace/build/my_cool_pkg/ament_cmake_gtest/test_my_cool_pkg.txt\" \"--command\" \"~/workspace/build/my_cool_pkg/test_my_cool_pkg\" \"--gtest_output=xml:~/workspace/build/my_cool_pkg/test_results/my_cool_pkg/test_my_cool_pkg.gtest.xml\"\n1: Test timeout computed to be: 60\n1: -- run_test.py: invoking following command in '~/workspace/src/my_cool_pkg':\n1:  - ~/workspace/build/my_cool_pkg/test_my_cool_pkg --gtest_output=xml:~/workspace/build/my_cool_pkg/test_results/my_cool_pkg/test_my_cool_pkg.gtest.xml\n1: [==========] Running 1 test from 1 test case.\n1: [----------] Global test environment set-up.\n1: [----------] 1 test from test_my_cool_pkg\n1: [ RUN      ] test_my_cool_pkg.test_hello\n1: Hello World\n1: [       OK ] test_my_cool_pkg.test_hello (0 ms)\n1: [----------] 1 test from test_my_cool_pkg (0 ms total)\n1:\n1: [----------] Global test environment tear-down\n1: [==========] 1 test from 1 test case ran. (0 ms total)\n1: [  PASSED  ] 1 test.\n1: -- run_test.py: return code 0\n1: -- run_test.py: inject classname prefix into gtest result file '~/workspace/build/my_cool_pkg/test_results/my_cool_pkg/test_my_cool_pkg.gtest.xml'\n1: -- run_test.py: verify result file '~/workspace/build/my_cool_pkg/test_results/my_cool_pkg/test_my_cool_pkg.gtest.xml'\n1/5 Test #1: test_my_cool_pkg ...................   Passed    0.09 sec\n\n...\n\n100% tests passed, 0 tests failed out of 5\n\nLabel Time Summary:\ncopyright     =   0.49 sec*proc (1 test)\ncppcheck      =   0.20 sec*proc (1 test)\ngtest         =   0.05 sec*proc (1 test)\nlint_cmake    =   0.18 sec*proc (1 test)\nlinter        =   1.34 sec*proc (4 tests)\nxmllint       =   0.47 sec*proc (1 test)\n\nTotal Test time (real) =   7.91 sec\n...\n
    "},{"location":"contributing/testing-guidelines/unit-testing/#code-coverage","title":"Code coverage","text":"

    Loosely described, a code coverage metric is a measure of how much of the program code has been exercised (covered) during testing.

    In the Autoware repositories, Codecov is used to automatically calculate coverage of any open pull request.

    More details about the code coverage metrics can be found in the Codecov documentation.

    "},{"location":"datasets/","title":"Datasets","text":""},{"location":"datasets/#datasets","title":"Datasets","text":"

    Autoware partners provide datasets for testing and development. These datasets are available for download here.

    "},{"location":"datasets/#istanbul-open-dataset","title":"Istanbul Open Dataset","text":"

    The dataset was collected along the following route. Tunnels and bridges are annotated on the image. The specific areas included in the dataset are:

    • Galata Bridge (Small Bridge)
    • Eurasia Tunnel (Long Tunnel with High Elevation Changes)
    • 2nd Bosphorus Bridge (Long Bridge)
    • Kagithane-Bomonti Tunnel (Small Tunnel)
    • Viaducts, road junctions, highways, dense urban areas...

    "},{"location":"datasets/#leo-drive-mapping-kit-sensor-data","title":"Leo Drive - Mapping Kit Sensor Data","text":"

    This dataset contains data from the portable mapping kit used for general mapping purposes.

    The dataset includes data from the following sensors:

    • 1 x Applanix POS LVX GNSS/INS System
    • 1 x Hesai Pandar XT32 LiDAR

    For sensor calibration, the /tf_static topic is included.

    "},{"location":"datasets/#data-links","title":"Data Links","text":"
    • You can find the produced full point cloud map, corner feature point cloud map and surface feature point cloud map here:

      • https://drive.google.com/drive/folders/1_jiQod4lO6-V2NDEr3d-M3XF_Nqmc0Xf?usp=drive_link
      • The exported point clouds were downsampled with 0.2 m and 0.5 m voxel grids (a minimal downsampling sketch is shown after this list).
    • You can find the ROS 2 bag which is collected simultaneously with the mapping data:

      • https://drive.google.com/drive/folders/17zXiBeYlM90gQ5hV6EAWaoBTnNFoVPML?usp=drive_link
      • Due to the simultaneous data collection, we can assume that the point cloud maps and GNSS/INS data are the ground truth data for this rosbag.
    • Additionally, you can find the raw data used for mapping at the below link:
      • https://drive.google.com/drive/folders/1HmWYkxF5XvVCR27R8W7ZqO7An4HlJ6lD?usp=drive_link
      • Point clouds were collected as PCAP files, and the feature-matched GNSS/INS data was exported to a txt file.
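
    For reference, this kind of downsampling can be reproduced with a PCL voxel grid filter. The following is a minimal sketch only; the file names are placeholders, and the 0.2 m leaf size corresponds to the finer of the two exported maps:

    #include <pcl/filters/voxel_grid.h>\n#include <pcl/io/pcd_io.h>\n#include <pcl/point_types.h>\n\nint main()\n{\n  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);\n  pcl::PointCloud<pcl::PointXYZ>::Ptr downsampled(new pcl::PointCloud<pcl::PointXYZ>);\n  pcl::io::loadPCDFile(\"pointcloud_map.pcd\", *cloud);  // Placeholder input map.\n\n  pcl::VoxelGrid<pcl::PointXYZ> voxel_grid;\n  voxel_grid.setInputCloud(cloud);\n  voxel_grid.setLeafSize(0.2f, 0.2f, 0.2f);  // 0.2 m voxels; use 0.5f for the coarser map.\n  voxel_grid.filter(*downsampled);\n\n  pcl::io::savePCDFileBinary(\"pointcloud_map_downsampled.pcd\", *downsampled);\n  return 0;\n}\n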
    "},{"location":"datasets/#localization-performance-evaluation-with-autoware","title":"Localization Performance Evaluation with Autoware","text":"

    The report of the performance evaluation of the current Autoware with the collected data can be found in the link below.

    The report was documented on 2024-08-28.

    • https://github.com/orgs/autowarefoundation/discussions/5135
    "},{"location":"datasets/#topic-list","title":"Topic list","text":"

    For collecting the GNSS/INS data, this repository is used.

    For collecting the LiDAR data, nebula repository is used.

    | Topic Name | Message Type |
    | --- | --- |
    | /applanix/lvx_client/autoware_orientation | autoware_sensing_msgs/msg/GnssInsOrientationStamped |
    | /applanix/lvx_client/imu_raw | sensor_msgs/msg/Imu |
    | /localization/twist_estimator/twist_with_covariance | geometry_msgs/msg/TwistWithCovarianceStamped |
    | /applanix/lvx_client/odom | nav_msgs/msg/Odometry |
    | /applanix/lvx_client/gnss/fix | sensor_msgs/msg/NavSatFix |
    | /clock | rosgraph_msgs/msg/Clock |
    | /pandar_points | sensor_msgs/msg/PointCloud2 |
    | /tf_static | tf2_msgs/msg/TFMessage |
    "},{"location":"datasets/#message-explanations","title":"Message Explanations","text":"

    The sensor drivers publish both default ROS 2 message types and their own message types for additional information. The following topics use default ROS 2 message types (a minimal subscriber sketch follows this list):

    • /applanix/lvx_client/imu_raw

      • Gives the output of the INS system in ENU. Because a 9-axis IMU is used, the yaw value represents the heading of the sensor.
    • /applanix/lvx_client/twist_with_covariance

      • Gives the twist output of the sensor.
    • /applanix/lvx_client/odom

      • Gives the position and orientation of the sensor from the starting point of the ROS 2 driver. Implemented with GeographicLib::LocalCartesian.

        This topic is not related to the wheel odometry.

    • /applanix/lvx_client/gnss/fix

      • Gives the latitude, longitude and height values of the sensors.

        The ellipsoidal height above the WGS84 ellipsoid is given as the height value.

    • /pandar_points

      • Gives the point cloud from the LiDAR sensor.
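
    As a minimal sketch of consuming one of these topics during bag playback (the node name and logging below are illustrative assumptions, not part of the dataset tooling), a ROS 2 C++ subscriber for the GNSS fix could look like this:

    #include <rclcpp/rclcpp.hpp>\n#include <sensor_msgs/msg/nav_sat_fix.hpp>\n\n// Prints each GNSS fix received from the dataset's NavSatFix topic.\nint main(int argc, char ** argv)\n{\n  rclcpp::init(argc, argv);\n  auto node = rclcpp::Node::make_shared(\"fix_listener\");\n  auto sub = node->create_subscription<sensor_msgs::msg::NavSatFix>(\n    \"/applanix/lvx_client/gnss/fix\", rclcpp::SensorDataQoS(),\n    [logger = node->get_logger()](const sensor_msgs::msg::NavSatFix & msg) {\n      // The altitude field holds the ellipsoidal height above the WGS84 ellipsoid.\n      RCLCPP_INFO(logger, \"lat: %.7f lon: %.7f h: %.3f\", msg.latitude, msg.longitude, msg.altitude);\n    });\n  rclcpp::spin(node);\n  rclcpp::shutdown();\n  return 0;\n}\n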
    "},{"location":"datasets/#bus-odd-operational-design-domain-datasets","title":"Bus-ODD (Operational Design Domain) datasets","text":""},{"location":"datasets/#leo-drive-isuzu-sensor-data","title":"Leo Drive - ISUZU sensor data","text":"

    This dataset contains data from the Isuzu bus used in the Bus ODD project.

    The dataset includes data from the following sensors:

    • 1 x VLP16
    • 2 x VLP32C
    • 1 x Applanix POS LV 120 GNSS/INS
    • 3 x Lucid Vision Triton 5.4MP cameras (left, right, front)
    • Vehicle status report

    It also contains /tf topic for static transformations between sensors.

    "},{"location":"datasets/#required-message-types","title":"Required message types","text":"

    The GNSS data is available in sensor_msgs/msg/NavSatFix message type.

    The raw Applanix messages are also included, using the applanix_msgs/msg/NavigationPerformanceGsof50 and applanix_msgs/msg/NavigationSolutionGsof49 message types. In order to play back these messages, you need to build and source the applanix_msgs package.

    # Create a workspace and clone the repository\nmkdir -p ~/applanix_ws/src && cd \"$_\"\ngit clone https://github.com/autowarefoundation/applanix.git\ncd ..\n\n# Build the workspace\ncolcon build --symlink-install --packages-select applanix_msgs\n\n# Source the workspace\nsource ~/applanix_ws/install/setup.bash\n\n# Now you can play back the messages\n

    Also make sure to source the Autoware Universe workspace.

    "},{"location":"datasets/#download-instructions","title":"Download instructions","text":"
    # Install awscli\n$ sudo apt update && sudo apt install awscli -y\n\n# This will download the entire dataset to the current directory.\n# (About 10.9GB of data)\n$ aws s3 sync s3://autoware-files/collected_data/2022-08-22_leo_drive_isuzu_bags/ ./2022-08-22_leo_drive_isuzu_bags  --no-sign-request\n\n# Optionally,\n# If you instead want to download a single bag file, you can get a list of the available files with following:\n$ aws s3 ls s3://autoware-files/collected_data/2022-08-22_leo_drive_isuzu_bags/ --no-sign-request\n   PRE all-sensors-bag1_compressed/\n   PRE all-sensors-bag2_compressed/\n   PRE all-sensors-bag3_compressed/\n   PRE all-sensors-bag4_compressed/\n   PRE all-sensors-bag5_compressed/\n   PRE all-sensors-bag6_compressed/\n   PRE driving_20_kmh_2022_06_10-16_01_55_compressed/\n   PRE driving_30_kmh_2022_06_10-15_47_42_compressed/\n\n# Then you can download a single bag file with the following:\naws s3 sync s3://autoware-files/collected_data/2022-08-22_leo_drive_isuzu_bags/all-sensors-bag1_compressed/ ./all-sensors-bag1_compressed  --no-sign-request\n
    "},{"location":"datasets/#autocoreai-lidar-ros-2-bag-file-and-pcap","title":"AutoCore.ai - lidar ROS 2 bag file and pcap","text":"

    This dataset contains pcap files and ROS 2 bag files from an Ouster OS1-64 LiDAR. The pcap file and the ROS 2 bag file were recorded at the same time, with a slight difference in duration.

    Click here to download (~553MB)

    Reference Issue

    "},{"location":"datasets/data-anonymization/","title":"Rosbag2 Anonymizer","text":""},{"location":"datasets/data-anonymization/#rosbag2-anonymizer","title":"Rosbag2 Anonymizer","text":""},{"location":"datasets/data-anonymization/#overview","title":"Overview","text":"

    Autoware provides a tool (autoware_rosbag2_anonymizer) to anonymize ROS 2 bag files. This tool is useful when you want to share your data with the Autoware community but need to preserve the privacy of the data.

    With this tool, you can blur any object (faces, license plates, etc.) in your bag files and obtain a new bag file with the blurred images.

    "},{"location":"datasets/data-anonymization/#installation","title":"Installation","text":""},{"location":"datasets/data-anonymization/#clone-the-repository","title":"Clone the repository","text":"
    git clone https://github.com/autowarefoundation/autoware_rosbag2_anonymizer.git\ncd autoware_rosbag2_anonymizer\n
    "},{"location":"datasets/data-anonymization/#download-the-pretrained-models","title":"Download the pretrained models","text":"
    wget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth\n\nwget https://huggingface.co/ShilongLiu/GroundingDINO/resolve/main/GroundingDINO_SwinB.cfg.py\nwget https://huggingface.co/ShilongLiu/GroundingDINO/resolve/main/groundingdino_swinb_cogcoor.pth\n\nwget https://github.com/autowarefoundation/autoware_rosbag2_anonymizer/releases/download/v0.0.0/yolov8x_anonymizer.pt\nwget https://github.com/autowarefoundation/autoware_rosbag2_anonymizer/releases/download/v0.0.0/yolo_config.yaml\n
    "},{"location":"datasets/data-anonymization/#install-ros-2-mcap-dependencies-if-you-will-use-mcap-files","title":"Install ROS 2 mcap dependencies if you will use mcap files","text":"

    Warning

    Be sure you have installed ROS 2 on your system.

    sudo apt install ros-humble-rosbag2-storage-mcap\n
    "},{"location":"datasets/data-anonymization/#install-autoware_rosbag2_anonymizer-tool","title":"Install autoware_rosbag2_anonymizer tool","text":"

    Before installing the tool, you should update the pip package manager.

    python3 -m pip install pip -U\n

    Then, you can install the tool with the following command.

    python3 -m pip install .\n
    "},{"location":"datasets/data-anonymization/#configuration","title":"Configuration","text":"

    Define prompts in the validation.json file. The tool will use these prompts to detect objects. You can add your prompts as dictionaries under the prompts key. Each dictionary should have two keys:

    • prompt: The prompt that will be used to detect the object. Objects matching this prompt will be blurred in the anonymization process.
    • should_inside: A list of prompts that the object should be inside. If the detected object is not inside one of these prompts, the tool will not blur it.
    {\n\"prompts\": [\n{\n\"prompt\": \"license plate\",\n\"should_inside\": [\"car\", \"bus\", \"...\"]\n},\n{\n\"prompt\": \"human face\",\n\"should_inside\": [\"person\", \"human body\", \"...\"]\n}\n]\n}\n

    You should set your configuration in the configuration files under the config folder according to your usage. The following instructions will guide you through setting up each configuration file.

    • config/anonymize_with_unified_model.yaml
    rosbag:\n  input_bags_folder: \"/path/to/input_bag_folder\" # Path to the input folder which contains ROS 2 bag files\n  output_bags_folder: \"/path/to/output_folder\" # Path to the output ROS 2 bag folder\n  output_save_compressed_image: True # Save images as compressed images (True or False)\n  output_storage_id: \"sqlite3\" # Storage id for the output bag file (`sqlite3` or `mcap`)\n\ngrounding_dino:\n  box_threshold: 0.1 # Threshold for the bounding box (float)\n  text_threshold: 0.1 # Threshold for the text (float)\n  nms_threshold: 0.1 # Threshold for the non-maximum suppression (float)\n\nopen_clip:\n  score_threshold: 0.7 # Validity threshold for the OpenCLIP model (float)\n\nyolo:\n  confidence: 0.15 # Confidence threshold for the YOLOv8 model (float)\n\nbbox_validation:\n  iou_threshold: 0.9 # Threshold for the intersection over union (float), if the intersection over union is greater than this threshold, the object will be selected as inside the validation prompt\n\nblur:\n  kernel_size: 31 # Kernel size for the Gaussian blur (int)\n  sigma_x: 11 # Sigma x for the Gaussian blur (int)\n
    • config/yolo_create_dataset.yaml
    rosbag:\n  input_bags_folder: \"/path/to/input_bag_folder\" # Path to the input ROS 2 bag files folder\n\ndataset:\n  output_dataset_folder: \"/path/to/output/dataset\" # Path to the output dataset folder\n  output_dataset_subsample_coefficient: 25 # Subsample coefficient for the dataset (int)\n\ngrounding_dino:\n  box_threshold: 0.1 # Threshold for the bounding box (float)\n  text_threshold: 0.1 # Threshold for the text (float)\n  nms_threshold: 0.1 # Threshold for the non-maximum suppression (float)\n\nopen_clip:\n  score_threshold: 0.7 # Validity threshold for the OpenCLIP model (float)\n\nbbox_validation:\n  iou_threshold: 0.9 # Threshold for the intersection over union (float), if the intersection over union is greater than this threshold, the object will be selected as inside the validation prompt\n
    • config/yolo_train.yaml
    dataset:\n  input_dataset_yaml: \"path/to/data.yaml\" # Path to the config file of the dataset, which is created in the previous step\n\nyolo:\n  epochs: 100 # Number of epochs for the YOLOv8 model (int)\n  model: \"yolov8x.pt\" # Select the base model for YOLOv8 ('yolov8x.pt', 'yolov8l.pt', 'yolov8m.pt', 'yolov8n.pt')\n
    • config/yolo_anonymize.yaml
    rosbag:\n  input_bag_path: \"/path/to/input_bag/bag.mcap\" # Path to the input ROS 2 bag file with 'mcap' or 'sqlite3' extension\n  output_bag_path: \"/path/to/output_bag_file\" # Path to the output ROS 2 bag folder\n  output_save_compressed_image: True # Save images as compressed images (True or False)\n  output_storage_id: \"sqlite3\" # Storage id for the output bag file (`sqlite3` or `mcap`)\n\nyolo:\n  model: \"path/to/yolo/model\" # Path to the trained YOLOv8 model file (`.pt` extension) (you can download the pre-trained model from releases)\n  config_path: \"path/to/input/data.yaml\" # Path to the config file of the dataset, which is created in the previous step\n  confidence: 0.15 # Confidence threshold for the YOLOv8 model (float)\n\nblur:\n  kernel_size: 31 # Kernel size for the Gaussian blur (int)\n  sigma_x: 11 # Sigma x for the Gaussian blur (int)\n
    "},{"location":"datasets/data-anonymization/#usage","title":"Usage","text":"

    The tool provides two options to anonymize images in ROS 2 bag files.

    Warning

    If your ROS 2 bag file includes custom message types from Autoware or any other packages, you should source their workspaces before running the tool.

    You can source Autoware workspace with the following command.

    source /path/to/your/workspace/install/setup.bash\n
    "},{"location":"datasets/data-anonymization/#option-1-anonymize-with-unified-model","title":"Option 1: Anonymize with Unified Model","text":"

    You provide a single rosbag, and the tool anonymizes the images in it with a unified model. The model is a combination of GroundingDINO, OpenCLIP, YOLOv8 and SegmentAnything. If you don't want to use the pre-trained YOLOv8 model, you can follow the instructions in the second option to train your own YOLOv8 model.

    You should set your configuration in config/anonymize_with_unified_model.yaml file.

    python3 main.py config/anonymize_with_unified_model.yaml --anonymize_with_unified_model\n
    "},{"location":"datasets/data-anonymization/#option-2-anonymize-using-the-yolov8-model-trained-on-a-dataset-created-with-the-unified-model","title":"Option 2: Anonymize Using the YOLOv8 Model Trained on a Dataset Created with the Unified Model","text":""},{"location":"datasets/data-anonymization/#step-1-create-a-dataset","title":"Step 1: Create a Dataset","text":"

    Create an initial dataset with the unified model. You can provide multiple ROS 2 bag files to create a dataset. After running the following command, the tool will create a dataset in YOLO format.

    You should set your configuration in config/yolo_create_dataset.yaml file.

    python3 main.py config/yolo_create_dataset.yaml --yolo_create_dataset\n
    "},{"location":"datasets/data-anonymization/#step-2-manually-label-the-missing-labels","title":"Step 2: Manually Label the Missing Labels","text":"

    The dataset created in the first step has some missing labels, which you should add manually. You can use, for example, the following tools to label the missing items:

    • label-studio
    • Roboflow (You can use the free version)
    "},{"location":"datasets/data-anonymization/#step-3-split-the-dataset","title":"Step 3: Split the Dataset","text":"

    Split the dataset into training and validation sets. Give the path to the dataset folder which is created in the first step.

    autoware-rosbag2-anonymizer-split-dataset /path/to/dataset/folder\n
    "},{"location":"datasets/data-anonymization/#step-4-train-the-yolov8-model","title":"Step 4: Train the YOLOv8 Model","text":"

    Train the YOLOv8 model with the dataset which is created in the first step.

    You should set your configuration in config/yolo_train.yaml file.

    python3 main.py config/yolo_train.yaml --yolo_train\n
    "},{"location":"datasets/data-anonymization/#step-5-anonymize-images-in-ros-2-bag-files","title":"Step 5: Anonymize Images in ROS 2 Bag Files","text":"

    Anonymize images in ROS 2 bag files with the trained YOLOv8 model. If you want to anonymize your ROS 2 bag file with only the YOLOv8 model, use the following command. However, we recommend using the unified model for better results. You can follow Option 1 to run the unified model together with the YOLOv8 model you trained.

    You should set your configuration in config/yolo_anonymize.yaml file.

    python3 main.py config/yolo_anonymize.yaml --yolo_anonymize\n
    "},{"location":"datasets/data-anonymization/#troubleshooting","title":"Troubleshooting","text":"
    • Error 1: torch.OutOfMemoryError: CUDA out of memory
    torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 10.87 GiB of which 1010.88 MiB is free. Including non-PyTorch memory, this process has 8.66 GiB memory in use. Of the allocated memory 8.21 GiB is allocated by PyTorch, and 266.44 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n

    This error occurs when the GPU memory is not enough to run the model. You can set the following environment variable to avoid this error.

    export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True\n
    "},{"location":"datasets/data-anonymization/#share-your-anonymized-data","title":"Share Your Anonymized Data","text":"

    After anonymizing your data, you can share it with the Autoware community. To do so, create an issue and a pull request to the Autoware Documentation repository.

    "},{"location":"datasets/data-anonymization/#citation","title":"Citation","text":"
    @article{liu2023grounding,\ntitle={Grounding dino: Marrying dino with grounded pre-training for open-set object detection},\nauthor={Liu, Shilong and Zeng, Zhaoyang and Ren, Tianhe and Li, Feng and Zhang, Hao and Yang, Jie and Li, Chunyuan and Yang, Jianwei and Su, Hang and Zhu, Jun and others},\njournal={arXiv preprint arXiv:2303.05499},\nyear={2023}\n}\n
    @article{kirillov2023segany,\ntitle={Segment Anything},\nauthor={Kirillov, Alexander and Mintun, Eric and Ravi, Nikhila and Mao, Hanzi and Rolland, Chloe and Gustafson, Laura and Xiao, Tete and Whitehead, Spencer and Berg, Alexander C. and Lo, Wan-Yen and Doll{\\'a}r, Piotr and Girshick, Ross},\njournal={arXiv:2304.02643},\nyear={2023}\n}\n
    @software{ilharco_gabriel_2021_5143773,\nauthor       = {Ilharco, Gabriel and\n                  Wortsman, Mitchell and\n                  Wightman, Ross and\n                  Gordon, Cade and\n                  Carlini, Nicholas and\n                  Taori, Rohan and\n                  Dave, Achal and\n                  Shankar, Vaishaal and\n                  Namkoong, Hongseok and\n                  Miller, John and\n                  Hajishirzi, Hannaneh and\n                  Farhadi, Ali and\n                  Schmidt, Ludwig},\ntitle        = {OpenCLIP},\nmonth        = jul,\nyear         = 2021,\nnote         = {If you use this software, please cite it as below.},\npublisher    = {Zenodo},\nversion      = {0.1},\ndoi          = {10.5281/zenodo.5143773},\nurl          = {https://doi.org/10.5281/zenodo.5143773}\n}\n
    "},{"location":"design/","title":"Autoware's Design","text":""},{"location":"design/#autowares-design","title":"Autoware's Design","text":""},{"location":"design/#architecture","title":"Architecture","text":"

    Core and Universe.

    Autoware provides the runtimes and technology components as open-source software. The runtimes are based on the Robot Operating System (ROS). The technology components are provided by contributors, which include, but are not limited to:

    • Sensing
      • Camera Component
      • LiDAR Component
      • RADAR Component
      • GNSS Component
    • Computing
      • Localization Component
      • Perception Component
      • Planning Component
      • Control Component
      • Logging Component
      • System Monitoring Component
    • Actuation
      • DBW Component
    • Tools
      • Simulator Component
      • Mapping Component
      • Remote Component
      • ML Component
      • Annotation Component
      • Calibration Component
    "},{"location":"design/#concern-assumption-and-limitation","title":"Concern, Assumption, and Limitation","text":"

    The downside of the microautonomy architecture is that the computational performance of end applications is sacrificed due to its data path overhead attributed to functional modularity. In other words, the trade-off characteristic of the microautonomy architecture exists between computational performance and functional modularity. This trade-off problem can be solved technically by introducing real-time capability. This is because autonomous driving systems are not really designed to be real-fast, that is, low-latency computing is nice-to-have but not must-have. The must-have feature for autonomous driving systems is that the latency of computing is predictable, that is, the systems are real-time. As a whole, we can compromise computational performance to an extent that is predictable enough to meet the given timing constraints of autonomous driving systems, often referred to as deadlines of computation.

    "},{"location":"design/#design","title":"Design","text":"

    Warning

    Under Construction

    "},{"location":"design/#autoware-concepts","title":"Autoware concepts","text":"

    The Autoware concepts page describes the design philosophy of Autoware. Readers (service providers and all Autoware users) will learn the basic concepts underlying Autoware development, such as microautonomy and the Core/Universe architecture.

    "},{"location":"design/#autoware-architecture","title":"Autoware architecture","text":"

    The Autoware architecture page describes an overview of each module that makes up Autoware. Readers (all Autoware users) will gain a high-level picture of how each module that composes Autoware works.

    "},{"location":"design/#autoware-interfaces","title":"Autoware interfaces","text":"

    The Autoware interfaces page describes in detail the interface of each module that makes up Autoware. Readers (intermediate developers) will learn how to add new functionality to Autoware and how to integrate their own modules with Autoware.

    "},{"location":"design/#configuration-management","title":"Configuration management","text":""},{"location":"design/#conclusion","title":"Conclusion","text":""},{"location":"design/autoware-architecture/","title":"Architecture overview","text":""},{"location":"design/autoware-architecture/#architecture-overview","title":"Architecture overview","text":"

    This page describes the architecture of Autoware.

    "},{"location":"design/autoware-architecture/#introduction","title":"Introduction","text":"

    The current Autoware is defined to be a layered architecture that clarifies each module's role and simplifies the interface between them. By doing so:

    • Autoware's internal processing becomes more transparent.
    • Collaborative development is made easier because of the reduced interdependency between modules.
    • Users can easily replace an existing module (e.g. localization) with their own software component by simply wrapping their software to fit in with Autoware's interface.

    Note that the initial focus of this architecture design was solely on driving capability, and so the following features were left as future work:

    • Fail safe
    • Human Machine Interface
    • Real-time processing
    • Redundant system
    • State monitoring system
    "},{"location":"design/autoware-architecture/#high-level-architecture-design","title":"High-level architecture design","text":"

    Autoware's architecture consists of the following seven stacks. Each linked page contains a more detailed set of requirements and use cases specific to that stack:

    • Sensing design
    • Map design
    • Localization design
    • Perception design
    • Planning design
    • Control design
    • Vehicle Interface design
    "},{"location":"design/autoware-architecture/#node-diagram","title":"Node diagram","text":"

    A diagram showing Autoware's nodes in the default configuration can be found on the Node diagram page. Detailed documents for each node are available in the Autoware Universe docs.

    Note that Autoware configurations are scalable / selectable and will vary depending on the environment and required use cases.

    "},{"location":"design/autoware-architecture/#references","title":"References","text":"
    • The architecture presentation given to the AWF Technical Steering Committee, March 2020
    "},{"location":"design/autoware-architecture/control/","title":"Control component design","text":""},{"location":"design/autoware-architecture/control/#control-component-design","title":"Control component design","text":""},{"location":"design/autoware-architecture/control/#abstract","title":"Abstract","text":"

    This document presents the design concept of the Control Component. The content is as follows:

    • Autoware Control Design
      • Outlining the policy for Autoware's control, which deals with only general information for autonomous driving systems and provides generic control commands to the vehicle.
    • Vehicle Adaptation Design
      • Describing the policy for vehicle adaptation, which utilizes adapter mechanisms to standardize the characteristics of the vehicle's drive system and integrate it with Autoware.
    • Control Feature Design
      • Demonstrating the features provided by Autoware's control.
      • Presenting the approach towards the functions installed in the vehicle such as ABS.
    "},{"location":"design/autoware-architecture/control/#autoware-control-design","title":"Autoware Control Design","text":"

    The Control Component generates the control signal to which the Vehicle Component subscribes. The generated control signals are computed based on the reference trajectories from the Planning Component.

    The Control Component consists of two modules. The trajectory_follower module generates a vehicle control command to follow the reference trajectory received from the planning module. The command includes, for example, the desired steering angle and target speed. The vehicle_command_gate is responsible for filtering the control command to prevent abnormal values and then sending it to the vehicle. This gate also allows switching between multiple sources such as the MRM (minimal risk maneuver) module or some remote control module, in addition to the trajectory follower.

    The Autoware control system is designed as a platform for automated driving systems that can be compatible with a diverse range of vehicles.

    The control process in Autoware uses only general information (such as target acceleration and deceleration); no vehicle-specific information (such as brake pressure) is used. Hence, it can be adjusted independently of the vehicle's drive interface, enabling easy integration and performance tuning.

    Furthermore, significant differences that affect vehicle motion constraints, such as two-wheel steering or four-wheel steering, are addressed by switching the control vehicle model, achieving control specialized for each characteristic.

    Autoware's control module outputs the necessary information to control the vehicle as a substitute for a human driver. For example, the control command from the control module looks like the following:

    - Target steering angle\n- Target steering torque\n- Target speed\n- Target acceleration\n

    Note that vehicle-specific values such as pedal positions and low-level information such as individual wheel rotation speeds are excluded from the command.

    "},{"location":"design/autoware-architecture/control/#vehicle-adaptation-design","title":"Vehicle Adaptation Design","text":""},{"location":"design/autoware-architecture/control/#vehicle-interface-adapter","title":"Vehicle interface adapter","text":"

    Autoware is designed to be an autonomous driving platform able to accommodate vehicles with various drivetrain types.

    This is an explanation of how Autoware handles the standardization of systems with different vehicle drivetrains. The interfaces for vehicle drivetrains are diverse, including steering angle, steering angular velocity, steering torque, speed, accel/brake pedals, and brake pressure. To accommodate these differences, Autoware adds an adapter module between the control component and the vehicle interface. This module performs the conversion between the proprietary message types used by the vehicle (such as brake pressure) and the generic types used by Autoware (such as desired acceleration). By providing this conversion information, the differences in vehicle drivetrains can be accommodated.

    If the information is not known in advance, an automatic calibration tool can be used. Calibration will occur within limited degrees of freedom, generating the information necessary for the drivetrain conversion automatically.
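
    As an illustrative sketch only (the table values and function name below are hypothetical, not Autoware's actual implementation), such an adapter could map a desired acceleration to an accel/brake pedal value through a calibrated lookup table:

    #include <vector>\n\n// Hypothetical calibration table entry: desired acceleration [m/s^2] -> pedal value in [-1 (full brake), 1 (full accel)].\nstruct CalibrationPoint { double acceleration; double pedal; };\n\n// Linearly interpolates the pedal value; the table must be sorted by acceleration.\ndouble accelerationToPedal(const std::vector<CalibrationPoint> & table, double target_acceleration)\n{\n  if (target_acceleration <= table.front().acceleration) return table.front().pedal;\n  if (target_acceleration >= table.back().acceleration) return table.back().pedal;\n  for (size_t i = 1; i < table.size(); ++i) {\n    if (target_acceleration <= table[i].acceleration) {\n      const double ratio = (target_acceleration - table[i - 1].acceleration) /\n        (table[i].acceleration - table[i - 1].acceleration);\n      return table[i - 1].pedal + ratio * (table[i].pedal - table[i - 1].pedal);\n    }\n  }\n  return table.back().pedal;\n}\n\nint main()\n{\n  const std::vector<CalibrationPoint> table = {{-3.0, -1.0}, {0.0, 0.0}, {2.0, 1.0}};\n  const double pedal = accelerationToPedal(table, 0.8);  // 0.4 with this table.\n  (void)pedal;\n  return 0;\n}\n

    In practice the conversion is usually a 2D lookup table combined with PID feedback, as noted in the feature table in the Control Feature Design section below.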

    This configuration is summarized in the following diagram.

    "},{"location":"design/autoware-architecture/control/#examples-of-several-vehicle-interfaces","title":"Examples of several vehicle interfaces","text":"

    These are examples of the drivetrain types found in several vehicle interfaces.

    | Vehicle | Lateral interface | Longitudinal interface | Note |
    | --- | --- | --- | --- |
    | Lexus | Steering angle | Accel/brake pedal position | Acceleration lookup table conversion for longitudinal |
    | JPN TAXI | Steering angle | Accel/brake pedal position | Acceleration lookup table conversion for longitudinal |
    | GSM8 | Steering EPS voltage | Acceleration motor voltage, Deceleration brake hydraulic pressure | Lookup table and PID conversion for lateral and longitudinal |
    | YMC Golfcart | Steering angle | Velocity | |
    | Logiee | Yaw rate | Velocity | |
    | F1 TENTH | Steering angle | Motor RPM | Interface code |
    "},{"location":"design/autoware-architecture/control/#control-feature-design","title":"Control Feature Design","text":"

    The following lists the features provided by Autoware's Control/Vehicle component, as well as the conditions and assumptions required to utilize them effectively.

    Proper operation within the ODD is limited by factors such as whether the functions are enabled, delay time, calibration accuracy and degradation rate, and sensor accuracy.

    | Feature | Description | Requirements/Assumptions | Note | Limitation for now |
    | --- | --- | --- | --- | --- |
    | Lateral Control | Control the drivetrain system related to lateral vehicle motion | | Trying to increase the number of vehicle types that can be supported in the future. | Only front-steering type is supported. |
    | Longitudinal Control | Control the drivetrain system related to longitudinal vehicle motion | | | |
    | Slope Compensation | Supports precise vehicle motion control on slopes | Gradient information can be obtained from maps or sensors attached to the chassis | If gradient information is not available, the gradient is estimated from the vehicle's pitch angle. | |
    | Delay Compensation | Controls the drivetrain system appropriately in the presence of time delays | The drivetrain delay information is provided in advance | If there is no delay information, the drivetrain delay is estimated automatically (automatic calibration). However, the effect of delay cannot be completely eliminated, especially in scenarios with sudden changes in speed. | Only fixed delay times can be set for longitudinal and lateral drivetrain systems separately. It does not accommodate different delay times for the accelerator and brake. |
    | Drivetrain IF Conversion (Lateral Control) | Converts the drivetrain-specific information of the vehicle into the drivetrain information used by Autoware (e.g., target steering angular velocity → steering torque) | The conversion information is provided in advance | If there is no conversion information, the conversion map is estimated automatically (automatic calibration). | The degree of freedom for conversion is limited (2D lookup table + PID FB). |
    | Drivetrain IF Conversion (Longitudinal Control) | Converts the drivetrain-specific information of the vehicle into the drivetrain information used by Autoware (e.g., target acceleration → accelerator/brake pedal value) | The conversion information is provided in advance | If there is no conversion information, the conversion map is estimated automatically (automatic calibration). | The degree of freedom for conversion is limited (2D lookup table + PID FB). |
    | Automatic Calibration | Automatically estimates and applies values such as drivetrain IF conversion map and delay time. | The drivetrain status can be obtained (must) | | |
    | Anomaly Detection | Notifies when there is a discrepancy in the calibration or unexpected drivetrain behavior | The drivetrain status can be obtained (must) | | |
    | Steering Zero Point Correction | Corrects the midpoint of the steering to achieve appropriate steering control | The drivetrain status can be obtained (must) | | |
    | Steering Deadzone Correction | Corrects the deadzone of the steering to achieve appropriate steering control | The steering deadzone parameter is provided in advance | If the parameter is unknown, the deadzone parameter is estimated from driving information | Not available now |
    | Steering Deadzone Estimation | Dynamically estimates the steering deadzone from driving data | | | Not available now |
    | Weight Compensation | Performs appropriate vehicle control according to weight | Weight information can be obtained from sensors | If there is no weight sensor, estimate the weight from driving information. | Currently not available |
    | Weight Estimation | Dynamically estimates weight from driving data | | | Currently not available |

    The list above does not cover wheel control systems such as ABS commonly used in vehicles. Regarding these features, the following considerations are taken into account.

    "},{"location":"design/autoware-architecture/control/#integration-with-vehicle-side-functions","title":"Integration with vehicle-side functions","text":"

    ABS (Anti-lock Brake System) and ESC (Electronic Stability Control) are two functions that may be pre-installed on a vehicle, directly impacting its controllability. The control modules of Autoware assume that both ABS and ESC are installed on the vehicle, and their absence may cause unreliable control depending on the target ODD. For example, with low-velocity driving in a controlled environment, these functions are not necessary.

    Also, note that this statement does not negate the development of ABS functionality in autonomous driving systems.

    "},{"location":"design/autoware-architecture/control/#autoware-capabilities-and-vehicle-requirements","title":"Autoware Capabilities and Vehicle Requirements","text":"

    As an alternative to human driving, autonomous driving systems essentially aim to handle tasks that humans can perform. This includes not only controlling the steering wheel, accel, and brake, but also automatically detecting issues such as poor brake response or a misaligned steering angle. However, this is a trade-off, as better vehicle performance will lead to superior system behavior, ultimately affecting the design of ODD.

    On the other hand, for tasks that are not typically anticipated or cannot be handled by a human driver, processing in the vehicle ECU is expected. Examples of such scenarios include cases where the brake response is clearly delayed or when the vehicle rotates due to a single-side tire slipping. These tasks are typically handled by ABS or ESC.

    "},{"location":"design/autoware-architecture/localization/","title":"Index","text":"

    LOCALIZATION COMPONENT DESIGN DOC

    "},{"location":"design/autoware-architecture/localization/#abstract","title":"Abstract","text":""},{"location":"design/autoware-architecture/localization/#1-requirements","title":"1. Requirements","text":"

    Localization aims to estimate vehicle pose, velocity, and acceleration.

    Goals:

    • Propose a system that can estimate vehicle pose, velocity, and acceleration for as long as possible.
    • Propose a system that can diagnose the stability of estimation and send a warning message to the error-monitoring system if the estimation result is unreliable.
    • Design a vehicle localization function that can work with various sensor configurations.

    Non-goals:

    • This design document does not aim to develop a localization system that
      • is infallible in all environments
      • works outside of the pre-defined ODD (Operational Design Domain)
      • has better performance than is required for autonomous driving
    "},{"location":"design/autoware-architecture/localization/#2-sensor-configuration-examples","title":"2. Sensor Configuration Examples","text":"

    This section shows example sensor configurations and their expected performances. Each sensor has its own advantages and disadvantages, but overall performance can be improved by fusing multiple sensors.

    "},{"location":"design/autoware-architecture/localization/#3d-lidar-pointcloud-map","title":"3D-LiDAR + PointCloud Map","text":""},{"location":"design/autoware-architecture/localization/#expected-situation","title":"Expected situation","text":"
    • The vehicle is located in a structure-rich environment, such as an urban area
    "},{"location":"design/autoware-architecture/localization/#situations-that-can-make-the-system-unstable","title":"Situations that can make the system unstable","text":"
    • The vehicle is placed in a structure-less environment, such as a rural landscape, highway, or tunnel
    • Environmental changes have occurred since the map was created, such as snow cover or the construction/destruction of buildings.
    • Surrounding objects are occluded
    • The car is surrounded by objects undetectable by LiDAR, e.g., glass windows, reflections, or absorption (dark objects)
    • The environment contains laser beams at the same frequency as the car's LiDAR sensor(s)
    "},{"location":"design/autoware-architecture/localization/#functionality","title":"Functionality","text":"
    • The system can estimate the vehicle location on the point cloud map with an error of ~10 cm.
    • The system is operable at night.
    "},{"location":"design/autoware-architecture/localization/#3d-lidar-or-camera-vector-map","title":"3D-LiDAR or Camera + Vector Map","text":""},{"location":"design/autoware-architecture/localization/#expected-situation_1","title":"Expected situation","text":"
    • Road with clear white lines and loose curvatures, such as a highway or an ordinary local road.
    "},{"location":"design/autoware-architecture/localization/#situations-that-can-make-the-system-unstable_1","title":"Situations that can make the system unstable","text":"
    • White lines are scratchy or covered by rain or snow
    • Tight curvature such as intersections
    • Large reflection change of the road surface caused by rain or paint
    "},{"location":"design/autoware-architecture/localization/#functionalities","title":"Functionalities","text":"
    • Correct vehicle positions along the lateral direction.
    • Pose correction along the longitudinal direction can be inaccurate, but this can be resolved by fusing with GNSS.
    "},{"location":"design/autoware-architecture/localization/#gnss","title":"GNSS","text":""},{"location":"design/autoware-architecture/localization/#expected-situation_2","title":"Expected situation","text":"
    • The vehicle is placed in an open environment with few to no surrounding objects, such as a rural landscape.
    "},{"location":"design/autoware-architecture/localization/#situation-that-can-make-the-system-unstable","title":"Situation that can make the system unstable","text":"
    • GNSS signals are blocked by surrounding objects, e.g., tunnels or buildings.
    "},{"location":"design/autoware-architecture/localization/#functionality_1","title":"Functionality","text":"
    • The system can estimate vehicle position in the world coordinate within an error of ~10m.
    • With an RTK-GNSS (Real Time Kinematic Global Navigation Satellite System) attached, the accuracy can be improved to ~10cm.
    • A system with this configuration can work without environment maps (both point cloud and vector map types).
    "},{"location":"design/autoware-architecture/localization/#camera-visual-odometry-visual-slam","title":"Camera (Visual Odometry, Visual SLAM)","text":""},{"location":"design/autoware-architecture/localization/#expected-situation_3","title":"Expected situation","text":"
    • The vehicle is placed in an environment with rich visual features, such as an urban area.
    "},{"location":"design/autoware-architecture/localization/#situations-that-can-make-the-system-unstable_2","title":"Situations that can make the system unstable","text":"
    • The vehicle is placed in a texture-less environment.
    • The vehicle is surrounded by other objects.
    • The camera observes significant illumination changes, such as those caused by sunshine, headlights from other vehicles or when approaching the exit of a tunnel.
    • The vehicle is placed in a dark environment.
    "},{"location":"design/autoware-architecture/localization/#functionality_2","title":"Functionality","text":"
    • The system can estimate odometry by tracking visual features.
    "},{"location":"design/autoware-architecture/localization/#wheel-speed-sensor","title":"Wheel speed sensor","text":""},{"location":"design/autoware-architecture/localization/#expected-situation_4","title":"Expected situation","text":"
    • The vehicle is running on a flat and smooth road.
    "},{"location":"design/autoware-architecture/localization/#situations-that-can-make-the-system-unstable_3","title":"Situations that can make the system unstable","text":"
    • The vehicle is running on a slippery or bumpy road, which can cause incorrect observations of wheel speed.
    "},{"location":"design/autoware-architecture/localization/#functionality_3","title":"Functionality","text":"
    • The system can acquire the vehicle velocity and estimate distance traveled.
    "},{"location":"design/autoware-architecture/localization/#imu","title":"IMU","text":""},{"location":"design/autoware-architecture/localization/#expected-environments","title":"Expected environments","text":"
    • Flat, smooth roads
    "},{"location":"design/autoware-architecture/localization/#situations-that-can-make-the-system-unstable_4","title":"Situations that can make the system unstable","text":"
    • IMUs have a bias1 that is dependent on the surrounding temperature, and can cause incorrect sensor observation or odometry drift.
    "},{"location":"design/autoware-architecture/localization/#functionality_4","title":"Functionality","text":"
    • The system can observe acceleration and angular velocity.
    • By integrating these observations, the system can estimate the local pose change and realize dead-reckoning
    "},{"location":"design/autoware-architecture/localization/#geomagnetic-sensor","title":"Geomagnetic sensor","text":""},{"location":"design/autoware-architecture/localization/#expected-situation_5","title":"Expected situation","text":"
    • The vehicle is placed in an environment with low magnetic noise
    "},{"location":"design/autoware-architecture/localization/#situations-that-can-make-the-system-unstable_5","title":"Situations that can make the system unstable","text":"
    • The vehicle is placed in an environment with high magnetic noise, such as one containing buildings or structures with reinforced steel or other materials that generate electromagnetic waves.
    "},{"location":"design/autoware-architecture/localization/#functionality_5","title":"Functionality","text":"
    • The system can estimate the vehicle's direction in the world coordinate system.
    "},{"location":"design/autoware-architecture/localization/#magnetic-markers","title":"Magnetic markers","text":""},{"location":"design/autoware-architecture/localization/#expected-situation_6","title":"Expected situation","text":"
    • The car is placed in an environment with magnetic markers installed.
    "},{"location":"design/autoware-architecture/localization/#situations-where-the-system-becomes-unstable","title":"Situations where the system becomes unstable","text":"
    • The markers are not maintained.
    "},{"location":"design/autoware-architecture/localization/#functionality_6","title":"Functionality","text":"
    • Vehicle location can be obtained on the world coordinate by detecting the magnetic markers.
    • The system can work even if the road is covered with snow.
    "},{"location":"design/autoware-architecture/localization/#3-requirements","title":"3. Requirements","text":"
    • By implementing different modules, various sensor configurations and algorithms can be used.
    • The localization system can start pose estimation from an ambiguous initial location.
    • The system can produce a reliable initial location estimation.
    • The system can manage the state of the initial location estimation (uninitialized, initializable, or non-initializable) and can report to the error monitor.
    "},{"location":"design/autoware-architecture/localization/#4-architecture","title":"4. Architecture","text":""},{"location":"design/autoware-architecture/localization/#abstract_1","title":"Abstract","text":"

    Two architectures are defined, \"Required\" and \"Recommended\". However, the \"Required\" architecture only contains the inputs and outputs necessary to accept various localization algorithms. To improve the reusability of each module, the required components are defined in the \"Recommended\" architecture section along with a more detailed explanation.

    "},{"location":"design/autoware-architecture/localization/#required-architecture","title":"Required Architecture","text":""},{"location":"design/autoware-architecture/localization/#input","title":"Input","text":"
    • Sensor message
      • e.g., LiDAR, camera, GNSS, IMU, CAN Bus, etc.
      • Data types should be ROS primitives for reusability
    • Map data
      • e.g., point cloud map, lanelet2 map, feature map, etc.
      • The map format should be chosen based on use case and sensor configuration
      • Note that map data is not required for some specific cases (e.g., GNSS-only localization)
    • tf, static_tf
      • map frame
      • base_link frame
    "},{"location":"design/autoware-architecture/localization/#output","title":"Output","text":"
    • Pose with covariance stamped (see the sketch after this list)
      • Vehicle pose, covariance, and timestamp on the map coordinate
      • 50Hz~ frequency (depending on the requirements of the Planning and Control components)
    • Twist with covariance stamped
      • Vehicle velocity, covariance, and timestamp on the base_link coordinate
      • 50Hz~ frequency
    • Accel with covariance stamped
      • Acceleration, covariance, and timestamp on the base_link coordinate
      • 50Hz~ frequency
    • Diagnostics
      • Diagnostics information that indicates if the localization module works properly
    • tf
      • tf of map to base_link
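
    As a minimal sketch of this output interface (the node name, topic name, and placeholder values are illustrative assumptions, not a prescribed implementation), a localization module could publish the pose output and broadcast the map-to-base_link tf at 50 Hz like this:

    #include <chrono>\n\n#include <geometry_msgs/msg/pose_with_covariance_stamped.hpp>\n#include <geometry_msgs/msg/transform_stamped.hpp>\n#include <rclcpp/rclcpp.hpp>\n#include <tf2_ros/transform_broadcaster.h>\n\nint main(int argc, char ** argv)\n{\n  rclcpp::init(argc, argv);\n  auto node = rclcpp::Node::make_shared(\"minimal_localizer\");\n  auto pose_pub = node->create_publisher<geometry_msgs::msg::PoseWithCovarianceStamped>(\"pose_with_covariance\", 10);\n  tf2_ros::TransformBroadcaster tf_broadcaster(node);\n\n  // 20 ms period = 50 Hz, matching the frequency expected by Planning and Control.\n  auto timer = node->create_wall_timer(std::chrono::milliseconds(20), [&]() {\n    geometry_msgs::msg::PoseWithCovarianceStamped pose;\n    pose.header.stamp = node->now();\n    pose.header.frame_id = \"map\";  // Pose is expressed on the map coordinate.\n    // pose.pose.pose and pose.pose.covariance would be filled by the fusion filter.\n    pose_pub->publish(pose);\n\n    geometry_msgs::msg::TransformStamped tf;\n    tf.header = pose.header;\n    tf.child_frame_id = \"base_link\";\n    tf.transform.rotation.w = 1.0;  // Identity transform as a placeholder.\n    tf_broadcaster.sendTransform(tf);\n  });\n\n  rclcpp::spin(node);\n  rclcpp::shutdown();\n  return 0;\n}\n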
    "},{"location":"design/autoware-architecture/localization/#recommended-architecture","title":"Recommended Architecture","text":""},{"location":"design/autoware-architecture/localization/#pose-estimator","title":"Pose Estimator","text":"
    • Estimates the vehicle pose on the map coordinate by matching external sensor observation to the map
    • Provides the obtained pose and its covariance to PoseTwistFusionFilter
    "},{"location":"design/autoware-architecture/localization/#twist-accel-estimator","title":"Twist-Accel Estimator","text":"
    • Produces the vehicle velocity, angular velocity, acceleration, angular acceleration, and their covariances
      • It is possible to create a single module for both twist and acceleration or to create two separate modules - the choice of architecture is up to the developer
    • The twist estimator produces velocity and angular velocity from internal sensor observation
    • The accel estimator produces acceleration and angular acceleration from internal sensor observations
    "},{"location":"design/autoware-architecture/localization/#kinematics-fusion-filter","title":"Kinematics Fusion Filter","text":"
    • Produces the likeliest pose, velocity, acceleration, and their covariances, computed by fusing two kinds of information:
      • The pose obtained from the pose estimator.
      • The velocity and acceleration obtained from the twist-accel estimator
    • Produces tf of map to base_link according to the pose estimation result
    "},{"location":"design/autoware-architecture/localization/#localization-diagnostics","title":"Localization Diagnostics","text":"
    • Monitors and guarantees the stability and reliability of pose estimation by fusing information obtained from multiple localization modules
    • Reports error status to the error monitor
    "},{"location":"design/autoware-architecture/localization/#tf-tree","title":"TF tree","text":"frame meaning earth ECEF (Earth Centered Earth Fixed\uff09 map Origin of the map coordinate (ex. MGRS origin) viewer User-defined frame for rviz base_link Reference pose of the ego-vehicle (projection of the rear-axle center onto the ground surface) sensor Reference pose of each sensor

    Developers can optionally add other frames such as odom or base_footprint as long as the tf structure above is maintained.

    "},{"location":"design/autoware-architecture/localization/#the-localization-modules-ideal-functionality","title":"The localization module's ideal functionality","text":"
    • The localization module should provide pose, velocity, and acceleration for control, planning, and perception.
    • Latency and stagger should be sufficiently small or adjustable such that the estimated values can be used for control within the ODD (Operational Design Domain).
    • The localization module should produce the pose on a fixed coordinate frame.
    • Sensors should be independent of each other so that they can be easily replaced.
    • The localization module should provide a status indicating whether or not the autonomous vehicle can operate with the self-contained function or map information.
    • Tools or manuals should describe how to set proper parameters for the localization module
    • Valid calibration parameters should be provided to align different frame or pose coordinates and sensor timestamps.
    "},{"location":"design/autoware-architecture/localization/#kpi","title":"KPI","text":"

    To maintain sufficient pose estimation performance for safe operation, the following metrics are considered:

    • Safety
      • The distance traveled within the ODD where pose estimation met the required accuracy, divided by the overall distance traveled within the ODD, as a percentage.
      • The anomaly detection rate for situations where the localization module cannot estimate pose within the ODD
      • The accuracy of detecting when the vehicle goes outside of the ODD, as a percentage.
    • Computational load
    • Latency
    "},{"location":"design/autoware-architecture/localization/#5-interface-and-data-structure","title":"5. Interface and Data Structure","text":""},{"location":"design/autoware-architecture/localization/#6-concerns-assumptions-and-limitations","title":"6. Concerns, Assumptions, and Limitations","text":""},{"location":"design/autoware-architecture/localization/#prerequisites-of-sensors-and-inputs","title":"Prerequisites of sensors and inputs","text":""},{"location":"design/autoware-architecture/localization/#sensor-prerequisites","title":"Sensor prerequisites","text":"
    • Input data is not defective.
      • Internal sensor observation such as IMU continuously keeps the proper frequency.
    • Input data has correct and exact time stamps.
      • Estimated poses can be inaccurate or unstable if the timestamps are not exact.
    • Sensors are correctly mounted with exact positioning and accessible from TF.
      • If the sensor positions are inaccurate, estimation results may be incorrect or unstable.
      • A sensor calibration framework is required to properly obtain the sensor positions.
    "},{"location":"design/autoware-architecture/localization/#map-prerequisites","title":"Map prerequisites","text":"
    • Sufficient information is contained within the map.
      • Pose estimation might be unstable if there is insufficient information in the map.
      • A testing framework is necessary to check if the map has adequate information for pose estimation.
    • Map does not differ greatly from the actual environment.
      • Pose estimation might be unstable if the actual environment has different objects from the map.
      • Maps need updates according to new objects and seasonal changes.
    • Maps must be aligned to a uniform coordinate, or an alignment framework is in place.
      • If multiple maps with different coordinate systems are used, the misalignment between them can affect the localization performance.
    "},{"location":"design/autoware-architecture/localization/#computational-resources","title":"Computational resources","text":"
    • Sufficient computational resources should be provided to maintain accuracy and computation speed.
    1. For more details about bias, refer to the VectorNav IMU specifications page.

    "},{"location":"design/autoware-architecture/map/","title":"Map component design","text":""},{"location":"design/autoware-architecture/map/#map-component-design","title":"Map component design","text":""},{"location":"design/autoware-architecture/map/#1-overview","title":"1. Overview","text":"

    Autoware relies on high-definition point cloud maps and vector maps of the driving environment to perform various tasks such as localization, route planning, traffic light detection, and predicting the trajectories of pedestrians and other vehicles.

    This document describes the design of the map component of Autoware, including its requirements, architecture design, features, data formats, and the interface used to distribute map information to the rest of the autonomous driving stack.

    "},{"location":"design/autoware-architecture/map/#2-requirements","title":"2. Requirements","text":"

    The map component should provide two types of information to the rest of the stack:

    • Semantic information about roads as a vector map
    • Geometric information about the environment as a point cloud map (optional)

    A vector map contains highly accurate information about a road network, lane geometry, and traffic lights. It is required for route planning, traffic light detection, and predicting the trajectories of other vehicles and pedestrians.

    A 3D point cloud map is primarily used for LiDAR-based localization and partly for perception in Autoware. In order to determine the current position and orientation of the vehicle, a live scan captured from one or more LiDAR units is matched against a pre-generated 3D point cloud map. Therefore, an accurate point cloud map is crucial for good localization results. However, if the vehicle has an alternative localization method with sufficient accuracy, for example camera-based localization, a point cloud map may not be required to use Autoware.

    In addition to the above two types of maps, Autoware also requires a supplemental file specifying the geodetic coordinate system of the map.

    "},{"location":"design/autoware-architecture/map/#3-architecture","title":"3. Architecture","text":"

    This diagram describes the high-level architecture of the Map component in Autoware.

    The Map component consists of the following sub-components:

    • Point Cloud Map Loading: Load and publish point cloud map
    • Vector Map Loading: Load and publish vector map
    • Projection Loading: Load and publish projection information for conversion between local coordinate (x, y, z) and geodetic coordinate (latitude, longitude, altitude)
    "},{"location":"design/autoware-architecture/map/#4-component-interface","title":"4. Component interface","text":""},{"location":"design/autoware-architecture/map/#input-to-the-map-component","title":"Input to the map component","text":"
    • From file system
      • Point cloud map and its metadata file
      • Vector map
      • Projection information
    "},{"location":"design/autoware-architecture/map/#output-from-the-map-component","title":"Output from the map component","text":"
    • To Sensing
      • Projection information: Used to convert GNSS data from geodetic coordinate system to local coordinate system
    • To Localization
      • Point cloud map: Used for LiDAR-based localization
      • Vector map: Used for localization methods based on road markings, etc.
    • To Perception
      • Point cloud map: Used for obstacle segmentation by comparing LiDAR and point cloud map
      • Vector map: Used for vehicle trajectory prediction
    • To Planning
      • Vector map: Used for behavior planning
    • To API layer
      • Projection information: Used to convert localization results from local coordinate system to geodetic coordinate system
    "},{"location":"design/autoware-architecture/map/#5-map-specification","title":"5. Map Specification","text":""},{"location":"design/autoware-architecture/map/#point-cloud-map","title":"Point Cloud Map","text":"

    The point cloud map must be supplied as a file with the following requirements:

    • The point cloud map must be projected on the same coordinate system defined in map_projection_loader in order to be consistent with the lanelet2 map and other packages that convert between local and geodetic coordinates. For more information, please refer to the readme of map_projection_loader.
    • It must be in the PCD (Point Cloud Data) file format, but can be a single PCD file or divided into multiple PCD files.
    • Each point in the map must contain X, Y, and Z coordinates.
    • An intensity or RGB value for each point may be optionally included.
    • It must cover the entire operational area of the vehicle. It is also recommended to include an additional buffer zone according to the detection range of sensors attached to the vehicle.
    • Its resolution should be at least 0.2 m to yield reliable localization results.
    • It can be in either local or global coordinates, but must be in global coordinates (georeferenced) to use GNSS data for localization.

    For more details on divided map format, please refer to the readme of map_loader in Autoware Universe.
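
    As a concrete illustration, a PCD file that satisfies the above requirements might begin like the following sketch. The point values are arbitrary placeholders; only the presence of the x, y, z (and optional intensity) fields matters here.

    ```
    # Sketch of a PCD header with per-point intensity (values are illustrative only)
    VERSION 0.7
    FIELDS x y z intensity
    SIZE 4 4 4 4
    TYPE F F F F
    COUNT 1 1 1 1
    WIDTH 3
    HEIGHT 1
    VIEWPOINT 0 0 0 1 0 0 0
    POINTS 3
    DATA ascii
    81220.15 49943.72 41.05 12.0
    81220.21 49943.70 41.07 15.0
    81220.30 49943.69 41.10 9.0
    ```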

    Note

    Three global coordinate systems are currently supported by Autoware: the Military Grid Reference System (MGRS), Universal Transverse Mercator (UTM), and the Japan Rectangular Coordinate System. However, MGRS is the preferred coordinate system for georeferenced maps. In a map with the MGRS coordinate system, the X and Y coordinates of each point represent the point's location within its 100,000-meter grid square, while the Z coordinate represents the point's elevation.

    "},{"location":"design/autoware-architecture/map/#vector-map","title":"Vector Map","text":"

    The vector map must be supplied as a file with the following requirements:

    • It must be in Lanelet2 format, with additional modifications required by Autoware.
    • It must contain the shape and position information of lanes, traffic lights, stop lines, crosswalks, parking spaces, and parking lots.
    • Except at the beginning or end of a road, each lanelet in the map must be correctly connected to its predecessor, successors, left neighbor, and right neighbor.
    • Each lanelet in the map must contain traffic rule information including its speed limit, right of way, traffic direction, associated traffic lights, stop lines, and traffic signs.
    • It must cover the entire operational area of the vehicle.

    For detailed specifications on Vector Map creation, please refer to Vector Map Creation Requirement Specification document.

    "},{"location":"design/autoware-architecture/map/#projection-information","title":"Projection Information","text":"

    The projection information must be supplied as a file with the following requirements:

    • It must be in YAML format and provided to map_projection_loader in the current Autoware Universe implementation.
    • The file must contain the following information:
      • The name of the projection method used to convert between local and global coordinates
      • The parameters of the projection method (depending on the projection method)

    For further information, please refer to the readme of map_projection_loader in Autoware Universe.
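
    As a rough illustration, a projection information file might look like the sketch below. The field names (projector_type, vertical_datum, map_origin) are assumptions for illustration only; the authoritative schema is defined in the map_projection_loader readme for your Autoware version.

    ```yaml
    # map_projector_info.yaml -- hypothetical sketch of a projection information file.
    # Field names are illustrative assumptions; check the map_projection_loader readme
    # for the schema actually used by your Autoware version.
    projector_type: LocalCartesianUTM   # name of the projection method
    vertical_datum: WGS84
    map_origin:                         # parameters of the projection method
      latitude: 35.6762    # [deg]
      longitude: 139.6503  # [deg]
      altitude: 0.0        # [m]
    ```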

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/","title":"Vector Map creation requirement specifications","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/#vector-map-creation-requirement-specifications","title":"Vector Map creation requirement specifications","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/#overview","title":"Overview","text":"

    Autoware relies on high-definition point cloud maps and vector maps of the driving environment to perform various tasks such as localization, route planning, traffic light detection, and predicting the trajectories of pedestrians and other vehicles.

    A vector map contains highly accurate information about a road network, lane geometry, and traffic lights. It is required for route planning, traffic light detection, and predicting the trajectories of other vehicles and pedestrians.

    Vector Map uses lanelet2_extension, which is based on the lanelet2 format and extended for Autoware.

    The primitives (basic components) used in Vector Map are explained in Web.Auto Docs - What is Lanelet2. The following Vector Map creation requirement specifications are written on the premise of this knowledge.

    This specification is a set of requirements for the creation of Vector Map(s) to ensure that Autoware drives safely and autonomously as intended by the user. To create a Lanelet2 format .osm file, please refer to Creating a vector map.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/#handling-of-the-requirement-specification","title":"Handling of the Requirement Specification","text":"

    Which requirements apply entirely depends on the configuration of the Autoware system on a vehicle. Before creating a Vector Map, it is necessary to clearly determine in advance how you want the vehicle with the implemented system to behave in various environments.

    Next, you must comply with the laws of the country where the autonomous driving vehicle will be operating. It is your responsibility to choose which of the following requirements to apply according to the laws.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/#caution","title":"Caution","text":"
    • The examples of road signs and road surface markings are those used in Japan. Please replace them with those used in your respective country.
    • The values for range and distance indicated are minimum values. Please determine values that comply with the laws of your country. Furthermore, these minimum values may change depending on the maximum velocity of the autonomous driving vehicle.
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/#list-of-requirement-specifications","title":"List of Requirement Specifications","text":"Category ID Requirements Category Lane vm-01-01 Lanelet basics vm-01-02 Allowance for lane changes vm-01-03 Linestring sharing vm-01-04 Sharing of the centerline of lanes for opposing traffic vm-01-05 Lane geometry vm-01-06 Line position (1) vm-01-07 Line position (2) vm-01-08 Line position (3) vm-01-09 Speed limits vm-01-10 Centerline vm-01-11 Centerline connection (1) vm-01-12 Centerline connection (2) vm-01-13 Roads with no centerline (1) vm-01-14 Roads with no centerline (2) vm-01-15 Road shoulder vm-01-16 Road shoulder Linestring sharing vm-01-17 Side strip vm-01-18 Side strip Linestring sharing vm-01-19 Walkway Category Stop Line vm-02-01 Stop line alignment vm-02-02 Stop sign Category Intersection vm-03-01 Intersection criteria vm-03-02 Lanelet's turn direction and virtual vm-03-03 Lanelet width in the intersection vm-03-04 Lanelet creation in the intersection vm-03-05 Lanelet division in the intersection vm-03-06 Guide lines in the intersection vm-03-07 Multiple lanelets in the intersection vm-03-08 Intersection Area range vm-03-09 Range of Lanelet in the intersection vm-03-10 Right of way (with signal) vm-03-11 Right of way (without signal) vm-03-12 Right of way supplements vm-03-13 Merging from private area, sidewalk vm-03-14 Road marking vm-03-15 Exclusive bicycle lane Category Traffic Light vm-04-01 Traffic light basics vm-04-02 Traffic light position and size vm-04-03 Traffic light lamps Category Crosswalk vm-05-01 Crosswalks across the road vm-05-02 Crosswalks with pedestrian signals vm-05-03 Deceleration for safety at crosswalks vm-05-04 Fences Category Area vm-06-01 Buffer Zone vm-06-02 No parking signs vm-06-03 No stopping signs vm-06-04 No stopping sections vm-06-05 Detection area Category Others vm-07-01 Vector Map creation range vm-07-02 Range of detecting pedestrians who enter the road vm-07-03 Guardrails, guard pipes, fences vm-07-04 Ellipsoidal height"},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/","title":"Category area","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#categoryarea","title":"Category:Area","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#vm-06-01-buffer-zone","title":"vm-06-01 Buffer Zone","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#detail-of-requirements","title":"Detail of requirements","text":"

    Create a Polygon (type:hatched_road_markings) when a Buffer Zone (also known as a zebra zone) is painted on the road surface.

    • If the Buffer Zone is next to a Lanelet, share Points between them.
    • Overlap the Buffer Zone's Polygon with the intersection's Polygon (intersection_area) if the Buffer Zone is located at an intersection.
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#behavior-of-autoware","title":"Behavior of Autoware\uff1a","text":"

    In order to avoid obstacles, Autoware regards the Buffer Zone as a drivable area and proceeds through it.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#caution","title":"Caution","text":"
    • Vehicles are not allowed to pass through safety areas. It's important to differentiate between Buffer Zones and safety areas.
    • Do not create a Polygon for the Buffer Zone in areas where static objects like poles are present and vehicles cannot pass, even if a Buffer Zone is painted on the surface. Buffer Zones should be established only in areas where vehicle passage is feasible.
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#preferred-vector-map","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#incorrect-vector-map","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#related-autoware-module","title":"Related Autoware module","text":"
    • Static Avoidance - Autoware Universe Documentation
    • Dynamic Avoidance - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#vm-06-02-no-parking-signs","title":"vm-06-02 No parking signs","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#detail-of-requirements_1","title":"Detail of requirements","text":"

    When creating a Vector Map, you can prohibit parking in specific areas, while temporary stops are permitted.

    Create a reference from a Lanelet (subtype:road) to a Regulatory Element (subtype:no_parking_area), and have this Regulatory Element refer to a Polygon (type:no_parking_area).

    Refer to Web.Auto Documentation - Creation of No Parking Area for the method of creation in Vector Map Builder.
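
    For illustration, the reference structure described above might look roughly like the following Lanelet2 (.osm) fragment. The element IDs are placeholders, and the member roles ("regulatory_element", "refers") and the area=yes tag are assumptions based on common Lanelet2 conventions; verify them against the lanelet2_extension documentation or a map exported from Vector Map Builder.

    ```xml
    <!-- Hypothetical sketch: a road lanelet referencing a no_parking_area regulatory element.
         IDs and member roles are illustrative, not taken from a real map. -->
    <relation id="100">                                  <!-- Lanelet (subtype:road) -->
      <member type="way" role="left"  ref="10"/>
      <member type="way" role="right" ref="11"/>
      <member type="relation" role="regulatory_element" ref="200"/>
      <tag k="type" v="lanelet"/>
      <tag k="subtype" v="road"/>
    </relation>

    <relation id="200">                                  <!-- Regulatory Element -->
      <member type="way" role="refers" ref="300"/>       <!-- area where parking is prohibited -->
      <tag k="type" v="regulatory_element"/>
      <tag k="subtype" v="no_parking_area"/>
    </relation>

    <way id="300">                                       <!-- Polygon (type:no_parking_area) -->
      <nd ref="1"/> <nd ref="2"/> <nd ref="3"/> <nd ref="4"/>
      <tag k="type" v="no_parking_area"/>
      <tag k="area" v="yes"/>
    </way>
    ```

    The same pattern applies to vm-06-03 and vm-06-04; only the regulatory element subtype and polygon type change (for example, no_stopping_area in vm-06-04).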

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#behavior-of-autoware_1","title":"Behavior of Autoware\uff1a","text":"

    Since no_parking_area does not allow for setting a goal, Autoware cannot park the vehicle there.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#preferred-vector-map_1","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#incorrect-vector-map_1","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#related-autoware-module_1","title":"Related Autoware module","text":"
    • Goal Planner design - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#vm-06-03-no-stopping-signs","title":"vm-06-03 No stopping signs","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#detail-of-requirements_2","title":"Detail of requirements","text":"

    When creating a Vector Map, you can prohibit stopping in specific areas, while temporary stops are permitted.

    Create a reference from a Lanelet (subtype:road) to a Regulatory Element (subtype:no_parking_area), and have this Regulatory Element refer to a Polygon (type:no_parking_area).

    Refer to Web.Auto Documentation - Creation of No Parking Area for the method of creation in Vector Map Builder.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#behavior-of-autoware_2","title":"Behavior of Autoware\uff1a","text":"

    Since no_parking_area does not allow for setting a goal, Autoware cannot park the vehicle there.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#preferred-vector-map_2","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#incorrect-vector-map_2","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#related-autoware-module_2","title":"Related Autoware module","text":"
    • Goal Planner design - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#vm-06-04-no-stopping-sections","title":"vm-06-04 No stopping sections","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#detail-of-requirements_3","title":"Detail of requirements","text":"

    While vehicles may stop on the road for signals or traffic congestion, you can prohibit any form of stopping (temporary stopping, parking, idling) in specific areas when creating a Vector Map.

    Create a reference from a Lanelet (subtype:road) to a Regulatory Element (subtype:no_stopping_area), and have this Regulatory Element refer to a Polygon (type:no_stopping_area).

    Refer to Web.Auto Documentation - Creation of No Stopping Area for the method of creation in Vector Map Builder.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#behavior-of-autoware_3","title":"Behavior of Autoware\uff1a","text":"

    The vehicle does not make temporary stops in no_stopping_area. Since goals cannot be set in no_stopping_area, the vehicle cannot park there.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#preferred-vector-map_3","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#incorrect-vector-map_3","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#related-autoware-module_3","title":"Related Autoware module","text":"
    • No Stopping Area design - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#vm-06-05-detection-area","title":"vm-06-05 Detection area","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#detail-of-requirements_4","title":"Detail of requirements","text":"

    Autoware identifies obstacles by detecting point clouds within the Detection Area; it stops at the stop line and holds that stop until the obstacles move away. To enable this behavior, incorporate the Detection Area element into the Vector Map.

    Create a reference from a Lanelet (subtype:road) to a Regulatory Element (subtype:detection_area), and have this Regulatory Element refer to a Polygon (type:detection_area) and a Linestring (type:stop_line).

    Refer to Web.Auto Documentation - Creation of Detection Area for the method of creation in Vector Map Builder.
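
    As a sketch of the structure above, a detection_area regulatory element references both the polygon and the stop line. The IDs and member roles ("refers", "ref_line") are illustrative assumptions.

    ```xml
    <!-- Hypothetical sketch: detection_area regulatory element. -->
    <relation id="210">
      <member type="way" role="refers"   ref="310"/>   <!-- Polygon (type:detection_area) -->
      <member type="way" role="ref_line" ref="311"/>   <!-- Linestring (type:stop_line) -->
      <tag k="type" v="regulatory_element"/>
      <tag k="subtype" v="detection_area"/>
    </relation>
    <!-- The road lanelet (subtype:road) then references relation 210 with role
         "regulatory_element", as in the no_parking_area example above. -->
    ```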

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#preferred-vector-map_4","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#incorrect-vector-map_4","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#related-autoware-module_4","title":"Related Autoware module","text":"
    • Detection Area - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/","title":"Category crosswalk","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#categorycrosswalk","title":"Category:Crosswalk","text":"

    There are two types of requirements for crosswalks, and they can both be applicable to a single crosswalk.

    • vm-05-01 : Crosswalks across the road
    • vm-05-02 : Crosswalks with pedestrian signals

    In the case of crosswalks at intersections, they must also meet the requirements of the intersection.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#vm-05-01-crosswalks-across-the-road","title":"vm-05-01 Crosswalks across the road","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#detail-of-requirements","title":"Detail of requirements","text":"

    Necessary requirements for creation:

    1. Create a Lanelet for the crosswalk (subtype:crosswalk).
    2. If there is a stop line before the crosswalk, create a Linestring (type:stop_line). Create stop lines for the opposing traffic lane in the same way.
    3. Create a Polygon (type:crosswalk_polygon) to cover the crosswalk.
    4. The Lanelet of the road refers to the regulatory element (subtype:crosswalk), and the regulatory element refers to the created Lanelet, Linestring, and Polygon.
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#supplemental-information","title":"Supplemental information","text":"
    • Link the regulatory element to the lanelet(s) of the road that intersects with the crosswalk.
    • The stop lines linked to the regulatory element do not necessarily have to exist on the road Lanelets linked with the regulatory element.
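
    Putting the above together, the crosswalk structure might look roughly like the following sketch. The IDs and member roles ("refers", "ref_line", "crosswalk_polygon") are illustrative assumptions; check lanelet2_extension or a map exported from Vector Map Builder for the exact roles.

    ```xml
    <!-- Hypothetical sketch: crosswalk regulatory element and the objects it refers to. -->
    <relation id="220">
      <member type="relation" role="refers"            ref="120"/>  <!-- Lanelet (subtype:crosswalk) -->
      <member type="way"      role="ref_line"          ref="320"/>  <!-- Linestring (type:stop_line) -->
      <member type="way"      role="crosswalk_polygon" ref="321"/>  <!-- Polygon (type:crosswalk_polygon) -->
      <tag k="type" v="regulatory_element"/>
      <tag k="subtype" v="crosswalk"/>
    </relation>
    <!-- Each road lanelet that intersects the crosswalk references relation 220
         with role "regulatory_element". -->
    ```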
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#behavior-of-autoware","title":"Behavior of Autoware\uff1a","text":"

    When pedestrians or cyclists are on the crosswalk, Autoware will come to a stop before the stop line and wait for them to pass. Once they have cleared the area, Autoware will begin to move forward.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#preferred-vector-map","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#incorrect-vector-map","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#related-autoware-module","title":"Related Autoware module","text":"
    • Crosswalk - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#vm-05-02-crosswalks-with-pedestrian-signals","title":"vm-05-02 Crosswalks with pedestrian signals","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#detail-of-requirements_1","title":"Detail of requirements","text":"

    Necessary requirements for creation:

    • Create a Lanelet (subtype:crosswalk, participant:pedestrian).
    • Create a Traffic Light Linestring. If multiple traffic lights exist, create multiple Linestrings.
      • Linestring
        • type:traffic_light
        • subtype:red_green
        • height:value
    • Ensure the crosswalk's Lanelet references a Regulatory Element (subtype:traffic_light). Also, ensure the Regulatory Element references Linestring (type:traffic_light).

    Refer to vm-04-02 for more about traffic light objects.
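
    For illustration, the traffic light Linestring and the regulatory element referenced by the crosswalk Lanelet might be written as follows. The IDs, the height value, and the member role "refers" are illustrative assumptions.

    ```xml
    <!-- Hypothetical sketch: pedestrian-signal traffic light for a crosswalk. -->
    <way id="330">                       <!-- Linestring (type:traffic_light) -->
      <nd ref="5"/> <nd ref="6"/>
      <tag k="type" v="traffic_light"/>
      <tag k="subtype" v="red_green"/>
      <tag k="height" v="0.5"/>          <!-- [m] illustrative value -->
    </way>

    <relation id="230">                  <!-- Regulatory Element (subtype:traffic_light) -->
      <member type="way" role="refers" ref="330"/>
      <tag k="type" v="regulatory_element"/>
      <tag k="subtype" v="traffic_light"/>
    </relation>
    <!-- The crosswalk lanelet references relation 230 with role "regulatory_element". -->
    ```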

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#preferred-vector-map_1","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#incorrect-vector-map_1","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#related-autoware-module_1","title":"Related Autoware module","text":"
    • Crosswalk - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#vm-05-03-deceleration-for-safety-at-crosswalks","title":"vm-05-03 Deceleration for safety at crosswalks","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#detail-of-requirements_2","title":"Detail of requirements","text":"

    To ensure a constant deceleration to a safe speed when traversing a crosswalk, add the following tags to the crosswalk's Lanelet (subtype:crosswalk):

    • safety_slow_down_speed [m/s]: The maximum velocity while crossing.
    • safety_slow_down_distance [m]: The starting point of the area where the maximum speed applies, measured from the vehicle's front bumper to the crosswalk.
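
    A minimal sketch of these tags on a crosswalk Lanelet is shown below; the IDs and numeric values are arbitrary examples.

    ```xml
    <!-- Hypothetical sketch: deceleration tags on a crosswalk lanelet. -->
    <relation id="130">
      <member type="way" role="left"  ref="40"/>
      <member type="way" role="right" ref="41"/>
      <tag k="type" v="lanelet"/>
      <tag k="subtype" v="crosswalk"/>
      <tag k="safety_slow_down_speed"    v="2.0"/>  <!-- [m/s] maximum velocity while crossing -->
      <tag k="safety_slow_down_distance" v="5.0"/>  <!-- [m] from the front bumper to the crosswalk -->
    </relation>
    ```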
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#preferred-vector-map_2","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#incorrect-vector-map_2","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#related-autoware-module_2","title":"Related Autoware module","text":"
    • Crosswalk - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#vm-05-04-fences","title":"vm-05-04 Fences","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#detail-of-requirements_3","title":"Detail of requirements","text":"

    Autoware detects pedestrians and bicycles crossing the crosswalk, as well as those that might cross. However, areas near the crosswalk where many people are moving, such as fenced kindergartens, playgrounds, or parks, can affect behavior at the crosswalk due to the predicted paths of people and bicycles from these areas.

    Surround areas not connected to the crosswalk with Linestring (type:fence), which does not need to be linked to anything.

    However, if there is a guardrail, wall, or fence between the road and sidewalk, with another fence behind it, the second fence may be omitted. Nevertheless, areas around crosswalks are not subject to this omission and must be created without exclusion.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#preferred-vector-map_3","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#incorrect-vector-map_3","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#related-autoware-module_3","title":"Related Autoware module","text":"
    • map_based_prediction - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/","title":"Category intersection","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#categoryintersection","title":"Category:Intersection","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#vm-03-01-intersection-criteria","title":"vm-03-01 Intersection criteria","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#detail-of-requirements","title":"Detail of requirements","text":"

    Essential criteria for the construction of an intersection:

    • Encircle the drivable area at the intersection with a Polygon (type:intersection_area).
    • Add turn_direction to all Lanelets in the intersection.
    • Ensure that all lanelets in the intersection are tagged:
      • key:intersection_area
      • value: Polygon's ID
    • Attach right_of_way to the necessary Lanelets.
    • Also, it is necessary to appropriately create traffic lights, crosswalks, and stop lines.

    For detailed information, refer to the respective requirements on this page.
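
    As a rough sketch of the tagging described above, an intersection_area Polygon and a lanelet inside the intersection might look as follows. The IDs are placeholders, and the area=yes tag is an assumption based on the common Lanelet2 convention for polygons.

    ```xml
    <!-- Hypothetical sketch: intersection_area polygon and a lanelet tagged with its ID. -->
    <way id="400">                          <!-- Polygon (type:intersection_area) -->
      <nd ref="20"/> <nd ref="21"/> <nd ref="22"/> <nd ref="23"/>
      <tag k="type" v="intersection_area"/>
      <tag k="area" v="yes"/>
    </way>

    <relation id="170">                     <!-- lanelet inside the intersection -->
      <member type="way" role="left"  ref="70"/>
      <member type="way" role="right" ref="71"/>
      <tag k="type" v="lanelet"/>
      <tag k="subtype" v="road"/>
      <tag k="turn_direction" v="left"/>
      <tag k="intersection_area" v="400"/>  <!-- value is the Polygon's ID -->
    </relation>
    ```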

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#autoware-modules","title":"Autoware modules","text":"
    • The requirements for turn_direction and right_of_way are related to the intersection module, which plans velocity to avoid collisions with other vehicles, taking traffic light instructions into account.
    • The requirements for intersection_area are related to the avoidance module, which plans routes that evade by veering out of lanes in the intersections.
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#preferred-vector-map","title":"Preferred vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#incorrect-vector-map","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#related-autoware-module","title":"Related Autoware module","text":"
    • Intersection - Autoware Universe Documentation
    • Blind Spot design - Autoware Universe Documentation
    • Static Avoidance - Autoware Universe Documentation
    • Dynamic Avoidance - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#vm-03-02-lanelets-turn-direction-and-virtual-linestring","title":"vm-03-02 Lanelet's turn direction and virtual linestring","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#detail-of-requirements_1","title":"Detail of requirements","text":"

    Add the following tag to the Lanelets in the intersection:

    • turn_direction : straight
    • turn_direction : left
    • turn_direction : right

    Also, if the left or right Linestrings of Lanelets at the intersection lack road paintings, designate these as type:virtual.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#behavior-of-autoware","title":"Behavior of Autoware\uff1a","text":"

    Autoware will start flashing the turn signals (blinkers) 30 meters (by default) before a turn_direction-tagged Lanelet. To change the blinking timing, add the following tags (see the sketch after this list):

    • key: turn_signal_distance
    • value: numerical value (m)
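
    A minimal sketch of these tags on an intersection Lanelet is shown below; the IDs and the 15.0 m value are arbitrary examples.

    ```xml
    <!-- Hypothetical sketch: turn_direction and optional turn_signal_distance tags. -->
    <relation id="140">
      <member type="way" role="left"  ref="50"/>
      <member type="way" role="right" ref="51"/>
      <tag k="type" v="lanelet"/>
      <tag k="subtype" v="road"/>
      <tag k="turn_direction" v="right"/>
      <tag k="turn_signal_distance" v="15.0"/>  <!-- [m] start blinking 15 m before, instead of the 30 m default -->
    </relation>
    ```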

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#preferred-vector-map_1","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#incorrect-vector-map_1","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#related-autoware-module_1","title":"Related Autoware module","text":"
    • Intersection - Autoware Universe Documentation
    • Blind Spot design - Autoware Universe Documentation
    • virtual_traffic_light in behavior_velocity_planner - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#vm-03-03-lanelet-width-in-the-intersection","title":"vm-03-03 Lanelet width in the intersection","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#detail-of-requirements_2","title":"Detail of requirements\uff1a","text":"

    Lanelets in the intersection should have a consistent width. Additionally, draw Linestrings with smooth curves.

    The shape of this curve must be determined by the Vector Map creator.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#preferred-vector-map_2","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#incorrect-vector-map_2","title":"Incorrect vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#vm-03-04-lanelet-creation-in-the-intersection","title":"vm-03-04 Lanelet creation in the intersection","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#detail-of-requirements_3","title":"Detail of requirements","text":"

    Create all Lanelets in the intersection, including lanelets not driven by the vehicle. Additionally, link stop lines and traffic lights to the Lanelets appropriately.

    Refer also to vm-07-01 for the Vector Map creation range.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#behavior-of-autoware_1","title":"Behavior of Autoware","text":"

    Autoware uses lanelets to predict the movements of other vehicles and plan the vehicle's velocity accordingly. Therefore, it is necessary to create all lanelets in the intersection.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#preferred-vector-map_3","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#incorrect-vector-map_3","title":"Incorrect vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#related-autoware-module_2","title":"Related Autoware module","text":"
    • Intersection - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#vm-03-05-lanelet-division-in-the-intersection","title":"vm-03-05 Lanelet division in the intersection","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#detail-of-requirements_4","title":"Detail of requirements","text":"

    Create the Lanelets in the intersection as a single object without dividing them.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#preferred-vector-map_4","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#incorrect-vector-map_4","title":"Incorrect vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#related-autoware-module_3","title":"Related Autoware module","text":"
    • Intersection - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#vm-03-06-guide-lines-in-the-intersection","title":"vm-03-06 Guide lines in the intersection","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#detail-of-requirements_5","title":"Detail of requirements","text":"

    If there are guide lines in the intersection, draw the Lanelet following them.

    In cases where Lanelets branch off, begin the branching at the end of the guide line. However, it is not necessary to share Points or Linestrings between Lanelets.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#preferred-vector-map_5","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#incorrect-vector-map_5","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#related-autoware-module_4","title":"Related Autoware module","text":"
    • Intersection - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#vm-03-07-multiple-lanelets-in-the-intersection","title":"vm-03-07 Multiple lanelets in the intersection","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#detail-of-requirements_6","title":"Detail of requirements","text":"

    When connecting multiple lanes with Lanelets at an intersection, those Lanelets should be made adjacent to each other without crossing.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#preferred-vector-map_6","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#incorrect-vector-map_6","title":"Incorrect vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#related-autoware-module_5","title":"Related Autoware module","text":"
    • Intersection - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#vm-03-08-intersection-area-range","title":"vm-03-08 Intersection area range","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#detail-of-requirements_7","title":"Detail of requirements","text":"

    Encircle the intersection's drivable area with a Polygon (type:intersection_area). The boundary of this intersection's Polygon should be defined by the objects below.

    • Linestrings (type:road_border)
    • Straight lines at the connection points of lanelets in the intersection.
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#preferred-vector-map_7","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#incorrect-vector-map_7","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#related-autoware-module_6","title":"Related Autoware module","text":"
    • Intersection - Autoware Universe Documentation
    • Blind Spot design - Autoware Universe Documentation
    • Static Avoidance - Autoware Universe Documentation
    • Dynamic Avoidance - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#vm-03-09-range-of-lanelet-in-the-intersection","title":"vm-03-09 Range of Lanelet in the intersection","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#detail-of-requirements_8","title":"Detail of requirements","text":"

    Determine the start and end positions of lanelets in the intersection (henceforth the boundaries of lanelet connections) based on the stop line's position.

    • For cases with a painted stop line:
      • The stop line's linestring (type:stop_line) position must align with the lanelet's start.
      • Extend the lanelet's end to where the opposing lane's stop line would be.
    • Without a painted stop line:
      • Use a drawn linestring (type:stop_line) to establish positions as if there were a painted stop line.
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#preferred-vector-map_8","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#incorrect-vector-map_8","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#related-autoware-module_7","title":"Related Autoware module","text":"
    • Intersection - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#vm-03-10-right-of-way-with-signal","title":"vm-03-10 Right of way (with signal)","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#detail-of-requirements_9","title":"Detail of requirements","text":"

    Set the regulatory element 'right_of_way' for Lanelets that meet all of the following criteria:

    • Lanelets in the intersection with a turn_direction of right or left.
    • Lanelets that intersect with the vehicle's lanelet.
    • There are traffic lights at the intersection.

    Among the lanelets in the intersection that intersect the vehicle's lanelet, set as yield those that do not share the same signal change timing with the vehicle. Also, if the vehicle is turning left, set the opposing vehicle's right-turn lane to yield. There is no need to set yield for lanelets where the vehicle goes straight (turn_direction:straight).
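
    For illustration, a right_of_way regulatory element for the vehicle's turning lanelet might be written as follows. The IDs are placeholders; the member roles ("right_of_way", "yield") follow the standard Lanelet2 convention but should be verified against lanelet2_extension.

    ```xml
    <!-- Hypothetical sketch: right_of_way regulatory element for an ego turning lanelet. -->
    <relation id="240">
      <member type="relation" role="right_of_way" ref="150"/>  <!-- the vehicle's turning lanelet -->
      <member type="relation" role="yield"        ref="151"/>  <!-- crossing lanelet with a different signal phase -->
      <member type="relation" role="yield"        ref="152"/>  <!-- opposing right-turn lanelet (when turning left) -->
      <tag k="type" v="regulatory_element"/>
      <tag k="subtype" v="right_of_way"/>
    </relation>
    <!-- Lanelet 150 references relation 240 with role "regulatory_element". -->
    ```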

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#preferred-vector-map_9","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#the-vehicle-turns-left","title":"The vehicle turns left","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#the-vehicle-turns-right","title":"The vehicle turns right","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#incorrect-vector-map_9","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#related-autoware-module_8","title":"Related Autoware module","text":"
    • Intersection - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#vm-03-11-right-of-way-without-signal","title":"vm-03-11 Right of way (without signal)","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#detail-of-requirements_10","title":"Detail of requirements","text":"

    Set the regulatory element 'right_of_way' for Lanelets that meet all of the following criteria:

    • Lanelets in the intersection with a turn_direction of right or left.
    • Lanelets that intersect with the vehicle's lanelet.
    • There are no traffic lights at the intersection.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#preferred-vector-map_10","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#1-the-vehicle-on-the-priority-lane","title":"\u2460 The vehicle on the priority lane","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#2-the-vehicle-on-the-non-priority-lane","title":"\u2461 The vehicle on the non-priority lane","text":"

    A regulatory element is not necessary. However, when the vehicle goes straight, it has relative priority over other vehicles turning right from the opposing non-priority road. Therefore, settings for right_of_way and yield are required in this case.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#incorrect-vector-map_10","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#related-autoware-module_9","title":"Related Autoware module","text":"
    • Intersection - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#vm-03-12-right-of-way-supplements","title":"vm-03-12 Right of way supplements","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#detail-of-requirements_11","title":"Detail of requirements","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#why-its-necessary-to-configure-right_of_way","title":"Why it's necessary to configure 'right_of_way'","text":"

    Without the 'right_of_way' setting, Autoware interprets other lanes intersecting its path as having priority. Therefore, as long as there are other vehicles in the crossing lane, Autoware cannot enter the intersection regardless of signal indications.

    An example of the problem: even when the vehicle's signal allows it to proceed, it waits before the intersection if other vehicles are waiting at a red light in the opposing lane that intersects the vehicle's right-turn lane.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#preferred-vector-map_11","title":"Preferred vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#incorrect-vector-map_11","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#related-autoware-module_10","title":"Related Autoware module","text":"
    • Intersection - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#vm-03-13-merging-from-private-area-sidewalk","title":"vm-03-13 Merging from private area, sidewalk","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#detail-of-requirements_12","title":"Detail of requirements","text":"

    Set location=private for Lanelets within private property.

    When a road that enters or exits private property intersects with a sidewalk, create a Lanelet for that sidewalk (subtype:walkway).

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#behavior-of-autoware_2","title":"Behavior of Autoware\uff1a","text":"
    • The vehicle stops temporarily before entering the sidewalk.
    • The vehicle comes to a stop before merging onto the public road.
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#preferred-vector-map_12","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#incorrect-vector-map_12","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#related-autoware-module_11","title":"Related Autoware module","text":"
    • Intersection - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#vm-03-14-road-marking","title":"vm-03-14 Road marking","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#detail-of-requirements_13","title":"Detail of requirements","text":"

    If there is a stop line ahead of the guide lines in the intersection, ensure the following:

    • Create a Lanelet for the guide lines.
    • The Lanelet for the guide lines references a Regulatory Element (subtype:road_marking).
    • The Regulatory Element refers to the stop_line's Linestring.

    Refer to Web.Auto Documentation - Creation of Regulatory Element for the method of creation in Vector Map Builder.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#preferred-vector-map_13","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#incorrect-vector-map_13","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#related-autoware-module_12","title":"Related Autoware module","text":"
    • Intersection - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#vm-03-15-exclusive-bicycle-lane","title":"vm-03-15 Exclusive bicycle lane","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#detail-of-requirements_14","title":"Detail of requirements","text":"

    If an exclusive bicycle lane exists, create a Lanelet (subtype:road). The section adjoining the road should share a Linestring. For bicycle lanes at intersections, register the bicycle lane as a yield lane under the right_of_way regulatory element of lanes that intersect with the vehicle's left-turn lane. (Refer to vm-03-10 and vm-03-11 for right_of_way.)

    In addition, set lane_change = no as OptionalTags.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#behavior-of-autoware_3","title":"Behavior of Autoware\uff1a","text":"

    The blind spot (entanglement check) feature checks the Lanelet (subtype:road) and decides whether the vehicle can proceed.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#preferred-vector-map_14","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#incorrect-vector-map_14","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#related-autoware-module_13","title":"Related Autoware module","text":"
    • Blind Spot design - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/","title":"Category lane","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#categorylane","title":"Category:Lane","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#vm-01-01-lanelet-basics","title":"vm-01-01 Lanelet basics","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#detail-of-requirements","title":"Detail of requirements","text":"

    The road's Lanelets must comply with the following requirements.

    • subtype:road
    • location:urban, for public roads
    • Align the Lanelet's direction with the direction of vehicle movement. (You can visualize lanelet directions as arrows in Vector Map Builder.)
    • Set whether lane changes are allowed or not, according to vm-01-02.
    • Set the Linestring IDs for Lanelet's left_bound and right_bound respectively. See vm-01-03.
    • Tag: one_way=yes. Autoware currently does not support one_way=no.
    • Connect the Lanelet to another Lanelet, except if it's at the start or end.
    • Position the points (x, y, z) within the Lanelet to align with the PCD Map, ensuring accuracy not only laterally but also in elevation. The height of a Point should be based on the ellipsoidal height (WGS84). Refer to vm-07-04.
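
    A minimal sketch of a Lanelet relation that satisfies these requirements is shown below; the IDs are placeholders.

    ```xml
    <!-- Hypothetical sketch: a basic road lanelet. -->
    <relation id="160">
      <member type="way" role="left"  ref="60"/>   <!-- left_bound Linestring -->
      <member type="way" role="right" ref="61"/>   <!-- right_bound Linestring -->
      <tag k="type" v="lanelet"/>
      <tag k="subtype" v="road"/>
      <tag k="location" v="urban"/>
      <tag k="one_way" v="yes"/>
    </relation>
    ```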
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#preferred-vector-map","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#vm-01-02-allowance-for-lane-changes","title":"vm-01-02 Allowance for lane changes","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#detail-of-requirements_1","title":"Detail of requirements","text":"

    Add a tag to the Lanelet's Linestring indicating lane change permission or prohibition.

    • Permit lane_change=yes
    • Prohibit lane_change=no

    Set the Linestring subtype according to the type of line.

    • solid
    • dashed
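
    For illustration, a lane-boundary Linestring that permits lane changes might be tagged as follows. The ID and the line type are placeholders; use the type and subtype that match the actual paint.

    ```xml
    <!-- Hypothetical sketch: a dashed lane boundary that allows lane changes. -->
    <way id="62">
      <nd ref="7"/> <nd ref="8"/>
      <tag k="type" v="line_thin"/>     <!-- illustrative; set according to the actual line -->
      <tag k="subtype" v="dashed"/>
      <tag k="lane_change" v="yes"/>
    </way>
    ```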
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#referenced-from-japans-road-traffic-law","title":"Referenced from Japan's Road Traffic Law","text":"
    • White dashed lines: indicate that lane changes and overtaking are permitted.
    • White solid lines: indicate that changing lanes and overtaking are allowed.
    • Yellow solid lines: mean that no lane changes are allowed.
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#related-autoware-module","title":"Related Autoware module","text":"
    • Lane Change design - Autoware Universe Documentation
    • Static Avoidance - Autoware Universe Documentation
    • Dynamic Avoidance - Autoware Universe Documentation
    • Out of lane design - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#vm-01-03-linestring-sharing","title":"vm-01-03 Linestring sharing","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#detail-of-requirements_2","title":"Detail of requirements","text":"

    Share the Linestring when creating Lanelets that are physically adjacent to others.
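
    In Lanelet2 OSM terms, sharing simply means that both Lanelet relations reference the same way ID for their common boundary. A minimal sketch with hypothetical IDs:

        <relation id="1000">  <!-- left lane -->
          <member type="way" role="left" ref="2001"/>
          <member type="way" role="right" ref="2002"/>
          <tag k="type" v="lanelet"/>
          <tag k="subtype" v="road"/>
        </relation>
        <relation id="1001">  <!-- adjacent lane -->
          <member type="way" role="left" ref="2002"/>   <!-- same way as above: the shared Linestring -->
          <member type="way" role="right" ref="2003"/>
          <tag k="type" v="lanelet"/>
          <tag k="subtype" v="road"/>
        </relation>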

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#behavior-of-autoware","title":"Behavior of Autoware","text":"

    If the Lanelet adjacent to the one the vehicle is driving on shares a Linestring, the following behaviors become possible:

    • The vehicle moves out of their lanes to avoid obstacles.
    • The vehicle turns a curve while slightly extending out of the lane.
    • Lane changes

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#preferred-vector-map_1","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#incorrect-vector-map","title":"Incorrect vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#related-autoware-module_1","title":"Related Autoware module","text":"
    • Lane Change design - Autoware Universe Documentation
    • Static Avoidance - Autoware Universe Documentation
    • Dynamic Avoidance - Autoware Universe Documentation
    • Out of lane design - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#vm-01-04-sharing-of-the-centerline-of-lanes-for-opposing-traffic","title":"vm-01-04 Sharing of the centerline of lanes for opposing traffic","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#detail-of-requirements_3","title":"Detail of requirements","text":"

    When the vehicle's lanelet and the opposing lanelet physically touch, the road center line's Linestring ID must be shared between these two Lanelets. For that purpose, the lengths of those two Lanelets must match.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#behavior-of-autoware_1","title":"Behavior of Autoware\uff1a","text":"

    Obstacle avoidance across the opposing lane is possible.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#preferred-vector-map_2","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#incorrect-vector-map_1","title":"Incorrect vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#vm-01-05-lane-geometry","title":"vm-01-05 Lane geometry","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#detail-of-requirements_4","title":"Detail of requirements","text":"

    The geometry of the road lanelet needs to comply with the following:

    • The left and right Linestrings must follow the road's boundary lines.
    • The lines of a Lanelet, which join with lanelets ahead and behind it, must form straight lines.
    • Ensure the outline is smooth and not jagged or bumpy, except for L-shaped cranks.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#preferred-vector-map_3","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#incorrect-vector-map_2","title":"Incorrect vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#vm-01-06-line-position-1","title":"vm-01-06 Line position (1)","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#detail-of-requirements_5","title":"Detail of requirements","text":"

    Ensure the road's center line Linestring is located in the exact middle of the road markings.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#preferred-vector-map_4","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#incorrect-vector-map_3","title":"Incorrect vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#vm-01-07-line-position-2","title":"vm-01-07 Line position (2)","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#detail-of-requirements_6","title":"Detail of requirements","text":"

    Place the Linestring at the center of the markings when lines exist outside the road.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#preferred-vector-map_5","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#incorrect-vector-map_4","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#vm-01-08-line-position-3","title":"vm-01-08 Line position (3)","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#detail-of-requirements_7","title":"Detail of requirements","text":"

    If there are no lines on the outer side within the road, position the Linestring 0.5 m from the road's edge.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#caution","title":"Caution","text":"

    The width depends on the laws of your country.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#preferred-vector-map_6","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#incorrect-vector-map_5","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#vm-01-09-speed-limits","title":"vm-01-09 Speed limits","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#detail-of-requirements_8","title":"Detail of requirements","text":"

    In the following cases, add a speed limit (tag:speed_limit) to the Lanelet (subtype:road) the vehicle is driving on, in km/h; a tag sketch follows the list below.

    • A speed limit road sign exists.
    • A speed limit is desirable for other reasons, for example on narrow roads.
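
    On the map side, the limit is expressed as a single tag on the road Lanelet relation. A minimal sketch with a hypothetical ID and a 30 km/h limit:

        <relation id="1000">
          <!-- left/right members and the other road Lanelet tags as in vm-01-01 -->
          <tag k="type" v="lanelet"/>
          <tag k="subtype" v="road"/>
          <tag k="speed_limit" v="30"/>  <!-- km/h -->
        </relation>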

    Note that the following is achieved through Autoware's settings and behavior.

    • Vehicle's maximum velocity
    • Speed adjustment at places requiring deceleration, like curves and downhill areas.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#preferred-vector-map_7","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#incorrect-vector-map_6","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#vm-01-10-centerline","title":"vm-01-10 Centerline","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#detail-of-requirements_9","title":"Detail of requirements","text":"

    Autoware is designed to move through the midpoint calculated from a Lanelet's left and right Linestrings.

    Create a centerline for the Lanelet when there is a need to shift the driving position to the left or right due to certain circumstances, ensuring the centerline has a smooth shape for drivability.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#caution_1","title":"Caution","text":"

    'Centerline' here refers to the Lanelet's centerline attribute, which is a distinct concept from the road's central dividing line.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#preferred-vector-map_8","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#incorrect-vector-map_7","title":"Incorrect vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#vm-01-11-centerline-connection-1","title":"vm-01-11 Centerline connection (1)","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#detail-of-requirements_10","title":"Detail of requirements","text":"

    When center lines have been added to several Lanelets, they should be connected.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#preferred-vector-map_9","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#incorrect-vector-map_8","title":"Incorrect vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#vm-01-12-centerline-connection-2","title":"vm-01-12 Centerline connection (2)","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#detail-of-requirements_11","title":"Detail of requirements","text":"

    If a Lanelet with an added centerline is connected to Lanelets without one, ensure the start and end points of the added centerline are positioned at the Lanelet's center. Ensure the centerline has a smooth shape for drivability.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#preferred-vector-map_10","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#incorrect-vector-map_9","title":"Incorrect vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#vm-01-13-roads-with-no-centerline-1","title":"vm-01-13 Roads with no centerline (1)","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#detail-of-requirements_12","title":"Detail of requirements","text":"

    When a road lacks a central line but is wide enough for one's vehicle and oncoming vehicles to pass each other, Lanelets should be positioned next to each other at the center of the road.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#preferred-vector-map_11","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#incorrect-vector-map_10","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#vm-01-14-roads-with-no-centerline-2","title":"vm-01-14 Roads with no centerline (2)","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#detail-of-requirements_13","title":"Detail of requirements","text":"

    Apply if all the next conditions are satisfied:

    • The road is a single lane without a central line and is too narrow for one's vehicle and an oncoming vehicle to pass each other.
    • It is an environment where no vehicles other than the autonomous vehicle enter this road.
    • The plan involves autonomous vehicles operating back and forth on this road.

    Requirement for Vector Map creation:

    • Stack two Lanelets (one for each direction of travel) on top of each other.
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#supplementary-information","title":"Supplementary information","text":"
    • The application of this case depends on local operational policies and vehicle specifications, and should be determined in discussion with the map requestor.
    • The current version of Autoware cannot pass oncoming vehicles in shared lanes.
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#preferred-vector-map_12","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#incorrect-vector-map_11","title":"Incorrect vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#vm-01-15-road-shoulder","title":"vm-01-15 Road Shoulder","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#detail-of-requirements_14","title":"Detail of requirements","text":"

    If there is a shoulder next to the road, place the lanelet for the road shoulder (subtype:road_shoulder). However, it is not necessary to create this within intersections.

    The road shoulder's Lanelet and sidewalk's Lanelet share the Linestring (subtype:road_border).

    There must not be a road shoulder Lanelet next to another road shoulder Lanelet.

    A road Lanelet must be next to the shoulder Lanelet.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#behavior-of-autoware_2","title":"Behavior of Autoware","text":"
    • Autoware can start from the shoulder and also reach the shoulder.
    • The margin for moving to the edge upon arrival is determined by the Autoware parameter margin_from_boundary. It does not need to be considered when creating the Vector Map.
    • Autoware does not park on the road shoulder lanelet if it overlaps with any of the following:
      • A Polygon marked as no_parking_area
      • A Polygon marked as no_stopping_area
      • Areas near or inside an intersection
      • Crosswalk

    tag:lane_change=yes is not required on the Linestring marking the boundary of the shoulder.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#preferred-vector-map_13","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#incorrect-vector-map_12","title":"Incorrect vector map","text":"

    Do not create a road shoulder Lanelet for roads without a shoulder.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#related-autoware-module_2","title":"Related Autoware module","text":"
    • Static Avoidance - Autoware Universe Documentation
    • Dynamic Avoidance - Autoware Universe Documentation
    • Goal Planner design - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#vm-01-16-road-shoulder-linestring-sharing","title":"vm-01-16 Road shoulder Linestring sharing","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#detail-of-requirements_15","title":"Detail of requirements","text":"

    The Lanelets for the road shoulder and the adjacent road should have a common Linestring.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#preferred-vector-map_14","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#incorrect-vector-map_13","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#related-autoware-module_3","title":"Related Autoware module","text":"
    • Static Avoidance - Autoware Universe Documentation
    • Dynamic Avoidance - Autoware Universe Documentation
    • Goal Planner design - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#vm-01-17-side-strip","title":"vm-01-17 Side strip","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#detail-of-requirements_16","title":"Detail of requirements","text":"

    Place a Lanelet (subtype:pedestrian_lane) on the side strip. However, it is not necessary to create this within intersections.

    The side strip's Lanelet must have the Linestring (subtype:road_border) outside.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#preferred-vector-map_15","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#incorrect-vector-map_14","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#vm-01-18-side-strip-linestring-sharing","title":"vm-01-18 Side strip Linestring sharing","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#detail-of-requirements_17","title":"Detail of requirements","text":"

    The Lanelet for the side strip and the adjacent road Lanelet should have a common Linestring.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#preferred-vector-map_16","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#incorrect-vector-map_15","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#vm-01-19-sidewalk","title":"vm-01-19 sidewalk","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#detail-of-requirements_18","title":"Detail of requirements","text":"

    Place a sidewalk Lanelet (subtype:walkway) where necessary. However, create it only where a crosswalk intersects the vehicle's lane; it is not needed where there is no such intersection.

    The Lanelet (subtype:walkway) should cover the area intersecting the vehicle's lane plus an additional 3 meters before and after.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#preferred-vector-map_17","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#incorrect-vector-map_16","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#related-autoware-module_4","title":"Related Autoware module","text":"
    • Intersection - Autoware Universe Documentation
    • Walkway design- Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_others/","title":"Category others","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_others/#categoryothers","title":"Category:Others","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_others/#vm-07-01-vector-map-creation-range","title":"vm-07-01 Vector Map creation range","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_others/#detail-of-requirements","title":"Detail of requirements","text":"

    Create all Lanelets within the sensor range of the vehicle, even those on roads not driven by the vehicle, including Lanelets that intersect with the vehicle's Lanelet.

    However, if the following conditions are met, the minimum range over which Lanelets must be created is 10 meters.

    • The vehicle drives on the priority lane through an intersection without traffic lights.
    • The vehicle drives straight or turns left through an intersection with traffic lights.

    Refer to vm-03-04 for more about intersection requirements.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_others/#behavior-of-autoware","title":"Behavior of Autoware\uff1a","text":"

    Autoware detects approaching vehicles and plans a route to avoid collisions.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_others/#caution","title":"Caution","text":"

    Check the range of sensors on your vehicle.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_others/#preferred-vector-map","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_others/#incorrect-vector-map","title":"Incorrect vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_others/#vm-07-02-range-of-detecting-pedestrians-who-enter-the-road","title":"vm-07-02 Range of detecting pedestrians who enter the road","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_others/#detail-of-requirements_1","title":"Detail of requirements","text":"

    Autoware's feature for detecting sudden entries from the roadside tracks pedestrians and cyclists beyond the road boundaries and decelerates to prevent collisions when they are likely to enter the road.

    Creating a Linestring of one of the following types instructs Autoware to treat objects outside that line as not posing a risk of entering the road (see the sketch after this list):

    • guard_rail
    • wall
    • fence
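
    As a minimal sketch with hypothetical IDs, a guardrail is mapped as a plain Linestring carrying only the corresponding type tag:

        <way id="2100">
          <nd ref="3101"/>
          <nd ref="3102"/>
          <tag k="type" v="guard_rail"/>
        </way>
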
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_others/#preferred-vector-map_1","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_others/#incorrect-vector-map_1","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_others/#related-autoware-module","title":"Related Autoware module","text":"
    • map_based_prediction - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_others/#vm-07-03-guardrails-guard-pipes-fences","title":"vm-07-03 Guardrails, guard pipes, fences","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_others/#detail-of-requirements_2","title":"Detail of requirements","text":"

    When creating a Linestring for guardrails or guard pipes (type: guard_rail), position it at the point where the most protruding part on the roadway side is projected vertically onto the ground.

    Follow the same position guidelines for Linestrings of fences (type:fence).

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_others/#preferred-vector-map_2","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_others/#incorrect-vector-map_2","title":"Incorrect vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_others/#related-autoware-module_1","title":"Related Autoware module","text":"
    • Drivable Area design - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_others/#vm-07-04-ellipsoidal-height","title":"vm-07-04 Ellipsoidal height","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_others/#detail-of-requirements_3","title":"Detail of requirements","text":"

    The height of a Point should be based on the ellipsoidal height (WGS84), in meters.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_others/#preferred-vector-map_3","title":"Preferred vector map","text":"

    The height of a Point is the distance from the ellipsoidal surface to the ground.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_others/#incorrect-vector-map_3","title":"Incorrect vector map","text":"

    The height of a Point is the orthometric height, i.e., the distance from the geoid to the ground.
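
    As a reminder of how the two height systems relate, the ellipsoidal height h, the orthometric height H, and the geoid undulation N (the height of the geoid above the ellipsoid) satisfy

        h = H + N

    so a map built with orthometric heights is offset from the required ellipsoidal heights by the local geoid undulation.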

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_stop_line/","title":"Category stop line","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_stop_line/#categorystop-line","title":"Category:Stop Line","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_stop_line/#vm-02-01-stop-line-alignment","title":"vm-02-01 Stop line alignment","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_stop_line/#detail-of-requirements","title":"Detail of requirements","text":"

    Place the Linestring (type:stop_line) for the stop line along the near-side edge of the painted white line (the edge the vehicle reaches first).

    Refer to Web.Auto Documentation - Creation and edit of a stop point (StopPoint) for the method of creation in Vector Map Builder.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_stop_line/#preferred-vector-map","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_stop_line/#incorrect-vector-map","title":"Incorrect vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_stop_line/#vm-02-02-stop-sign","title":"vm-02-02 Stop sign","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_stop_line/#detail-of-requirements_1","title":"Detail of requirements","text":"

    Where there is no stop line on the road but a stop sign exists, place a Linestring as the stop line next to the sign.

    Create a reference from a Lanelet (subtype:road) to a Regulatory Element (subtype:traffic_sign), and have this Regulatory Element refer to a Linestring (type:stop_line) and a Linestring (type:traffic_sign, subtype:stop_sign).
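
    A minimal Lanelet2 OSM sketch of this structure (element IDs are hypothetical; the refers/ref_line roles follow the standard Lanelet2 regulatory-element convention):

        <relation id="5000">  <!-- regulatory element -->
          <member type="way" role="refers" ref="2201"/>    <!-- type:traffic_sign, subtype:stop_sign -->
          <member type="way" role="ref_line" ref="2202"/>  <!-- type:stop_line -->
          <tag k="type" v="regulatory_element"/>
          <tag k="subtype" v="traffic_sign"/>
        </relation>
        <relation id="1000">  <!-- road Lanelet referring to the regulatory element -->
          <member type="relation" role="regulatory_element" ref="5000"/>
          <!-- left/right members and the usual road Lanelet tags omitted -->
          <tag k="type" v="lanelet"/>
          <tag k="subtype" v="road"/>
        </relation>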

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_stop_line/#preferred-vector-map_1","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_stop_line/#incorrect-vector-map_1","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_stop_line/#related-autoware-module","title":"Related Autoware module","text":"
    • Stop Line design - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_traffic_light/","title":"Category traffic light","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_traffic_light/#categorytraffic-light","title":"Category:Traffic Light","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_traffic_light/#vm-04-01-traffic-light-basics","title":"vm-04-01 Traffic light basics","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_traffic_light/#detail-of-requirements","title":"Detail of requirements","text":"

    When creating traffic lights in a vector map, meet the following requirements:

    • Road Lanelet (subtype:road). Quantity: one.
    • Traffic Light. Multiple instances possible.
      • Traffic light Linestring (type:traffic_light).
      • Traffic light bulbs Linestring (type:light_bulbs).
      • Stop line Linestring (type:stop_line).
    • Regulatory element for traffic lights (subtype:traffic_light). Referenced by the road Lanelet and references both the traffic light (traffic_light, light_bulbs) and stop line (stop_line). Quantity: one.

    Refer to Web.Auto Documentation - Creation of a traffic light and a stop line for the method of creation in Vector Map Builder.

    Refer to vm-04-02 and vm-04-03 for the specifications of traffic light and traffic light bulb objects.
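
    Putting the pieces together, a minimal Lanelet2 OSM sketch of the regulatory element could look as follows. The element IDs are hypothetical, and the light_bulbs role name follows the Autoware lanelet2 extension convention, so verify it against your map tooling:

        <relation id="6000">
          <member type="way" role="refers" ref="2301"/>       <!-- type:traffic_light -->
          <member type="way" role="light_bulbs" ref="2302"/>  <!-- type:light_bulbs -->
          <member type="way" role="ref_line" ref="2303"/>     <!-- type:stop_line -->
          <tag k="type" v="regulatory_element"/>
          <tag k="subtype" v="traffic_light"/>
        </relation>

    The road Lanelet then references this relation with a member of role regulatory_element, as in vm-02-02.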

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_traffic_light/#preferred-vector-map","title":"Preferred vector map","text":"

    If there is a crosswalk at the intersection, arrange for the road's Lanelet and the crosswalk's Lanelet to intersect and overlap.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_traffic_light/#related-autoware-module","title":"Related Autoware module","text":"
    • Traffic Light design - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_traffic_light/#vm-04-02-traffic-light-position-and-size","title":"vm-04-02 Traffic light position and size","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_traffic_light/#detail-of-requirements_1","title":"Detail of requirements","text":"

    Create traffic lights with Linestring.

    • type:traffic_light
    • subtype:red_yellow_green (optional)

    Create the Linestring so that its length (from start point to end point) aligns precisely with the traffic light's bottom edge. Ensure the traffic light's height above the ground is correctly represented in the Linestring's 3D coordinates.

    Use tag:height for the traffic light's height, e.g., for 50cm, write tag:height=0.5. Note that this height indicates the size of the traffic light, not its position.
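
    A minimal sketch with hypothetical IDs (node coordinates omitted); the two nodes span the bottom edge of the housing, and tag:height gives the 50 cm body size:

        <way id="2301">
          <nd ref="3301"/>  <!-- one end of the bottom edge, at the light's actual 3D position -->
          <nd ref="3302"/>  <!-- other end of the bottom edge -->
          <tag k="type" v="traffic_light"/>
          <tag k="subtype" v="red_yellow_green"/>  <!-- optional; currently ignored by Autoware -->
          <tag k="height" v="0.5"/>                <!-- size of the light body, not its elevation -->
        </way>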

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_traffic_light/#supplemental-information","title":"Supplemental information","text":"

    Autoware currently ignores subtype red_yellow_green.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_traffic_light/#preferred-vector-map_1","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_traffic_light/#incorrect-vector-map","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_traffic_light/#related-autoware-module_1","title":"Related Autoware module","text":"
    • Traffic Light design - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_traffic_light/#vm-04-03-traffic-light-lamps","title":"vm-04-03 Traffic light lamps","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_traffic_light/#detail-of-requirements_2","title":"Detail of requirements","text":"

    To enable the system to detect the color of traffic lights, the color scheme and arrangement must be accurately created as objects. Indicate the position of the lights with Points. For colored lights, use the color tag to represent the color. For arrow lights, use the arrow tag to indicate the direction.

    • tag: color = red, yellow, green
    • tag: arrow = up, right, left, up_right, up_left

    Use the Points of the lights when creating the Linestring (see the sketch after this list).

    • type: light_bulbs
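
    A minimal sketch of the bulb Points and the light_bulbs Linestring built from them. IDs are hypothetical, node coordinates are omitted, and the traffic_light_id back-reference is an assumption to verify against your map specification:

        <node id="3401"> <tag k="color" v="red"/> </node>
        <node id="3402"> <tag k="color" v="yellow"/> </node>
        <node id="3403"> <tag k="color" v="green"/> </node>
        <node id="3404"> <tag k="color" v="green"/> <tag k="arrow" v="right"/> </node>  <!-- arrow bulb -->
        <way id="2302">
          <nd ref="3401"/> <nd ref="3402"/> <nd ref="3403"/> <nd ref="3404"/>
          <tag k="type" v="light_bulbs"/>
          <tag k="traffic_light_id" v="2301"/>  <!-- assumption: links back to the traffic_light Linestring -->
        </way>
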
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_traffic_light/#preferred-vector-map_2","title":"Preferred vector map","text":"

    The order of the lights' Points can be 1→2→3→4 or 4→3→2→1; either is acceptable.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_traffic_light/#incorrect-vector-map_1","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_traffic_light/#related-autoware-module_2","title":"Related Autoware module","text":"
    • Traffic Light design - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/node-diagram/","title":"Node diagram","text":""},{"location":"design/autoware-architecture/node-diagram/#node-diagram","title":"Node diagram","text":"

    This page depicts the node diagram designs for Autoware Core/Universe architecture.

    "},{"location":"design/autoware-architecture/node-diagram/#autoware-core","title":"Autoware Core","text":"

    TBD.

    "},{"location":"design/autoware-architecture/node-diagram/#autoware-universe","title":"Autoware Universe","text":"

    Open in draw.io for fullscreen

    Note that the diagram is provided for reference only. We plan to update it every release, so it may contain outdated information between releases. If you wish to check the latest node graph, use rqt_graph after launching Autoware.

    "},{"location":"design/autoware-architecture/perception/","title":"Perception Component Design","text":""},{"location":"design/autoware-architecture/perception/#perception-component-design","title":"Perception Component Design","text":""},{"location":"design/autoware-architecture/perception/#purpose-of-this-document","title":"Purpose of this document","text":"

    This document outlines the high-level design strategies, goals and related rationales in the development of the Perception Component. Through this document, it is expected that all OSS developers will comprehend the design philosophy, goals and constraints under which the Perception Component is designed, and participate seamlessly in the development.

    "},{"location":"design/autoware-architecture/perception/#overview","title":"Overview","text":"

    The Perception Component receives inputs from Sensing, Localization, and Map components, and adds semantic information (e.g., Object Recognition, Obstacle Segmentation, Traffic Light Recognition, Occupancy Grid Map), which is then passed on to Planning Component. This component design follows the overarching philosophy of Autoware, defined as the microautonomy concept.

    "},{"location":"design/autoware-architecture/perception/#goals-and-non-goals","title":"Goals and non-goals","text":"

    The role of the Perception Component is to recognize the surrounding environment based on the data obtained through Sensing and acquire sufficient information (such as the presence of dynamic objects, stationary obstacles, blind spots, and traffic signal information) to enable autonomous driving.

    In our overall design, we emphasize the concept of microautonomy architecture. This term refers to a design approach that focuses on the proper modularization of functions, clear definition of interfaces between these modules, and as a result, high expandability of the system. Given this context, the goal of the Perception Component is set not to solve every conceivable complex use case (although we do aim to support basic ones), but rather to provide a platform that can be customized to the user's needs and can facilitate the development of additional features.

    To clarify the design concepts, the following points are listed as goals and non-goals.

    Goals:

    • To provide the basic functions so that a simple ODD can be defined.
    • To achieve a design that can provide perception functionality to every autonomous vehicle.
    • To be extensible with the third-party components.
    • To provide a platform that enables Autoware users to develop the complete functionality and capability.
    • To provide a platform that enables Autoware users to develop the autonomous driving system which always outperforms human drivers.
    • To provide a platform that enables Autoware users to develop the autonomous driving system achieving \"100% accuracy\" or \"error-free recognition\".

    Non-goals:

    • To develop the perception component architecture specialized for specific / limited ODDs.
    • To achieve the complete functionality and capability.
    • To outperform the recognition capability of human drivers.
    • To achieve \"100% accuracy\" or \"error-free recognition\".
    "},{"location":"design/autoware-architecture/perception/#high-level-architecture","title":"High-level architecture","text":"

    This diagram describes the high-level architecture of the Perception Component.

    The Perception Component consists of the following sub-components:

    • Object Recognition: Recognizes dynamic objects surrounding the ego vehicle in the current frame, objects that were not present during map creation, and predicts their future trajectories. This includes:
      • Pedestrians
      • Cars
      • Trucks/Buses
      • Bicycles
      • Motorcycles
      • Animals
      • Traffic cones
      • Road debris: Items such as cardboard, oil drums, trash cans, wood, etc., either dropped on the road or floating in the air
    • Obstacle Segmentation: Identifies point clouds originating from obstacles, including both dynamic objects and static obstacles that require the ego vehicle to either steer clear of them or come to a stop in front of them.
      • This includes:
        • All dynamic objects (as listed above)
        • Curbs/Bollards
        • Barriers
        • Trees
        • Walls/Buildings
      • This does not include:
        • Grass
        • Water splashes
        • Smoke/Vapor
        • Newspapers
        • Plastic bags
    • Occupancy Grid Map: Detects blind spots (areas where no information is available and where dynamic objects may jump out).
    • Traffic Light Recognition: Recognizes the colors of traffic lights and the directions of arrow signals.
    "},{"location":"design/autoware-architecture/perception/#component-interface","title":"Component interface","text":"

    The following describes the input/output concept between Perception Component and other components. See the Perception Component Interface page for the current implementation.

    "},{"location":"design/autoware-architecture/perception/#input-to-the-perception-component","title":"Input to the Perception Component","text":"
    • From Sensing: This input should provide real-time information about the environment.
      • Camera Image: Image data obtained from the camera.
      • Point Cloud: Point Cloud data obtained from LiDAR.
      • Radar Object: Object data obtained from radar.
    • From Localization: This input should provide real-time information about the ego vehicle.
      • Vehicle motion information: Includes the ego vehicle's position.
    • From Map: This input should provide static information about the environment.
      • Vector Map: Contains all static information about the environment, including lane area information.
      • Point Cloud Map: Contains static point cloud maps, which should not include information about the dynamic objects.
    • From API:
      • V2X information: The information from V2X modules. For example, the information from traffic signals.
    "},{"location":"design/autoware-architecture/perception/#output-from-the-perception-component","title":"Output from the Perception Component","text":"
    • To Planning
      • Dynamic Objects: Provides real-time information about objects that cannot be known in advance, such as pedestrians and other vehicles.
      • Obstacle Segmentation: Supplies real-time information about the location of obstacles, which is more primitive than Detected Object.
      • Occupancy Grid Map: Offers real-time information about the presence of occluded area information.
      • Traffic Light Recognition result: Provides the current state of each traffic light in real time.
    "},{"location":"design/autoware-architecture/perception/#how-to-add-new-modules-wip","title":"How to add new modules (WIP)","text":"

    As mentioned in the goals section, this perception module is designed to be extensible by third-party components. For specific instructions on how to add new modules and expand its functionality, please refer to the provided documentation or guidelines (WIP).

    "},{"location":"design/autoware-architecture/perception/#supported-functions","title":"Supported Functions","text":"Feature Description Requirements LiDAR DNN based 3D detector This module takes point clouds as input and detects objects such as vehicles, trucks, buses, pedestrians, and bicycles. - Point Clouds Camera DNN based 2D detector This module takes camera images as input and detects objects such as vehicles, trucks, buses, pedestrians, and bicycles in the two-dimensional image space. It detects objects within image coordinates and providing 3D coordinate information is not mandatory. - Camera Images LiDAR Clustering This module performs clustering of point clouds and shape estimation to achieve object detection without labels. - Point Clouds Semi-rule based detector This module detects objects using information from both images and point clouds, and it consists of two components: LiDAR Clustering and Camera DNN based 2D detector. - Output from Camera DNN based 2D detector and LiDAR Clustering Radar based 3D detector This module takes radar data as input and detects dynamic 3D objects. In detail, please see this document. - Radar data Object Merger This module integrates results from various detectors. - Detected Objects Interpolator This module stabilizes the object detection results by maintaining long-term detection results using Tracking results. - Detected Objects - Tracked Objects Tracking This module gives ID and estimate velocity to the detection results. - Detected Objects Prediction This module predicts the future paths (and their probabilities) of dynamic objects according to the shape of the map and the surrounding environment. - Tracked Objects - Vector Map Obstacle Segmentation This module identifies point clouds originating from obstacles that the ego vehicle should avoid. - Point Clouds - Point Cloud Map Occupancy Grid Map This module detects blind spots (areas where no information is available and where dynamic objects may jump out). - Point Clouds - Point Cloud Map Traffic Light Recognition This module detects the position and state of traffic signals. - Camera Images - Vector Map"},{"location":"design/autoware-architecture/perception/#reference-implementation","title":"Reference Implementation","text":"

    When Autoware is launched, the default parameters are loaded, and the Reference Implementation is started. For more details, please refer to the Reference Implementation.

    "},{"location":"design/autoware-architecture/perception/reference_implementation/","title":"Perception Component Reference Implementation Design","text":""},{"location":"design/autoware-architecture/perception/reference_implementation/#perception-component-reference-implementation-design","title":"Perception Component Reference Implementation Design","text":""},{"location":"design/autoware-architecture/perception/reference_implementation/#purpose-of-this-document","title":"Purpose of this document","text":"

    This document outlines the detailed design of the reference implementations. This allows developers and users to understand what is currently available in the Perception Component and how to utilize, expand, or add to its features.

    "},{"location":"design/autoware-architecture/perception/reference_implementation/#architecture","title":"Architecture","text":"

    This diagram describes the architecture of the reference implementation.

    The Perception component consists of the following sub-components:

    • Obstacle Segmentation: Identifies point clouds originating from obstacles that the ego vehicle should avoid, including not only dynamic objects but also static obstacles. For example, construction cones are recognized using this module.
    • Occupancy Grid Map: Detects blind spots (areas where no information is available and where dynamic objects may jump out).
    • Object Recognition: Recognizes dynamic objects surrounding the ego vehicle in the current frame and predicts their future trajectories.
      • Detection: Detects the pose and velocity of dynamic objects such as vehicles and pedestrians.
        • Detector: Triggers object detection processing frame by frame.
        • Interpolator: Maintains stable object detection. Even if the output from Detector suddenly becomes unavailable, Interpolator uses the output from the Tracking module to maintain the detection results without missing any objects.
      • Tracking: Associates detected results across multiple frames.
      • Prediction: Predicts trajectories of dynamic objects.
    • Traffic Light Recognition: Recognizes the colors of traffic lights and the directions of arrow signals.
    "},{"location":"design/autoware-architecture/perception/reference_implementation/#internal-interface-in-the-perception-component","title":"Internal interface in the perception component","text":"
    • Obstacle Segmentation to Object Recognition
      • Point Cloud: A Point Cloud observed in the current frame, where the ground and outliers are removed.
    • Obstacle Segmentation to Occupancy Grid Map
      • Ground filtered Point Cloud: A Point Cloud observed in the current frame, where the ground is removed.
    • Occupancy Grid Map to Obstacle Segmentation
      • Occupancy Grid Map: This is used for filtering outliers.
    "},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/faraway-object-detection/","title":"Radar faraway dynamic objects detection with radar objects","text":""},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/faraway-object-detection/#radar-faraway-dynamic-objects-detection-with-radar-objects","title":"Radar faraway dynamic objects detection with radar objects","text":""},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/faraway-object-detection/#overview","title":"Overview","text":"

    This diagram describes the pipeline for radar faraway dynamic object detection.

    "},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/faraway-object-detection/#reference-implementation","title":"Reference implementation","text":""},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/faraway-object-detection/#crossing-filter","title":"Crossing filter","text":"
    • radar_crossing_objects_noise_filter

    This package can filter out noise objects crossing the ego vehicle's path, which are most likely ghost objects.

    "},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/faraway-object-detection/#velocity-filter","title":"Velocity filter","text":"
    • object_velocity_splitter

    Static objects include a lot of noise, such as reflections from the ground. In many cases, radars can detect dynamic objects stably. To filter out static objects, object_velocity_splitter can be used.

    "},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/faraway-object-detection/#range-filter","title":"Range filter","text":"
    • object_range_splitter

    For some radars, ghost objects sometimes appear at close range. To filter out these objects, object_range_splitter can be used.

    "},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/faraway-object-detection/#vector-map-filter","title":"Vector map filter","text":"
    • object-lanelet-filter

    In most cases, vehicles drive within the drivable area. To filter out objects that are outside the drivable area, object-lanelet-filter can be used. object-lanelet-filter removes objects that are outside the drivable area defined by the vector map.

    Note that if you use object-lanelet-filter for radar faraway detection, you need to define the drivable area in the vector map beyond just the area where the autonomous vehicle itself drives.

    "},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/faraway-object-detection/#radar-object-clustering","title":"Radar object clustering","text":"
    • radar_object_clustering

    This package can combine multiple radar detections originating from one object into a single detection and adjust its class and size. It can suppress the splitting of objects in the tracking module.

    "},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/faraway-object-detection/#note","title":"Note","text":""},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/faraway-object-detection/#parameter-tuning","title":"Parameter tuning","text":"

    Radar-only detection applies various strong noise-suppression steps. This creates a trade-off: if the noise processing is too strong, objects that you actually want to detect will disappear, and if it is too weak, the self-driving system may be unable to start because noise objects constantly appear in front of the vehicle. Tune the parameters with this trade-off in mind.

    "},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/faraway-object-detection/#limitation","title":"Limitation","text":"
    • Elevated railway, vehicles for multi-level intersection

    If you use 2D radars (which detect in the xy-plane but have no z-axis measurement) and the driving area contains elevated railways or vehicles on multi-level intersections, the radar pipeline detects these structures, which has a bad influence on planning results. In addition, elevated railways are currently detected as vehicles because the radar pipeline has no label classification feature, which leads to unintended behavior.

    "},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/radar-based-3d-detector/","title":"Radar based 3D detector","text":""},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/radar-based-3d-detector/#radar-based-3d-detector","title":"Radar based 3D detector","text":""},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/radar-based-3d-detector/#overview","title":"Overview","text":""},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/radar-based-3d-detector/#features","title":"Features","text":"

    Radar based 3D detector aims for the following:

    • Detecting objects farther than the range of LiDAR-based 3D detection.

    Since radar can acquire data from a longer distance than LiDAR (> 100 m), the radar-based 3D detector can be applied when the range of LiDAR-based 3D detection is insufficient. The detection distance of radar-based 3D detection depends on the radar device specification.

    • Improving velocity estimation for dynamic objects

    Radar can measure velocity directly, so fusing radar information with objects from LiDAR-based 3D detection yields more precise twist estimates. This can improve the performance of object tracking/prediction and of planning functions such as adaptive cruise control.

    "},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/radar-based-3d-detector/#whole-pipeline","title":"Whole pipeline","text":"

    Radar based 3D detector with radar objects consists of

    • 3D object detection with Radar pointcloud
    • Noise filter
    • Faraway dynamic 3D object detection
    • Radar fusion to LiDAR-based 3D object detection
    • Radar object tracking
    • Merger of tracked object

    "},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/radar-based-3d-detector/#interface","title":"Interface","text":"
    • Input
      • Message type for pointcloud is ros-perception/radar_msgs/msg/RadarScan.msg
      • Message type for radar objects is autoware_auto_perception_msgs/msg/DetectedObject.
        • Input objects need to be concatenated.
        • Input objects need to be compensated with ego motion.
        • Input objects need to be transformed to base_link.
    • Output
      • Tracked objects
    "},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/radar-based-3d-detector/#module","title":"Module","text":""},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/radar-based-3d-detector/#radar-pointcloud-3d-detection","title":"Radar pointcloud 3D detection","text":"

    Warning

    Under Construction

    "},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/radar-based-3d-detector/#noise-filter-and-radar-faraway-dynamic-3d-object-detection","title":"Noise filter and radar faraway dynamic 3D object detection","text":"

    This function filters noise objects and detects faraway (> 100 m) dynamic vehicles. The main idea is that, when LiDAR is used, the near range can be detected accurately from the LiDAR pointcloud, and the main role of radar is to detect distant objects that cannot be detected with LiDAR alone. For details, please see this document.

    "},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/radar-based-3d-detector/#radar-fusion-to-lidar-based-3d-object-detection","title":"Radar fusion to LiDAR-based 3D object detection","text":"
    • radar_fusion_to_detected_object

    This package contains a sensor fusion module for radar-detected objects and 3D detected objects. The fusion node can:

    • Attach velocity to 3D detections when radar data is successfully matched (a minimal sketch follows this list). The tracking modules use the velocity information to enhance tracking results, while planning modules use it to execute actions like adaptive cruise control.
    • Improve the low confidence 3D detections when corresponding radar detections are found.
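For illustration only, the sketch below shows the core of the first capability: copying a matched radar object's twist onto a LiDAR-based detection. It assumes a built Autoware workspace where autoware_auto_perception_msgs is available; the helper name and the matching step are assumptions, not the radar_fusion_to_detected_object API.

# Minimal sketch (assumed helper, not the package API): overwrite the twist of a
# LiDAR-based detection with the twist of the radar object matched to it.
from autoware_auto_perception_msgs.msg import DetectedObject


def attach_radar_velocity(lidar_object: DetectedObject,
                          matched_radar_object: DetectedObject) -> DetectedObject:
    lidar_object.kinematics.twist_with_covariance = (
        matched_radar_object.kinematics.twist_with_covariance
    )
    lidar_object.kinematics.has_twist = True  # mark the velocity as available
    return lidar_object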
    "},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/radar-based-3d-detector/#radar-object-tracking","title":"Radar object tracking","text":"

    Warning

    Under Construction

    "},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/radar-based-3d-detector/#merger-of-tracked-object","title":"Merger of tracked object","text":"

    Warning

    Under Construction

    "},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/radar-based-3d-detector/#appendix","title":"Appendix","text":""},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/radar-based-3d-detector/#customize-own-radar-interface","title":"Customize own radar interface","text":"

The perception interface of Autoware is defined by DetectedObjects, TrackedObjects, and PredictedObjects; however, other messages can be defined for specific use cases. For example, DetectedObjectWithFeature is used as a customized message within the perception module.

In the same way, you can define your own radar interface. For example, RadarTrack does not include orientation information, as a result of past discussions, especially this discussion. If you need orientation information, you can adapt the radar ROS driver to publish TrackedObjects directly.
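As a hedged sketch of that idea, the node below converts radar_msgs/msg/RadarTracks into autoware_auto_perception_msgs/msg/TrackedObjects and fills the orientation from the velocity direction. The topic names and the heading heuristic are illustrative assumptions, and the example assumes a built Autoware workspace providing these message packages.

# Minimal sketch: republish RadarTracks as TrackedObjects with a yaw derived from the
# velocity direction. Topic names and the heading heuristic are illustrative assumptions.
import math

import rclpy
from rclpy.node import Node
from radar_msgs.msg import RadarTracks
from autoware_auto_perception_msgs.msg import TrackedObject, TrackedObjects


class RadarTracksToTrackedObjects(Node):
    def __init__(self):
        super().__init__("radar_tracks_to_tracked_objects")
        self.pub = self.create_publisher(TrackedObjects, "~/output/objects", 1)
        self.sub = self.create_subscription(RadarTracks, "~/input/tracks", self.on_tracks, 1)

    def on_tracks(self, msg: RadarTracks):
        out = TrackedObjects()
        out.header = msg.header
        for track in msg.tracks:
            obj = TrackedObject()
            obj.object_id = track.uuid
            obj.kinematics.pose_with_covariance.pose.position = track.position
            # Assumed heuristic: use the velocity direction as the yaw orientation.
            yaw = math.atan2(track.velocity.y, track.velocity.x)
            obj.kinematics.pose_with_covariance.pose.orientation.z = math.sin(yaw / 2.0)
            obj.kinematics.pose_with_covariance.pose.orientation.w = math.cos(yaw / 2.0)
            obj.kinematics.twist_with_covariance.twist.linear = track.velocity
            out.objects.append(obj)
        self.pub.publish(out)


def main():
    rclpy.init()
    rclpy.spin(RadarTracksToTrackedObjects())


if __name__ == "__main__":
    main()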

    "},{"location":"design/autoware-architecture/planning/","title":"Planning component design","text":""},{"location":"design/autoware-architecture/planning/#planning-component-design","title":"Planning component design","text":""},{"location":"design/autoware-architecture/planning/#purpose","title":"Purpose","text":"

    The Planning Component in autonomous driving systems plays a crucial role in generating a target trajectory (path and speed) for autonomous vehicles. It ensures safety and adherence to traffic rules, fulfilling specific missions.

    This document outlines the planning requirements and design within Autoware, aiding developers in comprehending the design and extendibility of the Planning Component.

    The document is divided into two parts: the first part discusses high-level requirements and design, and the latter part focuses on actual implementations and functionalities provided.

    "},{"location":"design/autoware-architecture/planning/#goals-and-non-goals","title":"Goals and non-goals","text":"

    Our objective extends beyond merely developing an autonomous driving system. We aim to offer an \"autonomous driving platform\" where users can enhance autonomous driving functionalities based on their individual needs.

    In Autoware, we utilize the microautonomy architecture concept, which emphasizes high extensibility, functional modularity, and clearly defined interfaces.

    With this in mind, the design policy for the Planning Component is focused not on addressing every complex autonomous driving scenario (as that is a very challenging problem), but on providing a customizable and easily extendable Planning development platform. We believe this approach will allow the platform to meet a wide range of needs, ultimately solving many complex use cases.

    To clarify this policy, the Goals and Non-Goals are defined as follows:

    Goals:

    • The basic functions are provided so that a simple ODD can be defined
      • Before extending its functionalities, the Planning Component must provide the essential features necessary for autonomous driving. This encompasses basic operations like moving, stopping, and turning, as well as handling lane changes and obstacle avoidance in relatively safe and simple contexts.
    • The functionality is modularized for user-driven extension
      • The system is designed to adapt to various Operational Design Domains (ODDs) with extended functionalities. Modularization, akin to plug-ins, allows for creating systems tailored to diverse needs, such as different levels of autonomous driving and varied vehicular or environmental applications (e.g., Lv4/Lv2 autonomous driving, public/private road driving, large vehicles, small robots).
      • Reducing functionalities for specific ODDs, like obstacle-free private roads, is also a key aspect. This modular approach allows for reductions in power consumption or sensor requirements, aligning with specific user needs.
    • The capability is extensible with the decision of human operators
      • Incorporating operator assistance is a critical aspect of functional expansion. It means that the system can adapt to complex and challenging scenarios with human support. The specific type of operator is not defined here. It might be a person accompanying in the vehicle during the prototype development phase or a remote operator connected in emergencies during autonomous driving services.

    Non-goals:

    The Planning Component is designed to be extended with third-party modules. Consequently, the following are not the goals of Autoware's Planning Component:

    • To provide all user-required functionalities by default.
    • To provide complete functionality and performance characteristic of an autonomous driving system.
    • To provide performance that consistently surpasses human capabilities or ensures absolute safety.

    These aspects are specific to our vision of an autonomous driving \"platform\" and may not apply to a typical autonomous driving Planning Component.

    "},{"location":"design/autoware-architecture/planning/#high-level-design","title":"High level design","text":"

    This diagram illustrates the high-level architecture of the Planning Component. This represents an idealized design, and current implementations might vary. Further details on implementations are provided in the latter sections of this document.

    Following the principles of microautonomy architecture, we have adopted a modular system framework. The functions within the Planning domain are implemented as modules, dynamically or statically adaptable according to specific use cases. This includes modules for lane changes, intersection handling, and pedestrian crossings, among others.

    The Planning Component comprises several sub-components:

    • Mission Planning: This module calculates routes from the current location to the destination, utilizing map data. Its functionality is similar to that of Fleet Management Systems (FMS) or car navigation route planning.
    • Planning Modules: These modules plan the vehicle's behavior for the assigned mission, including the target trajectory, blinker signaling, etc. They are divided into Behavior and Motion categories:
      • Behavior: Focuses on calculating safe and rule-compliant routes, managing decisions for lane changes, intersection entries, and stoppings at a stop line.
      • Motion: Works in cooperation with the Behavior modules to determine the vehicle's trajectory, considering its motion and ride comfort. It includes lateral and longitudinal planning for route shaping and speed calculation.
    • Validation: Ensures the safety and appropriateness of the planned trajectories, with capabilities for emergency response. In cases where a planned trajectory is unsuitable, it triggers emergency protocols or generates alternative paths.
    "},{"location":"design/autoware-architecture/planning/#highlights","title":"Highlights","text":"

    Key aspects of this high-level design include:

    "},{"location":"design/autoware-architecture/planning/#modulation-of-each-function","title":"Modulation of each function","text":"

    Essential Planning functions, such as route generation, lane changes, and intersection management, are modularized. These modules come with standardized interfaces, enabling easy addition or modification. More details on these interfaces will be discussed in subsequent sections. You can see the details about how to enable/disable each module in the implementation documentation of Planning.

    "},{"location":"design/autoware-architecture/planning/#separation-of-mission-planning-sub-component","title":"Separation of Mission Planning sub-component","text":"

    Mission Planning serves as a substitute for functions typically found in existing services like FMS (Fleet Management System). Adherence to defined interfaces in the high-level design facilitates easy integration with third-party services.

    "},{"location":"design/autoware-architecture/planning/#separation-of-validation-sub-component","title":"Separation of Validation sub-component","text":"

    Given the extendable nature of the Planning Component, ensuring consistent safety levels across all functions is challenging. Therefore, the Validation function is managed independently from the core planning modules, maintaining a baseline of safety even for arbitrary changes of the planning modules.

    "},{"location":"design/autoware-architecture/planning/#interface-for-hmi-human-machine-interface","title":"Interface for HMI (Human Machine Interface)","text":"

    The HMI is designed for smooth cooperation with human operators. These interfaces enable coordination between the Planning Component and operators, whether in-vehicle or remote.

    "},{"location":"design/autoware-architecture/planning/#trade-offs-for-the-separation-of-planning-and-other-components","title":"Trade-offs for the separation of planning and other components","text":"

    In Autoware's overarching design, the separation of components like Planning, Perception, Localization, and Control facilitates cooperation with third-party modules. However, this separation entails trade-offs between performance and extensibility. For instance, the Perception component might process unnecessary objects due to its separation from Planning. Similarly, separating planning and control can pose challenges in accounting for vehicle dynamics during planning. To mitigate these issues, we might need to enhance interface information or increase computational efforts.

    "},{"location":"design/autoware-architecture/planning/#customize-features","title":"Customize features","text":"

    A significant feature of the Planning Component design is its ability to integrate with external modules. The diagram below shows various methods for incorporating external functionalities.

    "},{"location":"design/autoware-architecture/planning/#1-adding-new-modules-to-the-planning-component","title":"1. Adding New Modules to the Planning Component","text":"

    Users can augment or replace existing Planning functionalities with new modules. This approach is commonly used for extending features, allowing for the addition of capabilities absent in the desired ODD or simplification of existing features.

    However, adding these functionalities requires well-organized module interfaces. As of November 2023, an ideal modular system is not fully established, presenting some limitations. For more information, please refer to the Reference Implementation section Customize features in the current implementation and the implementation documentation of Planning.

    "},{"location":"design/autoware-architecture/planning/#2-replacing-sub-components-of-planning","title":"2. Replacing Sub-components of Planning","text":"

    Collaboration and extension at the sub-component level may interest some users. This could involve replacing Mission Planning with an existing FMS service or incorporating a third-party trajectory generation module while utilizing the existing Validation functionality.

    Adhering to the Internal interface in the planning component, collaboration and extension at this level are feasible. While complex coordination with existing Planning features may be limited, it allows for integration between certain Planning Component functionalities and external modules.

    "},{"location":"design/autoware-architecture/planning/#3-replacing-the-entire-planning-component","title":"3. Replacing the Entire Planning Component","text":"

    Organizations or research entities developing autonomous driving Planning systems might be interested in integrating their proprietary Planning solutions with Autoware's Perception or Control modules. This can be achieved by replacing the entire Planning system, following the robust and stable interfaces defined between components. It is important to note, however, that direct coordination with existing Planning modules might not be possible.

    "},{"location":"design/autoware-architecture/planning/#component-interface","title":"Component interface","text":"

    This section describes the inputs and outputs of the Planning Component and of its internal modules. See the Planning Component Interface page for the current implementation.

    "},{"location":"design/autoware-architecture/planning/#input-to-the-planning-component","title":"Input to the planning component","text":"
    • From Map
      • Vector map: Contains all static information about the environment, including lane connection information for route planning, lane geometry for generating a reference path, and traffic rule-related information.
    • From Perception
      • Detected object information: Provides real-time information about objects that cannot be known in advance, such as pedestrians and other vehicles. The Planning Component plans maneuvers to avoid collisions with these objects.
      • Detected obstacle information: Supplies real-time information about the location of obstacles, which is more primitive than Detected Object and used for emergency stops and other safety measures.
      • Occupancy map information: Offers real-time information about the presence of pedestrians and other vehicles and occluded area information.
      • Traffic light recognition result: Provides the current state of each traffic light in real time. The Planning Component extracts relevant information for the planned path and determines whether to stop at intersections.
    • From Localization
      • Vehicle motion information: Includes the ego vehicle's position, velocity, acceleration, and other motion-related data.
    • From System
      • Operation mode: Indicates whether the vehicle is operating in Autonomous mode.
    • From Human Machine Interface (HMI)
      • Feature execution: Allows for executing/authorizing autonomous driving operations, such as lane changes or entering intersections, by human operators.
    • From API Layer
      • Destination (Goal): Represents the final position that the Planning Component aims to reach.
      • Checkpoint: Represents a midpoint along the route to the destination. This is used during route calculation.
      • Velocity limit: Sets the maximum speed limit for the vehicle.
    "},{"location":"design/autoware-architecture/planning/#output-from-the-planning-component","title":"Output from the planning component","text":"
    • To Control
      • Trajectory: Provides a smooth sequence of pose, twist, and acceleration that the Control Component must follow. The trajectory is typically 10 seconds long with a 0.1-second resolution.
      • Turn Signals: Controls the vehicle's turn indicators, such as right, left, hazard, etc. based on the planned maneuvers.
    • To System
      • Diagnostics: Reports the state of the Planning Component, indicating whether the processing is running correctly and whether a safe plan is being generated.
    • To Human Machine Interface (HMI)
      • Feature execution availability: Indicates the status of operations that can be executed or are required, such as lane changes or entering intersections.
      • Trajectory candidate: Shows the potential trajectory that will be executed after the operator approves the feature execution.
    • To API Layer
      • Planning factors: Provides information about the reasoning behind the current planning behavior. This may include the position of target objects to avoid, obstacles that led to the decision to stop, and other relevant information.
    "},{"location":"design/autoware-architecture/planning/#internal-interface-in-the-planning-component","title":"Internal interface in the planning component","text":"
    • Mission Planning to Scenario Planning
      • Route: Offers guidance for the path that needs to be followed from the starting point to the destination. This path is determined based on information such as lane IDs defined on the map. At the route level, it doesn't explicitly indicate which specific lanes to take, and the route can contain multiple lanes.
    • Behavior Planning to Motion Planning
      • Path: Provides a rough position and velocity to be followed by the vehicle. These path points are usually defined with an interval of about 1 meter. Although other interval distances are possible, it may impact the precision or performance of the planning component.
      • Drivable area: Defines regions where the vehicle can drive, such as within lanes or physically drivable areas. It assumes that the motion planner will calculate the final trajectory within this defined area.
    • Scenario Planning to Validation
      • Trajectory: Defines the desired positions, velocities, and accelerations which the Control Component will try to follow. Trajectory points are defined at intervals of approximately 0.1 seconds based on the trajectory velocities.
    • Validation to Control Component
      • Trajectory: Same as above but with some additional safety considerations.
    "},{"location":"design/autoware-architecture/planning/#detailed-design","title":"Detailed design","text":""},{"location":"design/autoware-architecture/planning/#supported-features","title":"Supported features","text":"Feature Description Requirements Figure Demonstration Route Planning Plan route from the ego vehicle position to the destination. Reference implementation is in Mission Planner, enabled by launching the mission_planner node. - Lanelet map (driving lanelets) Path Planning from Route Plan path to be followed from the given route. Reference implementation is in Behavior Path Planner. - Lanelet map (driving lanelets) Obstacle Avoidance Plan path to avoid obstacles by steering operation. Reference implementation is in Static Avoidance Module, Path Optimizer. Enable flag in parameter: launch path_optimizer true - objects information Demonstration Video Path Smoothing Plan path to achieve smooth steering. Reference implementation is in Path Optimizer. - Lanelet map (driving lanelet) Demonstration Video Narrow Space Driving Plan path to drive within the drivable area. Furthermore, when it is not possible to drive within the drivable area, stop the vehicle to avoid exiting the drivable area. Reference implementation is in Path Optimizer. - Lanelet map (high-precision lane boundaries) Demonstration Video Lane Change Plan path for lane change to reach the destination. Reference implementation is in Lane Change. - Lanelet map (driving lanelets) Demonstration Video Pull Over Plan path for pull over to park at the road shoulder. Reference implementation is in Goal Planner. - Lanelet map (shoulder lane) Demonstration Videos: Simple Pull Over Arc Forward Pull Over Arc Backward Pull Over Pull Out Plan path for pull over to start from the road shoulder. Reference implementation is in Start Planner. - Lanelet map (shoulder lane) Demonstration Video: Simple Pull Out Backward Pull Out Path Shift Plan path in lateral direction in response to external instructions. Reference implementation is in Side Shift Module. - None Obstacle Stop Plan velocity to stop for an obstacle on the path. Reference implementation is in Obstacle Stop Planner, Obstacle Cruise Planner. launch obstacle_stop_planner and enable flag: TODO, launch obstacle_cruise_planner and enable flag: TODO - objects information Demonstration Video Obstacle Deceleration Plan velocity to decelerate for an obstacle located around the path. Reference implementation is in Obstacle Stop Planner, Obstacle Cruise Planner. - objects information Demonstration Video Adaptive Cruise Control Plan velocity to follow the vehicle driving in front of the ego vehicle. Reference implementation is in Obstacle Stop Planner, Obstacle Cruise Planner. - objects information Decelerate for cut-in vehicles Plan velocity to avoid a risk for cutting-in vehicle to ego lane. Reference implementation is in Obstacle Cruise Planner. - objects information Surround Check at starting Plan velocity to prevent moving when an obstacle exists around the vehicle. Reference implementation is in Surround Obstacle Checker. Enable flag in parameter: use_surround_obstacle_check true in tier4_planning_component.launch.xml < - objects information Demonstration Video Curve Deceleration Plan velocity to decelerate the speed on a curve. Reference implementation is in Motion Velocity Smoother. - None Curve Deceleration for Obstacle Plan velocity to decelerate the speed on a curve for a risk of obstacle collision around the path. Reference implementation is in Obstacle Velocity Limiter. 
- objects information - Lanelet map (static obstacle) Demonstration Video Crosswalk Plan velocity to stop or decelerate for pedestrians approaching or walking on a crosswalk. Reference implementation is in Crosswalk Module. - objects information - Lanelet map (pedestrian crossing) Demonstration Video Intersection Oncoming Vehicle Check Plan velocity for turning right/left at intersection to avoid a risk with oncoming other vehicles. Reference implementation is in Intersection Module. - objects information - Lanelet map (intersection lane and yield lane) Demonstration Video Intersection Blind Spot Check Plan velocity for turning right/left at intersection to avoid a risk with other vehicles or motorcycles coming from behind blind spot. Reference implementation is in Blind Spot Module. - objects information - Lanelet map (intersection lane) Demonstration Video Intersection Occlusion Check Plan velocity for turning right/left at intersection to avoid a risk with the possibility of coming vehicles from occlusion area. Reference implementation is in Intersection Module. - objects information - Lanelet map (intersection lane) Demonstration Video Intersection Traffic Jam Detection Plan velocity for intersection not to enter the intersection when a vehicle is stopped ahead for a traffic jam. Reference implementation is in Intersection Module. - objects information - Lanelet map (intersection lane) Demonstration Video Traffic Light Plan velocity for intersection according to a traffic light signal. Reference implementation is in Traffic Light Module. - Traffic light color information Demonstration Video Run-out Check Plan velocity to decelerate for the possibility of nearby objects running out into the path. Reference implementation is in Run Out Module. - objects information Demonstration Video Stop Line Plan velocity to stop at a stop line. Reference implementation is in Stop Line Module. - Lanelet map (stop line) Demonstration Video Occlusion Spot Check Plan velocity to decelerate for objects running out from occlusion area, for example, from behind a large vehicle. Reference implementation is in Occlusion Spot Module. - objects information - Lanelet map (private/public lane) Demonstration Video No Stop Area Plan velocity not to stop in areas where stopping is prohibited, such as in front of the fire station entrance. Reference implementation is in No Stopping Area Module. - Lanelet map (no stopping area) Merge from Private Area to Public Road Plan velocity for entering the public road from a private driveway to avoid a risk of collision with pedestrians or other vehicles. Reference implementation is in Merge from Private Area Module. - objects information - Lanelet map (private/public lane) WIP Speed Bump Plan velocity to decelerate for speed bumps. Reference implementation is in Speed Bump Module. - Lanelet map (speed bump) Demonstration Video Detection Area Plan velocity to stop at the corresponding stop when an object exist in the designated detection area. Reference implementation is in Detection Area Module. - Lanelet map (detection area) Demonstration Video No Drivable Lane Plan velocity to stop before exiting the area designated by ODD (Operational Design Domain) or stop the vehicle if autonomous mode started in out of ODD lane. Reference implementation is in No Drivable Lane Module. - Lanelet map (no drivable lane) Collision Detection when deviating from lane Plan velocity to avoid conflict with other vehicles driving in the another lane when the ego vehicle is deviating from own lane. 
Reference implementation is in Out of Lane Module. - objects information - Lanelet map (driving lane) WIP Parking Plan path and velocity for given goal in parking area. Reference implementation is in Free Space Planner. - objects information - Lanelet map (parking area) Demonstration Video Autonomous Emergency Braking (AEB) Perform an emergency stop if a collision with an object ahead is anticipated. It is noted that this function is expected as a final safety layer, and this should work even in the event of failures in the Localization or Perception system. Reference implementation is in Out of Lane Module. - Primitive objects Minimum Risk Maneuver (MRM) Provide appropriate MRM (Minimum Risk Maneuver) instructions when a hazardous event occurs. For example, when a sensor trouble found, send an instruction for emergency braking, moderate stop, or pulling over to the shoulder, depending on the severity of the situation. Reference implementation is in TODO - TODO WIP Trajectory Validation Check the planned trajectory is safe. If it is unsafe, take appropriate action, such as modify the trajectory, stop sending the trajectory or report to the autonomous driving system. Reference implementation is in Planning Validator. - None Running Lane Map Generation Generate lane map from localization data recorded in manual driving. Reference implementation is in WIP - None WIP Running Lane Optimization Optimize the centerline (reference path) of the map to make it smooth considering the vehicle kinematics. Reference implementation is in Static Centerline Optimizer. - Lanelet map (driving lanes) WIP"},{"location":"design/autoware-architecture/planning/#reference-implementation","title":"Reference Implementation","text":"

    The following diagram describes the reference implementation of the Planning component. By adding new modules or extending the functionalities, various ODDs can be supported.

    Note that some implementations do not adhere to the high-level architecture design due to implementation difficulties and require updating.

    For more details, please refer to the design documents in each package.

    • mission_planner: calculates the route from start to goal based on the map information.
    • behavior_path_planner: calculates path and drivable area based on the traffic rules.
      • lane_following
      • lane_change
      • static_obstacle_avoidance
      • pull_over
      • pull_out
      • side_shift
    • behavior_velocity_planner: calculates max speed based on the traffic rules.
      • detection_area
      • blind_spot
      • cross_walk
      • stop_line
      • traffic_light
      • intersection
      • no_stopping_area
      • virtual_traffic_light
      • occlusion_spot
      • run_out
    • obstacle_avoidance_planner: calculates the path shape under obstacle and drivable area constraints.
    • surround_obstacle_checker: keeps the vehicle stopped when there are obstacles around the ego vehicle. It works only when the vehicle is stopped.
    • obstacle_stop_planner: When there are obstacles on or near the trajectory, it calculates the maximum velocity of the trajectory points depending on the situation: stopping, slowing down, or adaptive cruise (following the car).
      • stop
      • slow_down
      • adaptive_cruise
    • costmap_generator: generates a costmap for path generation from dynamic objects and lane information.
    • freespace_planner: calculates trajectory considering the feasibility (e.g. curvature) for the freespace scene. Algorithms are described here.
    • scenario_selector: chooses a trajectory according to the current scenario.
    • external_velocity_limit_selector: takes an appropriate velocity limit from multiple candidates.
    • motion_velocity_smoother: calculates final velocity considering velocity, acceleration, and jerk constraints.
    "},{"location":"design/autoware-architecture/planning/#important-information-in-the-current-implementation","title":"Important information in the current implementation","text":"

    An important difference compared to the high-level design is the \"introduction of the scenario layer\" and the \"clear separation of behavior and motion.\" These are introduced due to current performance and implementation challenges. Whether to define these as part of the high-level design or to improve them as part of the implementation is a matter of discussion.

    "},{"location":"design/autoware-architecture/planning/#introducing-the-scenario-planning-layer","title":"Introducing the Scenario Planning layer","text":"

    There are different requirements for interfaces between driving in well-structured lanes and driving in a free-space area like a parking lot. For example, while Lane Driving can handle routes with map IDs, this is not appropriate for planning in free space. The mechanism that switches planning sub-components at the scenario level (Lane Driving, Parking, etc.) enables a flexible design of the interface; however, it has the drawback of limiting the reuse of modules across different scenarios.

    "},{"location":"design/autoware-architecture/planning/#separation-of-behavior-and-motion","title":"Separation of Behavior and Motion","text":"

    One of the classic approaches to Planning involves dividing it into \"Behavior\", which decides the action, and \"Motion\", which determines the final movement. However, this separation implies a trade-off with performance, as performance tends to degrade as functions become more separated. For example, Behavior needs to make decisions without prior knowledge of the computations that Motion will eventually perform, which generally results in conservative decision-making. On the other hand, if Behavior and Motion are integrated, motion performance and decision-making become interdependent, creating challenges in terms of expandability, such as when you wish to extend only the decision-making function to follow regional traffic rules.

    To understand this background, this previously discussed document may be useful.

    "},{"location":"design/autoware-architecture/planning/#customize-features-in-the-current-implementation","title":"Customize features in the current implementation","text":"

    While it is possible to add module-level functionalities in the current implementation, a unified interface for all functionalities is not provided. Here's a brief explanation of the methods of extending at the module level in the current implementation.

    "},{"location":"design/autoware-architecture/planning/#add-new-modules-in-behavior_velocity_planner-or-behavior_path_planner","title":"Add new modules in behavior_velocity_planner or behavior_path_planner","text":"

    ROS nodes such as behavior_path_planner and behavior_velocity_planner have a module interface available through plugins. By adding modules in accordance with the module interfaces defined in these ROS nodes, dynamic loading/unloading of modules becomes possible. For specific methods of adding modules, please refer to the documentation of each package.

    "},{"location":"design/autoware-architecture/planning/#add-a-new-ros-node-in-the-planning-component","title":"Add a new ros node in the planning component","text":"

    When adding modules to Motion Planning, it is necessary to create the module as a ROS node and integrate it into the Planning Component. In the current configuration, each node adds information to the target trajectory calculated upstream, so introducing a ROS node into this pipeline allows the functionality to be extended.

    "},{"location":"design/autoware-architecture/planning/#add-or-replace-with-scenarios","title":"Add or replace with scenarios","text":"

    The current implementation has introduced a scenario-level switching logic as a method for collectively switching multiple modules. This allows for the addition of new scenarios (e.g., highway driving).

    By creating a scenario as a ROS node and registering it with the scenario_selector ROS node, the integration is complete. The benefit of this is that you can introduce significant new functionalities without affecting the implementation of other scenarios (like Lane Driving). However, it only allows for scenario-level coordination through scenario switching and does not enable coordination at the existing planning module level.

    "},{"location":"design/autoware-architecture/sensing/","title":"Sensing component design","text":""},{"location":"design/autoware-architecture/sensing/#sensing-component-design","title":"Sensing component design","text":""},{"location":"design/autoware-architecture/sensing/#overview","title":"Overview","text":"

    The Sensing Component is a collection of modules that apply primitive pre-processing to the raw sensor data.

    The sensor input formats are defined in this component.

    "},{"location":"design/autoware-architecture/sensing/#role","title":"Role","text":"
    • Abstraction of data formats to enable usage of sensors from various vendors
    • Perform common/primitive sensor data processing required by each component
    "},{"location":"design/autoware-architecture/sensing/#high-level-architecture","title":"High-level architecture","text":"

    This diagram describes the high-level architecture of the Sensing Component.

    "},{"location":"design/autoware-architecture/sensing/#inputs","title":"Inputs","text":""},{"location":"design/autoware-architecture/sensing/#input-types","title":"Input types","text":"Sensor Data Message Type Point cloud (LiDARs, depth cameras, etc.) sensor_msgs/msg/PointCloud2.msg Image (RGB, monochrome, depth, etc. cameras) sensor_msgs/msg/Image.msg Radar scan radar_msgs/msg/RadarScan.msg Radar tracks radar_msgs/msg/RadarTracks.msg GNSS-INS position sensor_msgs/msg/NavSatFix.msg GNSS-INS orientation autoware_sensing_msgs/GnssInsOrientationStamped.msg GNSS-INS velocity geometry_msgs/msg/TwistWithCovarianceStamped.msg GNSS-INS acceleration geometry_msgs/msg/AccelWithCovarianceStamped.msg IMU sensor_msgs/msg/Imu.msg Ultrasonics sensor_msgs/msg/Range.msg"},{"location":"design/autoware-architecture/sensing/#design-by-data-types","title":"Design by data-types","text":"
    • GNSS/INS data pre-processing design
    • Image pre-processing design
    • Point cloud pre-processing design
    • Radar pointcloud data pre-processing design
    • Radar objects data pre-processing design
    • Ultrasonics data pre-processing design
    "},{"location":"design/autoware-architecture/sensing/data-types/gnss-ins-data/","title":"GNSS/INS data pre-processing design","text":""},{"location":"design/autoware-architecture/sensing/data-types/gnss-ins-data/#gnssins-data-pre-processing-design","title":"GNSS/INS data pre-processing design","text":"

    Warning

    Under Construction

    "},{"location":"design/autoware-architecture/sensing/data-types/image/","title":"Image pre-processing design","text":""},{"location":"design/autoware-architecture/sensing/data-types/image/#image-pre-processing-design","title":"Image pre-processing design","text":"

    Warning

    Under Construction

    "},{"location":"design/autoware-architecture/sensing/data-types/point-cloud/","title":"Point cloud pre-processing design","text":""},{"location":"design/autoware-architecture/sensing/data-types/point-cloud/#point-cloud-pre-processing-design","title":"Point cloud pre-processing design","text":""},{"location":"design/autoware-architecture/sensing/data-types/point-cloud/#overview","title":"Overview","text":"

    Point cloud pre-processing is a collection of modules that apply some primitive pre-processing to the raw sensor data.

    This pipeline covers the flow of data from drivers to the perception stack.

    "},{"location":"design/autoware-architecture/sensing/data-types/point-cloud/#recommended-processing-pipeline","title":"Recommended processing pipeline","text":"
    graph TD\n    Driver[\"Lidar Driver\"] -->|\"Cloud XYZIRCAEDT\"| FilterPR[\"Polygon Remover Filter / CropBox Filter\"]\n\n    subgraph \"sensing\"\n    FilterPR -->|\"Cloud XYZIRCAEDT\"| FilterDC[\"Motion Distortion Corrector Filter\"]\n    FilterDC -->|\"Cloud XYZIRCAEDT\"| FilterOF[\"Outlier Remover Filter\"]\n    FilterOF -->|\"Cloud XYZIRC\"| FilterDS[\"Downsampler Filter\"]\n    FilterDS -->|\"Cloud XYZIRC\"| FilterTrans[\"Cloud Transformer\"]\n    FilterTrans -->|\"Cloud XYZIRC\"| FilterC\n\n    FilterX[\"...\"] -->|\"Cloud XYZIRC (i)\"| FilterC[\"Cloud Concatenator\"]\n    end\n\n    FilterC -->|\"Cloud XYZIRC\"| SegGr[\"Ground Segmentation\"]
    "},{"location":"design/autoware-architecture/sensing/data-types/point-cloud/#list-of-modules","title":"List of modules","text":"

    The modules used here are from the pointcloud_preprocessor package.

    For details about the modules, see the following table.

    It is recommended that these modules are used as components in a single container. For details, see ROS 2 Composition.

    "},{"location":"design/autoware-architecture/sensing/data-types/point-cloud/#point-cloud-fields","title":"Point cloud fields","text":"

    The lidar driver is expected to output a point cloud with the PointXYZIRCAEDT point type.

    • X (FLOAT32, derived: false): X position
    • Y (FLOAT32, derived: false): Y position
    • Z (FLOAT32, derived: false): Z position
    • I (intensity) (UINT8, derived: false): Measured reflectivity, intensity of the point
    • R (return type) (UINT8, derived: false): Laser return type for dual return lidars
    • C (channel) (UINT16, derived: false): Channel ID of the laser that measured the point
    • A (azimuth) (FLOAT32, derived: true): atan2(Y, X), horizontal angle from the lidar origin to the point
    • E (elevation) (FLOAT32, derived: true): atan2(Z, D), vertical angle from the lidar origin to the point
    • D (distance) (FLOAT32, derived: true): hypot(X, Y, Z), Euclidean distance from the lidar origin to the point
    • T (time) (UINT32, derived: false): Nanoseconds passed since the time of the header when this point was measured

    Note

    A (azimuth), E (elevation), and D (distance) fields are derived fields. They are provided by the driver to reduce the computational load on some parts of the perception stack.
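The derived fields can be recomputed from X, Y, and Z with the formulas given in the list above; the small sketch below simply restates those formulas and is not taken from any driver.

import math


def derived_fields(x: float, y: float, z: float):
    """Return (A, E, D) as defined in the point field list above."""
    distance = math.hypot(x, y, z)       # D: Euclidean distance from the lidar origin
    azimuth = math.atan2(y, x)           # A: horizontal angle
    elevation = math.atan2(z, distance)  # E: vertical angle
    return azimuth, elevation, distance


print(derived_fields(1.0, 1.0, 1.0))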

    Warning

    Autoware supports conversion from PointXYZI to PointXYZIRC (with channel and return type set to 0) for prototyping purposes. However, this conversion is not recommended for production use since it is not efficient.

    "},{"location":"design/autoware-architecture/sensing/data-types/point-cloud/#intensity","title":"Intensity","text":"

    We will use the following ranges for intensity, compatible with the VLP-16 User Manual:

    Quoting from the VLP-16 User Manual:

    For each laser measurement, a reflectivity byte is returned in addition to distance. Reflectivity byte values are segmented into two ranges, allowing software to distinguish diffuse reflectors (e.g. tree trunks, clothing) in the low range from retroreflectors (e.g. road signs, license plates) in the high range. A retroreflector reflects light back to its source with a minimum of scattering. The VLP-16 provides its own light, with negligible separation between transmitting laser and receiving detector, so retroreflecting surfaces pop with reflected IR light compared to diffuse reflectors that tend to scatter reflected energy.

    • Diffuse reflectors report values from 0 to 100 for reflectivities from 0% to 100%.
    • Retroreflectors report values from 101 to 255, where 255 represents an ideal reflection.

    In a typical point cloud without retroreflectors, all intensity points will be between 0 and 100.

    Retroreflective Gradient road sign, Image Source

    But in a point cloud with retroreflectors, the intensity points will be between 0 and 255.

    "},{"location":"design/autoware-architecture/sensing/data-types/point-cloud/#intensity-mapping-for-other-lidar-brands","title":"Intensity mapping for other lidar brands","text":""},{"location":"design/autoware-architecture/sensing/data-types/point-cloud/#hesai-pandarxt16","title":"Hesai PandarXT16","text":"

    Hesai Pandar XT16 User Manual

    This lidar has 2 modes for reporting reflectivity:

    • Linear mapping
    • Non-linear mapping

    If you are using linear mapping mode, you should map from [0, 255] to [0, 100] when constructing the point cloud.

    If you are using non-linear mapping mode, you should map (hesai to autoware)

    • [0, 251] to [0, 100] and
    • [252, 254] to [101, 255]

    when constructing the point cloud.
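A possible piecewise-linear implementation of this non-linear mapping is sketched below. The breakpoints come from the two bullet points above; the function name and rounding are illustrative choices. The same pattern applies to the Livox mapping in the next section with different breakpoints.

# Sketch of the Hesai non-linear reflectivity mapping described above:
# [0, 251] -> [0, 100] (diffuse) and [252, 254] -> [101, 255] (retroreflective).
def hesai_nonlinear_to_autoware_intensity(value: int) -> int:
    if value <= 251:
        return round(value * 100 / 251)
    return round(101 + (value - 252) * (255 - 101) / (254 - 252))


assert hesai_nonlinear_to_autoware_intensity(0) == 0
assert hesai_nonlinear_to_autoware_intensity(251) == 100
assert hesai_nonlinear_to_autoware_intensity(252) == 101
assert hesai_nonlinear_to_autoware_intensity(254) == 255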

    "},{"location":"design/autoware-architecture/sensing/data-types/point-cloud/#livox-mid-70","title":"Livox Mid-70","text":"

    Livox Mid-70 User Manual

    This lidar has 2 modes for reporting reflectivity, similar to the Velodyne VLP-16; only the ranges are slightly different.

    You should map (livox to autoware)

    • [0, 150] to [0, 100] and
    • [151, 255] to [101, 255]

    when constructing the point cloud.

    "},{"location":"design/autoware-architecture/sensing/data-types/point-cloud/#robosense-rs-lidar-16","title":"RoboSense RS-LiDAR-16","text":"

    RoboSense RS-LiDAR-16 User Manual

    No mapping required, same as Velodyne VLP-16.

    "},{"location":"design/autoware-architecture/sensing/data-types/point-cloud/#ouster-os-1-64","title":"Ouster OS-1-64","text":"

    Software User Manual v2.0.0 for all Ouster sensors

    In the manual it is stated:

    Reflectivity [16 bit unsigned int] - sensor Signal Photons measurements are scaled based on measured range and sensor sensitivity at that range, providing an indication of target reflectivity. Calibration of this measurement has not currently been rigorously implemented, but this will be updated in a future firmware release.

    So it is advised to map the 16-bit reflectivity to the [0, 100] range.

    "},{"location":"design/autoware-architecture/sensing/data-types/point-cloud/#leishen-ch64w","title":"Leishen CH64W","text":"

    An English user manual could not be obtained; here is a link to the website.

    A user manual that could be found states:

    Byte 7 represents echo strength, and the value range is 0-255. (Echo strength can reflect the energy reflection characteristics of the measured object in the actual measurement environment. Therefore, the echo strength can be used to distinguish objects with different reflection characteristics.)

    So it is advised to map the [0, 255] to [0, 100] range.

    "},{"location":"design/autoware-architecture/sensing/data-types/point-cloud/#return-type","title":"Return type","text":"

    Various lidars support multiple return modes. Velodyne lidars support Strongest and Last return modes.

    In the PointXYZIRC and PointXYZIRCAEDT types, the R field represents the return type with a UINT8. The return type is vendor-specific. The following table provides an example of return type definitions.

    R (return type) values:

    • 0: Unknown / Not Marked
    • 1: Strongest
    • 2: Last
    "},{"location":"design/autoware-architecture/sensing/data-types/point-cloud/#channel","title":"Channel","text":"

    The channel field is used to identify the vertical channel of the laser that measured the point. In various lidar manuals or literature, it can also be called laser id, ring, laser line.

    For the Velodyne VLP-16, there are 16 channels. The default order of channels in drivers is generally the firing order.

    In the PointXYZIRC and PointXYZIRCAEDT types, the C field represents the vertical channel ID with a UINT16.

    "},{"location":"design/autoware-architecture/sensing/data-types/point-cloud/#azimuth","title":"Azimuth","text":"

    The azimuth field gives the horizontal angle between the optical origin of the lidar and the point. Many lidars measure this using the angle of the rotary encoder when the laser was fired, and the driver typically corrects the value based on calibration data.

    In the PointXYZIRCAEDT type, the A field represents the azimuth angle in radians (clockwise) with a FLOAT32.

    "},{"location":"design/autoware-architecture/sensing/data-types/point-cloud/#elevation","title":"Elevation","text":"

    The elevation field gives the vertical angle between the optical origin of the lidar and the point. In the PointXYZIRCAEDT type, the E field represents the elevation angle in radians (clockwise) with a FLOAT32.

    "},{"location":"design/autoware-architecture/sensing/data-types/point-cloud/#solid-state-and-petal-pattern-lidars","title":"Solid state and petal pattern lidars","text":"

    Warning

    This section is subject to change. Following are suggestions and open for discussion.

    For solid state lidars that have lines, assign the row number as the channel ID.

    For petal pattern lidars, you can keep the channel as 0.

    "},{"location":"design/autoware-architecture/sensing/data-types/point-cloud/#time-stamp","title":"Time stamp","text":"

    In lidar point clouds, each point measurement can have its individual time stamp. This information can be used to eliminate the motion blur that is caused by the movement of the lidar during the scan.

    "},{"location":"design/autoware-architecture/sensing/data-types/point-cloud/#point-cloud-header-time","title":"Point cloud header time","text":"

    The header contains a Time field. The time field has 2 components:

    • sec (int32): Unix time (seconds elapsed since January 1, 1970)
    • nanosec (uint32): Nanoseconds elapsed since the sec field

    The header of the point cloud message is expected to carry the time stamp of the earliest point it contains.

    Note

    The sec field is int32 in ROS 2 Humble. The largest value it can represent is 2^31 - 1 seconds, so it is subject to the year 2038 problem. We will wait for actions on the ROS 2 community side.

    More info at: https://github.com/ros2/rcl_interfaces/issues/85

    "},{"location":"design/autoware-architecture/sensing/data-types/point-cloud/#individual-point-time-stamp","title":"Individual point time stamp","text":"

    Each PointXYZIRCAEDT point has the T field, representing the nanoseconds passed since the first-shot point of the point cloud.

    To calculate the exact time each point was measured, the T nanoseconds are added to the header time.
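In other words, the exact measurement time of a point is the header time plus its T offset. A minimal sketch of that arithmetic, working in integer nanoseconds:

# Exact per-point time = point cloud header time + per-point T offset (ns).
def point_stamp_ns(header_sec: int, header_nanosec: int, point_t_ns: int) -> int:
    return header_sec * 1_000_000_000 + header_nanosec + point_t_ns


# Example: header at 1700000000.5 s, point measured 2.5 ms after the first shot.
print(point_stamp_ns(1_700_000_000, 500_000_000, 2_500_000))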

    Note

    The T field is of type uint32. The largest value it can represent is 2^32 - 1 nanoseconds, which equates to roughly 4.29 seconds. A typical point cloud does not span more than 100 ms for a full cycle, so this field should be sufficient.

    "},{"location":"design/autoware-architecture/sensing/data-types/ultrasonics-data/","title":"Ultrasonics data pre-processing design","text":""},{"location":"design/autoware-architecture/sensing/data-types/ultrasonics-data/#ultrasonics-data-pre-processing-design","title":"Ultrasonics data pre-processing design","text":"

    Warning

    Under Construction

    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-objects-data/","title":"Radar objects data pre-processing design","text":""},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-objects-data/#radar-objects-data-pre-processing-design","title":"Radar objects data pre-processing design","text":""},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-objects-data/#overview","title":"Overview","text":""},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-objects-data/#pipeline","title":"Pipeline","text":"

    This diagram describes the pre-process pipeline for radar objects.

    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-objects-data/#interface","title":"Interface","text":"
    • Input
      • Radar data from device
      • Twist information of ego vehicle motion
    • Output
      • Merged radar objects information
    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-objects-data/#note","title":"Note","text":"
    • The radar pre-processing package filters noise in the ros-perception/radar_msgs/msg/RadarTrack.msg message type, in the sensor coordinate frame.
    • It is recommended to change the coordinate system from each sensor to base_link with the message converter.
    • If there are multiple radar object topics, the object merger package concatenates them.
    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-objects-data/#input","title":"Input","text":"

    Autoware uses radar_msgs/msg/RadarTracks.msg as the radar objects data type. For details, please see Data message for radars.

    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-objects-data/#reference-implementations","title":"Reference implementations","text":""},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-objects-data/#device-driver-for-radars","title":"Device driver for radars","text":"

    Autoware supports ros-perception/radar_msgs/msg/RadarScan.msg and autoware_auto_perception_msgs/msg/TrackedObjects.msg for Radar drivers.

    In detail, please see Device driver for radars.

    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-objects-data/#noise-filter","title":"Noise filter","text":"
    • radar_tracks_noise_filter

    Radar can measure x-axis velocity as Doppler velocity, but cannot directly measure y-axis velocity. Some radars can estimate the y-axis velocity inside the device, but it sometimes lacks precision. This package treats objects whose y-axis velocity exceeds a threshold as noise.
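The filtering rule can be pictured as a simple gate on the y-axis velocity, as sketched below; the threshold value and the helper name are assumptions for illustration, not the package's actual parameters.

# Sketch of the y-axis velocity gate: tracks whose lateral (y-axis) velocity exceeds
# a threshold are treated as noise. The threshold is an assumed value.
VELOCITY_Y_THRESHOLD_MPS = 7.0


def is_noise(velocity_y: float) -> bool:
    return abs(velocity_y) > VELOCITY_Y_THRESHOLD_MPS


assert not is_noise(2.0)   # plausible lateral velocity, kept
assert is_noise(15.0)      # implausible lateral velocity, filtered as noise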

    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-objects-data/#message-converter","title":"Message converter","text":"
    • radar_tracks_msgs_converter

    This package converts from radar_msgs/msg/RadarTracks into autoware_auto_perception_msgs/msg/DetectedObject with ego vehicle motion compensation and coordinate transform.

    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-objects-data/#object-merger","title":"Object merger","text":"
    • object_merger

    This package can merge 2 topics of autoware_auto_perception_msgs/msg/DetectedObject.

    • simple_object_merger

    This package can simply merge multiple topics of autoware_auto_perception_msgs/msg/DetectedObject. Unlike object_merger, it does not use an association algorithm and can merge with low calculation cost.
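Conceptually, merging without association is just a concatenation of the object arrays under a single header, as in the sketch below (which assumes both inputs are already expressed in the same frame); this is an illustration, not the simple_object_merger implementation.

# Minimal sketch: merge two DetectedObjects messages by concatenating their object arrays.
# Assumes both inputs are already in the same frame (e.g. base_link).
from autoware_auto_perception_msgs.msg import DetectedObjects


def merge(a: DetectedObjects, b: DetectedObjects) -> DetectedObjects:
    merged = DetectedObjects()
    merged.header = a.header  # keep the first message's stamp and frame
    merged.objects = list(a.objects) + list(b.objects)
    return merged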

    • topic_tools

    If a vehicle has one radar, the topic relay tool in topic_tools can be used. An example is shown below.

    <launch>\n<group>\n<push-ros-namespace namespace=\"radar\"/>\n<group>\n<push-ros-namespace namespace=\"front_center\"/>\n<include file=\"$(find-pkg-share example_launch)/launch/ars408.launch.xml\">\n<arg name=\"interface\" value=\"can0\" />\n</include>\n</group>\n<node pkg=\"topic_tools\" exec=\"relay\" name=\"radar_relay\" output=\"log\" args=\"front_center/detected_objects detected_objects\"/>\n</group>\n</launch>\n
    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-objects-data/#appendix","title":"Appendix","text":""},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-objects-data/#discussion","title":"Discussion","text":"

    The radar architecture design is discussed in the following:

    • Discussion 2531
    • Discussion 2532
    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-pointcloud-data/","title":"Radar pointcloud data pre-processing design","text":""},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-pointcloud-data/#radar-pointcloud-data-pre-processing-design","title":"Radar pointcloud data pre-processing design","text":""},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-pointcloud-data/#overview","title":"Overview","text":""},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-pointcloud-data/#pipeline","title":"Pipeline","text":"

    This diagram describes the pre-process pipeline for radar pointcloud.

    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-pointcloud-data/#interface","title":"Interface","text":"
    • Input
      • Radar data from device
      • Twist information of ego vehicle motion
    • Output
      • Dynamic radar pointcloud (ros-perception/radar_msgs/msg/RadarScan.msg)
      • Noise filtered radar pointcloud (sensor_msgs/msg/PointCloud2.msg)
    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-pointcloud-data/#note","title":"Note","text":"
    • In the sensing layer, the radar pre-processing packages filter noise in the ros-perception/radar_msgs/msg/RadarScan.msg message type, in the sensor coordinate frame.
    • For use of radar pointcloud data by LiDAR packages, we would like to propose a converter for creating sensor_msgs/msg/PointCloud2.msg from ros-perception/radar_msgs/msg/RadarScan.msg.
    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-pointcloud-data/#reference-implementations","title":"Reference implementations","text":""},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-pointcloud-data/#data-message-for-radars","title":"Data message for radars","text":"

    Autoware uses radar_msgs/msg/RadarScan.msg as the radar pointcloud data type. For details, please see Data message for radars.

    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-pointcloud-data/#device-driver-for-radars","title":"Device driver for radars","text":"

    Autoware supports ros-perception/radar_msgs/msg/RadarScan.msg and autoware_auto_perception_msgs/msg/TrackedObjects.msg for radar drivers.

    For details, please see Device driver for radars.

    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-pointcloud-data/#basic-noise-filter","title":"Basic noise filter","text":"
    • radar_threshold_filter

    This package removes pointcloud noise, such as points with low amplitude, points at edge angles, or points that are too close, using thresholds. The noise characteristics depend on the radar device and installation location.

    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-pointcloud-data/#filter-to-staticdynamic-pointcloud","title":"Filter to static/dynamic pointcloud","text":"
    • radar_static_pointcloud_filter

    This package extracts static/dynamic radar pointcloud by using doppler velocity and ego motion. The static radar pointcloud can be used for localization like NDT scan matching, and the dynamic radar pointcloud can be used for dynamic object detection.
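The split can be understood as comparing each point's measured Doppler velocity with the Doppler velocity a static point would produce given the ego motion. The sketch below shows that test in a simplified 2D form; the tolerance and the sign convention are assumptions, not the package's implementation.

import math

STATIC_DOPPLER_TOLERANCE_MPS = 0.5  # assumed tolerance


def is_static(point_azimuth_rad: float, measured_doppler_mps: float,
              ego_speed_mps: float) -> bool:
    """A point is static if its Doppler matches what the ego motion alone would explain."""
    # Sign convention of the Doppler value depends on the radar driver; assumed here:
    # negative Doppler means the point is approaching the sensor while the ego moves forward.
    expected_doppler = -ego_speed_mps * math.cos(point_azimuth_rad)
    return abs(measured_doppler_mps - expected_doppler) < STATIC_DOPPLER_TOLERANCE_MPS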

    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-pointcloud-data/#message-converter-from-radarscan-to-pointcloud2","title":"Message converter from RadarScan to Pointcloud2","text":"
    • radar_scan_to_pointcloud2

    For convenient use of radar pointcloud within existing LiDAR packages, we suggest the radar_scan_to_pointcloud2_convertor package for conversion from ros-perception/radar_msgs/msg/RadarScan.msg to sensor_msgs/msg/PointCloud2.msg.

    • message: sensor_msgs/msg/PointCloud2.msg (LiDAR package), ros-perception/radar_msgs/msg/RadarScan.msg (Radar package)
    • coordinate: (x, y, z) (LiDAR package), (r, θ, φ) (Radar package)
    • value: intensity (LiDAR package), amplitude and doppler velocity (Radar package)
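A hedged sketch of such a conversion is shown below: each RadarReturn's spherical coordinates (range, azimuth, elevation) are converted to Cartesian x, y, z, and the amplitude is stored in an intensity field. The field layout and the use of sensor_msgs_py are illustrative choices, not the referenced package's implementation.

# Minimal sketch: convert radar_msgs/msg/RadarScan returns (range, azimuth, elevation,
# amplitude) into a sensor_msgs/msg/PointCloud2 with x, y, z, intensity fields.
import math

from radar_msgs.msg import RadarScan
from sensor_msgs.msg import PointCloud2, PointField
from sensor_msgs_py import point_cloud2


def radar_scan_to_pointcloud2(scan: RadarScan) -> PointCloud2:
    fields = [
        PointField(name=n, offset=4 * i, datatype=PointField.FLOAT32, count=1)
        for i, n in enumerate(("x", "y", "z", "intensity"))
    ]
    points = []
    for r in scan.returns:
        x = r.range * math.cos(r.elevation) * math.cos(r.azimuth)
        y = r.range * math.cos(r.elevation) * math.sin(r.azimuth)
        z = r.range * math.sin(r.elevation)
        points.append((x, y, z, r.amplitude))
    return point_cloud2.create_cloud(scan.header, fields, points)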

    For considered use cases,

    • Use pointcloud_preprocessor for radar scan.
    • Apply obstacle segmentation like ground segmentation to radar points for LiDAR-less (camera + radar) systems.
    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-pointcloud-data/#appendix","title":"Appendix","text":""},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-pointcloud-data/#discussion","title":"Discussion","text":"

    The radar architecture design is discussed in the following:

    • Discussion 2531
    • Discussion 2532
    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/reference-implementations/data-message/","title":"Data message for radars","text":""},{"location":"design/autoware-architecture/sensing/data-types/radar-data/reference-implementations/data-message/#data-message-for-radars","title":"Data message for radars","text":""},{"location":"design/autoware-architecture/sensing/data-types/radar-data/reference-implementations/data-message/#summary","title":"Summary","text":"

In summary, Autoware uses the following radar data types:

    • radar_msgs/msg/RadarScan.msg for radar pointcloud
    • radar_msgs/msg/RadarTracks.msg for radar objects.
    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/reference-implementations/data-message/#radar-data-message-for-pointcloud","title":"Radar data message for pointcloud","text":""},{"location":"design/autoware-architecture/sensing/data-types/radar-data/reference-implementations/data-message/#message-definition","title":"Message definition","text":"
    • ros2/msg/RadarScan.msg
std_msgs/Header header
radar_msgs/RadarReturn[] returns
    • ros2/msg/RadarReturn.msg
# All variables below are relative to the radar's frame of reference.
# This message is not meant to be used alone but as part of a stamped or array message.

float32 range                            # Distance (m) from the sensor to the detected return.
float32 azimuth                          # Angle (in radians) in the azimuth plane between the sensor and the detected return.
#    Positive angles are anticlockwise from the sensor and negative angles clockwise from the sensor as per REP-0103.
float32 elevation                        # Angle (in radians) in the elevation plane between the sensor and the detected return.
#    Negative angles are below the sensor. For 2D radar, this will be 0.
float32 doppler_velocity                 # The doppler speeds (m/s) of the return.
float32 amplitude                        # The amplitude of the of the return (dB)
    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/reference-implementations/data-message/#radar-data-message-for-tracked-objects","title":"Radar data message for tracked objects","text":""},{"location":"design/autoware-architecture/sensing/data-types/radar-data/reference-implementations/data-message/#message-definition_1","title":"Message definition","text":"
    • radar_msgs/msg/RadarTrack.msg
# Object classifications (Additional vendor-specific classifications are permitted starting from 32000 eg. Car)
uint16 NO_CLASSIFICATION=0
uint16 STATIC=1
uint16 DYNAMIC=2

unique_identifier_msgs/UUID uuid            # A unique ID of the object generated by the radar.
# Note: The z component of these fields is ignored for 2D tracking.
geometry_msgs/Point position                # x, y, z coordinates of the centroid of the object being tracked.
geometry_msgs/Vector3 velocity              # The velocity of the object in each spatial dimension.
geometry_msgs/Vector3 acceleration          # The acceleration of the object in each spatial dimension.
geometry_msgs/Vector3 size                  # The object size as represented by the radar sensor eg. length, width, height OR the diameter of an ellipsoid in the x, y, z, dimensions
# and is from the sensor frame's view.
uint16 classification                       # An optional classification of the object (see above)
float32[6] position_covariance              # Upper-triangle covariance about the x, y, z axes
float32[6] velocity_covariance              # Upper-triangle covariance about the x, y, z axes
float32[6] acceleration_covariance          # Upper-triangle covariance about the x, y, z axes
float32[6] size_covariance                  # Upper-triangle covariance about the x, y, z axes
    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/reference-implementations/data-message/#message-usage-for-radartracks","title":"Message usage for RadarTracks","text":"
    • Object classifications

In the object classifications of radar_msgs/msg/RadarTrack.msg, additional classification labels can be defined using values starting from 32000.

To align with Autoware's label definitions, Autoware defines the object classifications for RadarTracks.msg as below.

uint16 UNKNOWN = 32000;
uint16 CAR = 32001;
uint16 TRUCK = 32002;
uint16 BUS = 32003;
uint16 TRAILER = 32004;
uint16 MOTORCYCLE = 32005;
uint16 BICYCLE = 32006;
uint16 PEDESTRIAN = 32007;

For the detailed implementation, please see radar_tracks_msgs_converter.
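As a hedged sketch of how such a mapping might be applied downstream (the classification values come from the list above; the label strings and lookup helper are placeholders for illustration, not the converter's actual code):

```python
# Autoware's radar classification constants for RadarTracks.msg (values from
# the list above). Values below 32000 are the generic radar_msgs labels
# (NO_CLASSIFICATION, STATIC, DYNAMIC) and are treated as unknown here.
RADAR_TO_AUTOWARE_LABEL = {
    32000: "UNKNOWN",
    32001: "CAR",
    32002: "TRUCK",
    32003: "BUS",
    32004: "TRAILER",
    32005: "MOTORCYCLE",
    32006: "BICYCLE",
    32007: "PEDESTRIAN",
}

def to_autoware_label(radar_classification: int) -> str:
    """Map a RadarTrack classification value to an Autoware label name,
    falling back to UNKNOWN for generic or unrecognized values."""
    return RADAR_TO_AUTOWARE_LABEL.get(radar_classification, "UNKNOWN")
```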

    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/reference-implementations/data-message/#note","title":"Note","text":""},{"location":"design/autoware-architecture/sensing/data-types/radar-data/reference-implementations/data-message/#survey-for-radar-message","title":"Survey for radar message","text":"

Depending on the manufacturer and its intended purpose, each sensor may output raw or post-processed data. This section introduces a survey of previously developed messaging systems in the open-source community. Although there are many kinds of outputs, radars mainly adopt two output types: pointcloud and objects. Related discussions for the message definition in ros-perception are PR #1, PR #2, and PR #3. Existing open-source software for radar is summarized in these PRs.

    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/reference-implementations/device-driver/","title":"Device driver for radars","text":""},{"location":"design/autoware-architecture/sensing/data-types/radar-data/reference-implementations/device-driver/#device-driver-for-radars","title":"Device driver for radars","text":""},{"location":"design/autoware-architecture/sensing/data-types/radar-data/reference-implementations/device-driver/#interface-for-radar-devices","title":"Interface for radar devices","text":"

    The radar driver converts the communication data from radars to ROS 2 topics. The following communication types are considered:

    • CAN
    • CAN-FD
    • Ethernet

    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/reference-implementations/device-driver/#software-interface","title":"Software Interface","text":"

Autoware supports ros-perception/radar_msgs/msg/RadarScan.msg and autoware_auto_perception_msgs/msg/TrackedObjects.msg for radar drivers.

The ROS driver receives the following data from the radar device:

    • Scan (pointcloud)
    • Tracked objects
    • Diagnostics results

The ROS driver sends the following data to the radar device:

    • Time synchronization
• Ego vehicle status, which is often needed for tracked object calculation
    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/reference-implementations/device-driver/#radar-driver-example","title":"Radar driver example","text":"
    • ARS408 driver
    "},{"location":"design/autoware-architecture/vehicle/","title":"Vehicle Interface design","text":""},{"location":"design/autoware-architecture/vehicle/#vehicle-interface-design","title":"Vehicle Interface design","text":""},{"location":"design/autoware-architecture/vehicle/#abstract","title":"Abstract","text":"

The Vehicle Interface component provides an interface between Autoware and a vehicle that passes control signals to the vehicle's drive-by-wire system and receives vehicle information that is passed back to Autoware.

    "},{"location":"design/autoware-architecture/vehicle/#1-requirements","title":"1. Requirements","text":"

    Goals:

    • The Vehicle Interface component converts Autoware commands to a vehicle-specific format and converts vehicle status in a vehicle-specific format to Autoware messages.
    • The interface between Autoware and the Vehicle component is abstracted and independent of hardware.
    • The interface is extensible such that additional vehicle-specific commands can be easily added. For example, headlight control.

    Non-goals:

• Accuracy of responses from the vehicle will not be defined, but example accuracy requirements from reference designs are provided.
    • Response speed will not be defined.
    "},{"location":"design/autoware-architecture/vehicle/#2-architecture","title":"2. Architecture","text":"

    The Vehicle Interface component consists of the following components:

• A Raw Vehicle Command Converter component that will pass through vehicle commands from the Control component if velocity/acceleration control is supported by the drive-by-wire system. Otherwise, the Control commands will be modified according to the control method (eg: converting a target acceleration from the Control component to a vehicle specific accel/brake pedal value through the use of an acceleration map, as sketched after this list)
    • A Vehicle Interface component (vehicle specific) that acts as an interface between Autoware and a vehicle to communicate control signals and to obtain information about the vehicle (steer output, tyre angle etc)
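The acceleration-map conversion mentioned above can be pictured as a 2D lookup: given the current velocity and a target acceleration, interpolate the pedal value from a calibration table. The sketch below is illustrative only; the table values, axes, and helper name are assumptions, not the Raw Vehicle Command Converter implementation.

```python
import numpy as np

# Hypothetical calibration table: rows = velocity [m/s], columns = accel pedal
# position [0..1], cell value = resulting acceleration [m/s^2].
VELOCITIES = np.array([0.0, 5.0, 10.0, 15.0])
PEDALS = np.array([0.0, 0.2, 0.4, 0.6])
ACCEL_MAP = np.array([
    [0.0, 0.8, 1.6, 2.4],
    [0.0, 0.6, 1.3, 2.0],
    [0.0, 0.5, 1.1, 1.7],
    [0.0, 0.4, 0.9, 1.4],
])

def pedal_from_target_accel(velocity: float, target_accel: float) -> float:
    """Invert the acceleration map: find the pedal value whose mapped
    acceleration at the current velocity is closest to the target."""
    # Interpolate the map row for the current velocity.
    row = np.array([
        np.interp(velocity, VELOCITIES, ACCEL_MAP[:, j]) for j in range(len(PEDALS))
    ])
    # Interpolate the pedal over the (monotonically increasing) acceleration row.
    return float(np.interp(target_accel, row, PEDALS))
```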

    Each component contains static nodes of Autoware, while each module can be dynamically loaded and unloaded (corresponding to C++ classes). The mechanism of the Vehicle Interface component is depicted by the following figures:

    "},{"location":"design/autoware-architecture/vehicle/#3-features","title":"3. Features","text":"

    The Vehicle Interface component can provide the following features in functionality and capability:

    • Basic functions

      • Converting Autoware control commands to vehicle specific command
      • Converting vehicle specific status information (velocity, steering) to Autoware status message
    • Diagnostics
      • List available features
      • Provide a warning if the Control component tries to use a feature that is not available in the Vehicle Interface component

    Additional functionality and capability features may be added, depending on the vehicle hardware. Some example features are listed below:

    • Safety features
      • Disengage autonomous driving via manual intervention.
        • This can be done through the use of an emergency disengage button, or by a safety driver manually turning the steering wheel or pressing the brake
    • Optional controls
      • Turn indicator
      • Handbrake
      • Headlights
      • Hazard lights
      • Doors
      • Horn
      • Wipers
    "},{"location":"design/autoware-architecture/vehicle/#4-interface-and-data-structure","title":"4. Interface and Data Structure","text":"

    The interface of the Vehicle Interface component for other components running in the same process space to access the functionality and capability of the Vehicle Interface component is defined as follows.

    From Control

    • Actuation Command
      • target acceleration, braking, and steering angle

    From Planning

    • Vehicle Specific Commands (optional and a separate message for each type)
      • Shift
      • Door
      • Wiper
      • etc

    From the vehicle

    • Vehicle status messages
      • Vehicle-specific format messages for conversion into Autoware-specific format messages
        • Velocity status
        • Steering status (optional)
        • Shift status (optional)
        • Turn signal status (optional)
        • Actuation status (optional)

    The output interface of the Vehicle Interface component:

    • Vehicle control messages to the vehicle
      • Control signals to drive the vehicle
      • Depends on the vehicle type/protocol, but should include steering and velocity commands at a minimum
    • Vehicle status messages to Autoware
    • Actuation Status
      • Acceleration, brake, steering status
    • Vehicle odometry (output to Localization)
      • Vehicle twist information
    • Control mode
      • Information about whether the vehicle is under autonomous control or manual control
    • Shift status (optional)
      • Vehicle shift status
    • Turn signal status (optional)
      • Vehicle turn signal status

    The data structure for the internal representation of semantics for the objects and trajectories used in the Vehicle Interface component is defined as follows:

    "},{"location":"design/autoware-architecture/vehicle/#5-concerns-assumptions-and-limitations","title":"5. Concerns, Assumptions, and Limitations","text":"

    Concerns

    • Architectural trade-offs and scalability

    Assumptions

    -

    Limitations

    "},{"location":"design/autoware-architecture/vehicle/#6-examples-of-accuracy-requirements-by-odd","title":"6. Examples of accuracy requirements by ODD","text":""},{"location":"design/autoware-concepts/","title":"Autoware concepts","text":""},{"location":"design/autoware-concepts/#autoware-concepts","title":"Autoware concepts","text":"

Autoware is the world's first open-source software for autonomous driving systems. Autoware provides value for two groups: the technology developers of autonomous driving systems, who can create new components based on Autoware, and the service operators of autonomous driving systems, who can select appropriate technology components from Autoware. This is enabled by the microautonomy architecture that modularizes its software stack into the core and universe subsystems (modules).

    "},{"location":"design/autoware-concepts/#microautonomy-architecture","title":"Microautonomy architecture","text":"

Autoware uses a pipeline architecture to enable the development of autonomous driving systems. The pipeline architecture used in Autoware consists of components similar to a three-layer architecture, and they run in parallel. There are two main modules: the Core and the Universe. The components in these modules are designed to be extensible and reusable, and we call this the microautonomy architecture.

    "},{"location":"design/autoware-concepts/#the-core-module","title":"The Core module","text":"

The Core module contains basic runtimes and technology components that satisfy the basic functionality and capability of sensing, computing, and actuation required for autonomous driving systems. AWF develops and maintains the Core module with its architects and leading members through its working groups. Anyone can contribute to the Core, but the pull request (PR) acceptance criteria are stricter than for the Universe.

    "},{"location":"design/autoware-concepts/#the-universe-module","title":"The Universe module","text":"

    The Universe modules are extensions to the Core module that can be provided by the technology developers to enhance the functionality and capability of sensing, computing, and actuation. AWF provides the base Universe module to extend from. A key feature of the microautonomy architecture is that the Universe modules can be contributed to by any organization and individual. That is, you can even create your Universe and make it available for the Autoware community and ecosystem. AWF is responsible for quality control of the Universe modules through their development process. As a result, there are multiple types of the Universe modules - some are verified and validated by AWF and others are not. It is up to the users of Autoware which Universe modules are selected and integrated to build their end applications.

    "},{"location":"design/autoware-concepts/#interface-design","title":"Interface design","text":"

The interface design is the most essential piece of the microautonomy architecture and is classified into internal and external interfaces. The component interface is designed for components in a Universe module to communicate with those in other modules, including the Core module, within Autoware internally. The AD (Autonomous Driving) API, on the other hand, is designed for applications of Autoware to access the technology components in the Core and Universe modules of Autoware externally. By designing solid interfaces, the microautonomy architecture is made possible with AWF's partners, and at the same time is made feasible for those partners.

    "},{"location":"design/autoware-concepts/#challenges","title":"Challenges","text":"

    A grand challenge of the microautonomy architecture is to achieve real-time capability, which guarantees all the technology components activated in the system to predictably meet timing constraints (given deadlines). In general, it is difficult, if not impossible, to tightly estimate the worst-case execution times (WCETs) of components.

In addition, it is also difficult, if not impossible, to tightly estimate the end-to-end latency of components connected by a DAG. Autonomous driving systems based on the microautonomy architecture, therefore, must be designed to be fail-safe but not never-fail. We accept that the timing constraints may be violated (the given deadlines may be missed) as long as the overrun is taken into account. The overrun handlers are two-fold: (i) platform-defined and (ii) user-defined. The platform-defined handler is implemented as part of the platform by default, while the user-defined handler can override it or add a new handler to the system. This is what we call "fail-safe" on a timely basis.

    "},{"location":"design/autoware-concepts/#requirements-and-roadmap","title":"Requirements and roadmap","text":"

    Goals:

    • All open-source
    • Use case driven
    • Real-time (predictable) framework with overrun handling
    • Code quality
    "},{"location":"design/autoware-concepts/difference-from-ai-and-auto/","title":"How is Autoware Core/Universe different from Autoware.AI and Autoware.Auto?","text":""},{"location":"design/autoware-concepts/difference-from-ai-and-auto/#how-is-autoware-coreuniverse-different-from-autowareai-and-autowareauto","title":"How is Autoware Core/Universe different from Autoware.AI and Autoware.Auto?","text":"

    Autoware is the world's first \"all-in-one\" open-source software for self-driving vehicles. Since it was first released in 2015, there have been multiple releases made with differing underlying concepts, each one aimed at improving the software.

    "},{"location":"design/autoware-concepts/difference-from-ai-and-auto/#autowareai","title":"Autoware.AI","text":"

    Autoware.AI is the first distribution of Autoware that was released based on ROS 1. The repository contains a variety of packages covering different aspects of autonomous driving technologies - sensing, actuation, localization, mapping, perception and planning.

    While it was successful in attracting many developers and contributions, it was difficult to improve Autoware.AI's capabilities for a number of reasons:

    • A lack of concrete architecture design leading to a lot of built-up technical debt, such as tight coupling between modules and unclear module responsibility.
    • Differing coding standards for each package, with very low test coverage.

    Furthermore, there was no clear definition of the conditions under which an Autoware-enabled autonomous vehicle could operate, nor of the use cases or situations supported (eg: the ability to overtake a stationary vehicle).

    From the lessons learned from Autoware.AI development, a different development process was taken for Autoware.Auto to develop a ROS 2 version of Autoware.

    Warning

    Autoware.AI is currently in maintenance mode and will reach end-of-life at the end of 2022.

    "},{"location":"design/autoware-concepts/difference-from-ai-and-auto/#autowareauto","title":"Autoware.Auto","text":"

    Autoware.Auto is the second distribution of Autoware that was released based on ROS 2. As part of the transition to ROS 2, it was decided to avoid simply porting Autoware.AI from ROS 1 to ROS 2. Instead, the codebase was rewritten from scratch with proper engineering practices, including defining target use cases and ODDs (eg: Autonomous Valet Parking [AVP], Cargo Delivery, etc.), designing a proper architecture, writing design documents and test code.

Autoware.Auto development seemed to work fine initially, but after completing the AVP and Cargo Delivery ODD projects, we started to see the following issues:

    • The barrier to new engineers was too high.
      • A lot of work was required to merge new features into Autoware.Auto, and so it was difficult for researchers and students to contribute to development.
      • As a consequence, most Autoware.Auto developers were from companies in the Autoware Foundation and so there were very few people who were able to add state-of-the-art features from research papers.
    • Making large-scale architecture changes was too difficult.
      • To try out experimental architecture, there was a very large overhead involved in keeping the main branch stable whilst also making sure that every change satisfied the continuous integration requirements.
    "},{"location":"design/autoware-concepts/difference-from-ai-and-auto/#autoware-coreuniverse","title":"Autoware Core/Universe","text":"

    In order to address the issues with Autoware.Auto development, the Autoware Foundation decided to create a new architecture called Autoware Core/Universe.

    Autoware Core carries over the original policy of Autoware.Auto to be a stable and well-tested codebase. Alongside Autoware Core is a new concept called Autoware Universe, which acts as an extension of Autoware Core with the following benefits:

    • Users can easily replace a Core component with a Universe equivalent in order to use more advanced features, such as a new Localization or Perception algorithm.
    • Code quality requirements for Universe are more relaxed to make it easier for new developers, students and researchers to contribute, but will still be stricter than the requirements for Autoware.AI.
    • Any advanced features added to Universe that are useful to the wider Autoware community will be reviewed and considered for potential inclusion in the main Autoware Core codebase.

    This way, the primary requirement of having a stable and safe autonomous driving system can be achieved, whilst simultaneously enabling access to state-of-the-art features created by third-party contributors. For more details about the design of Autoware Core/Universe, refer to the Autoware concepts documentation page.

    "},{"location":"design/autoware-interfaces/","title":"Autoware interface design","text":""},{"location":"design/autoware-interfaces/#autoware-interface-design","title":"Autoware interface design","text":""},{"location":"design/autoware-interfaces/#abstract","title":"Abstract","text":"

    Autoware defines three categories of interfaces. The first one is Autoware AD API for operating the vehicle from outside the autonomous driving system such as the Fleet Management System (FMS) and Human Machine Interface (HMI) for operators or passengers. The second one is Autoware component interface for components to communicate with each other. The last one is the local interface used inside the component.

    "},{"location":"design/autoware-interfaces/#concept","title":"Concept","text":"
    • Applications can operate multiple and various vehicles in a common way.

    • Applications are not affected by version updates and implementation changes.

    • Developers only need to know the interface to add new features and hardware.

    "},{"location":"design/autoware-interfaces/#requirements","title":"Requirements","text":"

    Goals:

    • AD API provides functionality to create the following applications:
      • Drive the vehicle on the route or drive to the requested positions in order.
      • Operate vehicle behavior such as starting and stopping.
      • Display or announce the vehicle status to operators, passengers, and people around.
      • Control vehicle devices such as doors.
      • Monitor the vehicle or drive it manually.
    • AD API provides stable and long-term specifications. This enables unified access to all vehicles.
    • AD API hides differences in version and implementation and absorbs the impact of changes.
    • AD API has a default implementation and can be applied to some simple ODDs with options.
    • The AD API implementation is extensible with the third-party components as long as it meets the specifications.
    • The component interface provides stable and medium-term specifications. This makes it easier to add components.
    • The component interface clarifies the public and private parts of a component and improves maintainability.
    • The component interface is extensible with the third-party design to improve the sub-components' reusability.

    Non-goals:

    • AD API does not cover security. Use it with other reliable methods.
    • The component interface is just a specification, it does not include an implementation.
    "},{"location":"design/autoware-interfaces/#architecture","title":"Architecture","text":"

    The components of Autoware are connected via the component interface. Each component uses the interface to provide functionality and to access other components. AD API implementation is also a component. Since the functional elements required for AD API are defined as the component interface, other components do not need to consider AD API directly. Tools for evaluation and debugging, such as simulators, access both AD API and the component interface.

The component interface has a hierarchical specification. The top-level architecture consists of some components. Each component has some options for the next-level architecture, and developers select one of them when implementing the component. The simplest next-level architecture is monolithic. This is an all-in-one, black-box implementation, and is suitable for small-group development, prototyping, and very complex functions. The others are arbitrary architectures consisting of sub-components and have advantages for large-group development. A sub-component can be combined with others that adopt the same architecture. Third parties can define and publish their own architecture and interface for open-source development. It is desirable to propose them for standardization once they are sufficiently evaluated.

    "},{"location":"design/autoware-interfaces/#features","title":"Features","text":""},{"location":"design/autoware-interfaces/#communication-methods","title":"Communication methods","text":"

As shown in the table below, interfaces are classified into four communication methods to define their behavior. Function Call is a request-response communication and is used for processing that requires immediate results. The others are publish-subscribe communication. Notification is used to process data that changes with some event, typically a callback. Streams handle continuously changing data. Reliable Stream expects all data to arrive without loss, while Realtime Stream expects the latest data to arrive with low delay.

| Communication Method | ROS Implementation | Optional Implementation |
| --- | --- | --- |
| Function Call | Service | HTTP |
| Notification | Topic (reliable, transient_local) | MQTT (QoS=2, retain) |
| Reliable Stream | Topic (reliable, volatile) | MQTT (QoS=2) |
| Realtime Stream | Topic (best_effort, volatile) | MQTT (QoS=0) |

These methods are provided as services or topics of ROS since Autoware is developed using ROS and mainly communicates with its packages. On the other hand, FMS and HMI are often implemented without ROS, so Autoware is also expected to communicate with applications that do not use ROS. It is wasteful for each of these applications to have an adapter for Autoware, and a more suitable means of communication is required. HTTP and MQTT are suggested as additional options because these protocols are widely used and can substitute for the behavior of services and topics. In that case, text formats such as JSON, where field names are repeated in an array of objects, are inefficient, and serialization needs to be considered.

    "},{"location":"design/autoware-interfaces/#naming-convention","title":"Naming convention","text":"

    The name of the interface must be /<component name>/api/<interface name>, where <component name> is the name of the component. For an AD API component, omit this part and start with /api. The <interface name> is an arbitrary string separated by slashes. Note that this rule causes a restriction that the namespace api must not be used as a name other than AD API and the component interface.

    The following are examples of correct interface names for AD API and the component interface:

    • /api/autoware/state
    • /api/autoware/engage
    • /planning/api/route/set
    • /vehicle/api/status

    The following are examples of incorrect interface names for AD API and the component interface:

    • /ad_api/autoware/state
    • /autoware/engage
    • /planning/route/set/api
    • /vehicle/my_api/status
    "},{"location":"design/autoware-interfaces/#logging","title":"Logging","text":"

It is recommended to log the interface for analysis of vehicle behavior. If logging is needed, rosbag is available for topics; for services, use the logger in rclcpp or rclpy. Typically, create a wrapper for the service and client classes that logs when a service is called.
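A minimal sketch of such a wrapper in rclpy could look like the following. This is illustrative only; the class name and logging format are assumptions, and any concrete service type would be supplied by the caller.

```python
from rclpy.node import Node

class LoggingServiceClient:
    """Thin wrapper around an rclpy client that logs every request and
    response, so service traffic (which rosbag does not capture) remains
    analyzable alongside recorded topics."""

    def __init__(self, node: Node, srv_type, srv_name: str):
        self._node = node
        self._client = node.create_client(srv_type, srv_name)
        self._srv_name = srv_name

    def call_async(self, request):
        # Log the outgoing request, then log the response when it arrives.
        self._node.get_logger().info(f"calling {self._srv_name}: {request}")
        future = self._client.call_async(request)
        future.add_done_callback(
            lambda f: self._node.get_logger().info(
                f"response from {self._srv_name}: {f.result()}"
            )
        )
        return future
```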

    "},{"location":"design/autoware-interfaces/#restrictions","title":"Restrictions","text":"

    For each API, consider the restrictions such as following and describe them if necessary.

    Services:

    • response time
    • pre-condition
    • post-condition
    • execution order
    • concurrent execution

    Topics:

    • recommended delay range
    • maximum delay
    • recommended frequency range
    • minimum frequency
    • default frequency
    "},{"location":"design/autoware-interfaces/#data-structure","title":"Data structure","text":""},{"location":"design/autoware-interfaces/#data-type-definition","title":"Data type definition","text":"

    Do not share the types in AD API unless they are obviously the same to avoid changes in one API affecting another. Also, implementation-dependent types, including the component interface, should not be used in AD API for the same reason. Use the type in AD API in implementation, or create the same type and copy the data to convert the type.

    "},{"location":"design/autoware-interfaces/#constants-and-enumeration","title":"Constants and enumeration","text":"

Since ROS doesn't support enumerations, use constants instead. The default value of a type, such as zero or an empty string, should not be used to detect that a variable is unassigned. Instead, assign it a dedicated name to indicate that it is undefined. If one type has multiple enumerations, comment on the correspondence between constants and variables. Do not use enumeration values directly, as assignments are subject to change when the version is updated.

    "},{"location":"design/autoware-interfaces/#time-stamp","title":"Time stamp","text":"

Clarify what the timestamp indicates, for example send time, measurement time, or update time. Consider having multiple timestamps if necessary. Use std_msgs/msg/Header when using the ROS transform. Also consider whether the header is common to all data, independent for each data item, or whether an additional timestamp is required.

    "},{"location":"design/autoware-interfaces/#request-header","title":"Request header","text":"

    Currently, there is no required header.

    "},{"location":"design/autoware-interfaces/#response-status","title":"Response status","text":"

    The interfaces whose communication method is Function Call use a common response status to unify the error format. These interfaces should include a variable of ResponseStatus with the name status in the response. See autoware_adapi_v1_msgs/msg/ResponseStatus for details.

    "},{"location":"design/autoware-interfaces/#concerns-assumptions-and-limitations","title":"Concerns, assumptions and limitations","text":"
    • The applications use the version information provided by AD API to check compatibility. Unknown versions are also treated as available as long as the major versions match (excluding major version 0). Compatibility between AD API and the component interface is assumed to be maintained by the version management system.
    • If an unintended behavior of AD API is detected, the application should take appropriate action. Autoware tries to keep working as long as possible, but it is not guaranteed to be safe. Safety should be considered for the entire system, including the applications.
    "},{"location":"design/autoware-interfaces/ad-api/","title":"Autoware AD API","text":""},{"location":"design/autoware-interfaces/ad-api/#autoware-ad-api","title":"Autoware AD API","text":""},{"location":"design/autoware-interfaces/ad-api/#overview","title":"Overview","text":"

    Autoware AD API is the interface for operating the vehicle from outside the autonomous driving system. See here for the overall interface design of Autoware.

    "},{"location":"design/autoware-interfaces/ad-api/#user-stories","title":"User stories","text":"

    The user stories are service scenarios that AD API assumes. AD API is designed based on these scenarios. Each scenario is realized by a combination of use cases described later. If there are scenarios that cannot be covered, please discuss adding a user story.

    • Bus service
    • Taxi service
    "},{"location":"design/autoware-interfaces/ad-api/#use-cases","title":"Use cases","text":"

    Use cases are partial scenarios derived from the user story and generically designed. Service providers can combine these use cases to define user stories and check if AD API can be applied to their own scenarios.

    • Launch and terminate
    • Initialize the pose
    • Change the operation mode
    • Drive to the designated position
    • Get on and get off
    • Vehicle monitoring
    • Vehicle operation
    • System monitoring
    • Manual control
    "},{"location":"design/autoware-interfaces/ad-api/#features","title":"Features","text":"
    • Interface
    • Operation Mode
    • Routing
    • Localization
    • Motion
    • Planning
    • Perception
    • Fail-safe
    • Vehicle status
    • Vehicle doors
    • Cooperation
    • Heartbeat
    • Diagnostics
    • Manual control
    "},{"location":"design/autoware-interfaces/ad-api/release/","title":"Release notes","text":""},{"location":"design/autoware-interfaces/ad-api/release/#release-notes","title":"Release notes","text":""},{"location":"design/autoware-interfaces/ad-api/release/#v160-not-released","title":"v1.6.0 (Not released)","text":"
    • [Change] Fix communication method of /api/vehicle/status
    • [Change] Add options to the routing API
    • [Change] Add restrictions to /api/routing/clear_route
    • [Change] Add restrictions to /api/vehicle/doors/command
    "},{"location":"design/autoware-interfaces/ad-api/release/#v150","title":"v1.5.0","text":"
    • [New] Add /api/routing/change_route_points
    • [New] Add /api/routing/change_route
    "},{"location":"design/autoware-interfaces/ad-api/release/#v140","title":"v1.4.0","text":"
    • [New] Add /api/vehicle/status
    "},{"location":"design/autoware-interfaces/ad-api/release/#v130","title":"v1.3.0","text":"
    • [New] Add heartbeat API
    • [New] Add diagnostics API
    "},{"location":"design/autoware-interfaces/ad-api/release/#v120","title":"v1.2.0","text":"
    • [New] Add vehicle doors API
    • [Change] Add pull-over constants to /api/fail_safe/mrm_state
    "},{"location":"design/autoware-interfaces/ad-api/release/#v110","title":"v1.1.0","text":"
    • [New] Add /api/fail_safe/mrm_state
    • [New] Add /api/vehicle/dimensions
    • [New] Add /api/vehicle/kinematics
    • [Change] Add options to the routing API
    "},{"location":"design/autoware-interfaces/ad-api/release/#v100","title":"v1.0.0","text":"
    • [New] Add interface API
    • [New] Add localization API
    • [New] Add routing API
    • [New] Add operation mode API
    "},{"location":"design/autoware-interfaces/ad-api/features/cooperation/","title":"Cooperation","text":""},{"location":"design/autoware-interfaces/ad-api/features/cooperation/#cooperation","title":"Cooperation","text":""},{"location":"design/autoware-interfaces/ad-api/features/cooperation/#related-api","title":"Related API","text":"
    • /api/planning/velocity_factors
    • /api/planning/steering_factors
    • /api/planning/cooperation/set_commands
    • /api/planning/cooperation/set_policies
    • /api/planning/cooperation/get_policies
    "},{"location":"design/autoware-interfaces/ad-api/features/cooperation/#description","title":"Description","text":"

    Request to cooperate (RTC) is a feature that enables a human operator to support the decision in autonomous driving mode. Autoware usually drives the vehicle using its own decisions, but the operator may prefer to make their decisions in experiments and complex situations.

The planning component manages each situation that requires a decision as a scene. Each scene has an ID that doesn't change until the scene is completed or canceled. The operator can override the decision of the target scene using this ID. In practice, the user interface application can hide the specification of the ID and provide an abstracted interface to the operator.

For example, in the situation in the diagram below, the vehicle is expected to make two lane changes and turn left at the intersection. The planning component therefore generates three scene instances, one for each required action, and each scene instance waits for its decision to be made, in this case "changing or keeping the lane" and "turning left or waiting at the intersection". Here Autoware decides not to change lanes a second time due to the obstacle, so the vehicle will stop there. However, the operator can override that decision through the RTC function and force the lane change so that the vehicle can reach its goal. Using RTC, the operator can override these decisions to continue driving the vehicle to the goal.

    "},{"location":"design/autoware-interfaces/ad-api/features/cooperation/#architecture","title":"Architecture","text":"

Modules that support RTC have the operator decision and cooperation policy in addition to the module decision, as shown below. These modules use the merged decision that is determined by these values when planning vehicle behavior. See the decisions section for details of these values. The cooperation policy is used when there is no operator decision and has a default value set by the system settings. If a module supports RTC, this information is available in the velocity factors or steering factors as the cooperation status.

    "},{"location":"design/autoware-interfaces/ad-api/features/cooperation/#sequence","title":"Sequence","text":"

This is an example sequence that overrides the scene decision to force a lane change. It is for the second scene in the diagram in the architecture section. Here, let's assume the cooperation policy is set to optional; see the decisions section described later for details.

    1. A planning module creates a scene instance with unique ID when approaching a place where a lane change is needed.
    2. The scene instance generates the module decision from the current situation. In this case, the module decision is not to do a lane change due to the obstacle.
    3. The scene instance generates the merged decision. At this point, there is no operator decision yet, so it is based on the module decision.
    4. The scene instance plans the vehicle to keep the lane according to the merged decision.
    5. The scene instance sends a cooperation status.
    6. The operator receives the cooperation status.
    7. The operator sends a cooperation command to override the module decision and to do a lane change.
8. The scene instance receives the cooperation command and updates the operator decision.
    9. The scene instance updates the module decision from the current situation.
    10. The scene instance updates the merged decision. It is based on the operator decision received.
    11. The scene instance plans the vehicle to change the lane according to the merged decision.
    "},{"location":"design/autoware-interfaces/ad-api/features/cooperation/#decisions","title":"Decisions","text":"

    The merged decision is determined by the module decision, operator decision, and cooperation policy, each of which takes the value shown in the table below.

| Status | Values |
| --- | --- |
| merged decision | deactivate, activate |
| module decision | deactivate, activate |
| operator decision | deactivate, activate, autonomous, none |
| cooperation policy | required, optional |

    The meanings of these values are as follows. Note that the cooperation policy is common per module, so changing it will affect all scenes in the same module.

| Value | Description |
| --- | --- |
| deactivate | An operator/module decision to plan vehicle behavior with priority on safety. |
| activate | An operator/module decision to plan vehicle behavior with priority on driving. |
| autonomous | An operator decision that follows the module decision. |
| none | An initial value for the operator decision, indicating that there is no operator decision yet. |
| required | A policy that requires the operator decision to continue driving. |
| optional | A policy that does not require the operator decision to continue driving. |

    The following flow is how the merged decision is determined.
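Reading the tables above, the merge can be sketched as follows. This is illustrative pseudologic derived from the value tables, not the planning module's actual code.

```python
def merged_decision(module_decision: str, operator_decision: str, policy: str) -> str:
    """Determine the merged decision ("activate" or "deactivate") from the
    module decision, the operator decision, and the cooperation policy."""
    if operator_decision in ("activate", "deactivate"):
        # An explicit operator decision always takes precedence.
        return operator_decision
    if operator_decision == "autonomous":
        # The operator chose to follow the module decision.
        return module_decision
    # operator_decision == "none": fall back to the cooperation policy.
    if policy == "required":
        # An operator decision is required to continue driving, so stay on the safe side.
        return "deactivate"
    # policy == "optional": the module decision is used as-is.
    return module_decision
```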

    "},{"location":"design/autoware-interfaces/ad-api/features/cooperation/#examples","title":"Examples","text":"

    This is an example of cooperation for lane change module. The behaviors by the combination of decisions are as follows.

| Operator decision | Policy | Module decision | Description |
| --- | --- | --- | --- |
| deactivate | - | - | The operator instructs to keep the lane regardless of the module decision, so the vehicle keeps the lane by the operator decision. |
| activate | - | - | The operator instructs to change lanes regardless of the module decision, so the vehicle changes lanes by the operator decision. |
| autonomous | - | deactivate | The operator instructs to follow the module decision, so the vehicle keeps the lane by the module decision. |
| autonomous | - | activate | The operator instructs to follow the module decision, so the vehicle changes lanes by the module decision. |
| none | required | - | The required policy is used because there is no operator instruction, so the vehicle keeps the lane by the cooperation policy. |
| none | optional | deactivate | The optional policy is used because there is no operator instruction, so the vehicle keeps the lane by the module decision. |
| none | optional | activate | The optional policy is used because there is no operator instruction, so the vehicle changes lanes by the module decision. |

"},{"location":"design/autoware-interfaces/ad-api/features/diagnostics/","title":"Diagnostics","text":""},{"location":"design/autoware-interfaces/ad-api/features/diagnostics/#diagnostics","title":"Diagnostics","text":""},{"location":"design/autoware-interfaces/ad-api/features/diagnostics/#related-api","title":"Related API","text":"
    • /api/system/diagnostics/struct
    • /api/system/diagnostics/status
    "},{"location":"design/autoware-interfaces/ad-api/features/diagnostics/#description","title":"Description","text":"

    This API provides a diagnostic graph consisting of error levels for functional units of Autoware. The system groups functions into arbitrary units depending on configuration and diagnoses their error levels. Each functional unit has dependencies, so the whole looks like a fault tree analysis (FTA). In practice, it becomes a directed acyclic graph (DAG) because multiple parents may share the same child. Below is an example of the diagnostics provided by this API. The path in the diagram is an arbitrary string that describes the functional unit, and the level is its error level. For error level, the same value as diagnostic_msgs/msg/DiagnosticStatus is used.

    The diagnostics data has static and dynamic parts, so the API provides these separately for efficiency. Below is an example of a message that corresponds to the above diagram. The static part of the diagnostic is published only once as the DiagGraphStruct that contains nodes and links. The links specify dependencies between nodes by index into an array of nodes. The dynamic part of the diagnostic is published periodically as DiagGraphStatus. The status has an array of nodes of the same length as the struct, with the same index representing the same functional unit.
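Schematically, a consumer joins the two parts by index. The field names below are placeholders for illustration only, not the actual DiagGraphStruct/DiagGraphStatus message definitions.

```python
from dataclasses import dataclass

@dataclass
class StructNode:   # static part: published once
    path: str       # functional unit name, e.g. "/autoware/perception" (hypothetical)

@dataclass
class StatusNode:   # dynamic part: published periodically
    level: int      # same semantics as diagnostic_msgs/msg/DiagnosticStatus levels

def current_levels(struct_nodes: list[StructNode],
                   status_nodes: list[StatusNode]) -> dict[str, int]:
    """Join the static and dynamic parts of the diagnostic graph by index:
    the i-th status node describes the i-th struct node."""
    return {s.path: st.level for s, st in zip(struct_nodes, status_nodes)}
```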

    "},{"location":"design/autoware-interfaces/ad-api/features/fail-safe/","title":"Fail-safe","text":""},{"location":"design/autoware-interfaces/ad-api/features/fail-safe/#fail-safe","title":"Fail-safe","text":""},{"location":"design/autoware-interfaces/ad-api/features/fail-safe/#related-api","title":"Related API","text":"
    • /api/fail_safe/mrm_state
    "},{"location":"design/autoware-interfaces/ad-api/features/fail-safe/#description","title":"Description","text":"

    This API manages the behavior related to the abnormality of the vehicle. It provides the state of Request to Intervene (RTI), Minimal Risk Maneuver (MRM) and Minimal Risk Condition (MRC). As shown below, Autoware has the gate to switch between the command during normal operation and the command during abnormal operation. For safety, Autoware switches the operation to MRM when an abnormality is detected. Since the required behavior differs depending on the situation, MRM is implemented in various places as a specific mode in a normal module or as an independent module. The fail-safe module selects the behavior of MRM according to the abnormality and switches the gate output to that command.

    "},{"location":"design/autoware-interfaces/ad-api/features/fail-safe/#states","title":"States","text":"

    The MRM state indicates whether MRM is operating. This state also provides success or failure. Generally, MRM will switch to another behavior if it fails.

| State | Description |
| --- | --- |
| NONE | MRM is not operating. |
| OPERATING | MRM is operating because an abnormality has been detected. |
| SUCCEEDED | MRM succeeded. The vehicle is in a safe condition. |
| FAILED | MRM failed. The vehicle is still in an unsafe condition. |

"},{"location":"design/autoware-interfaces/ad-api/features/fail-safe/#behavior","title":"Behavior","text":"

There is a dependency between MRM behaviors. For example, it switches from a comfortable stop to an emergency stop, but not the other way around. This is service dependent. Autoware supports the following transitions by default.

| State | Description |
| --- | --- |
| NONE | MRM is not operating or is operating but no special behavior is required. |
| COMFORTABLE_STOP | The vehicle will stop quickly with a comfortable deceleration. |
| EMERGENCY_STOP | The vehicle will stop immediately with as much deceleration as possible. |
| PULL_OVER | The vehicle will stop after moving to the side of the road. |

"},{"location":"design/autoware-interfaces/ad-api/features/heartbeat/","title":"Heartbeat","text":""},{"location":"design/autoware-interfaces/ad-api/features/heartbeat/#heartbeat","title":"Heartbeat","text":""},{"location":"design/autoware-interfaces/ad-api/features/heartbeat/#related-api","title":"Related API","text":"
    • /api/system/heartbeat
    "},{"location":"design/autoware-interfaces/ad-api/features/heartbeat/#description","title":"Description","text":"

This API is used to check whether applications and Autoware are communicating properly. The message contains a timestamp and a sequence number to check for communication delays, ordering, and losses.
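For illustration only (the field names and thresholds below are assumptions, not the actual heartbeat message definition), an application-side check for delay and message loss might look like this:

```python
import time

class HeartbeatMonitor:
    """Track delay and losses from a heartbeat stream carrying a stamp
    (seconds since epoch) and a monotonically increasing sequence number."""

    def __init__(self, max_delay_sec: float = 0.5):
        self._max_delay_sec = max_delay_sec
        self._last_seq = None

    def on_heartbeat(self, stamp_sec: float, seq: int) -> None:
        # Delay check: compare the message stamp against the local clock.
        delay = time.time() - stamp_sec
        if delay > self._max_delay_sec:
            print(f"heartbeat delayed by {delay:.3f} s")
        # Loss/order check: sequence numbers should increase by exactly one.
        if self._last_seq is not None and seq != self._last_seq + 1:
            print(f"heartbeat loss or reorder: expected {self._last_seq + 1}, got {seq}")
        self._last_seq = seq
```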

    "},{"location":"design/autoware-interfaces/ad-api/features/interface/","title":"Interface","text":""},{"location":"design/autoware-interfaces/ad-api/features/interface/#interface","title":"Interface","text":""},{"location":"design/autoware-interfaces/ad-api/features/interface/#related-api","title":"Related API","text":"
    • /api/interface/version
    "},{"location":"design/autoware-interfaces/ad-api/features/interface/#description","title":"Description","text":"

    This API provides the interface version of the set of AD APIs. It follows Semantic Versioning in order to provide an intuitive understanding of the changes between versions.

    "},{"location":"design/autoware-interfaces/ad-api/features/localization/","title":"Localization","text":""},{"location":"design/autoware-interfaces/ad-api/features/localization/#localization","title":"Localization","text":""},{"location":"design/autoware-interfaces/ad-api/features/localization/#related-api","title":"Related API","text":"
    • /api/localization/initialization_state
    • /api/localization/initialize
    "},{"location":"design/autoware-interfaces/ad-api/features/localization/#description","title":"Description","text":"

    This API manages the initialization of localization. Autoware requires a global pose as the initial guess for localization.

    "},{"location":"design/autoware-interfaces/ad-api/features/localization/#states","title":"States","text":"State Description UNINITIALIZED Localization is not initialized. Waiting for a global pose as the initial guess. INITIALIZING Localization is initializing. INITIALIZED Localization is initialized. Initialization can be requested again if necessary."},{"location":"design/autoware-interfaces/ad-api/features/manual-control/","title":"Manual control","text":""},{"location":"design/autoware-interfaces/ad-api/features/manual-control/#manual-control","title":"Manual control","text":""},{"location":"design/autoware-interfaces/ad-api/features/manual-control/#related-api","title":"Related API","text":"
    • /api/remote/control_mode/list
    • /api/remote/control_mode/select
    • /api/remote/control_mode/status
    • /api/remote/operator/status
    • /api/remote/command/pedal
    • /api/remote/command/acceleration
    • /api/remote/command/steering
    • /api/remote/command/gear
    • /api/remote/command/turn_indicators
    • /api/remote/command/hazard_lights
    • /api/local/control_mode/list
    • /api/local/control_mode/select
    • /api/local/control_mode/status
    • /api/local/operator/status
    • /api/local/command/pedal
    • /api/local/command/acceleration
    • /api/local/command/steering
    • /api/local/command/gear
    • /api/local/command/turn_indicators
    • /api/local/command/hazard_lights
    "},{"location":"design/autoware-interfaces/ad-api/features/manual-control/#description","title":"Description","text":"

    This API is used to manually control the vehicle, and provides the same interface for different operators: remote and local. For example, the local operator controls a vehicle without a driver's seat using a joystick, while the remote operator provides remote support when problems occur with autonomous driving. The command sent is used in operation modes remote and local.

    "},{"location":"design/autoware-interfaces/ad-api/features/manual-control/#operator-status","title":"Operator status","text":"

    The application needs to determine whether the operator is able to drive and send that information via the operator status API. If the operator is unable to continue driving during manual operation, Autoware will perform MRM to bring the vehicle to a safe state. For level 3 and below, the operator status is referenced even during autonomous driving.

    "},{"location":"design/autoware-interfaces/ad-api/features/manual-control/#control-mode","title":"Control mode","text":"

    Since there are multiple ways to control a vehicle, such as pedals or acceleration, the application must first select a control mode.

| Mode | Description |
| --- | --- |
| disabled | This is the initial mode. When selected, all command APIs are unavailable. |
| pedal | This mode provides longitudinal control using the pedals. |
| acceleration | This mode provides longitudinal control using the target acceleration. |

"},{"location":"design/autoware-interfaces/ad-api/features/manual-control/#commands","title":"Commands","text":"

    The commands available in each mode are as follows.

| Command | disabled | pedal | acceleration |
| --- | --- | --- | --- |
| pedal | - | ✓ | - |
| acceleration | - | - | ✓ |
| steering | - | ✓ | ✓ |
| gear | - | ✓ | ✓ |
| turn_indicators | - | ✓ | ✓ |
| hazard_lights | - | ✓ | ✓ |

"},{"location":"design/autoware-interfaces/ad-api/features/motion/","title":"Motion","text":""},{"location":"design/autoware-interfaces/ad-api/features/motion/#motion","title":"Motion","text":""},{"location":"design/autoware-interfaces/ad-api/features/motion/#related-api","title":"Related API","text":"
    • /api/motion/state
    • /api/motion/accept_start
    "},{"location":"design/autoware-interfaces/ad-api/features/motion/#description","title":"Description","text":"

This API manages the current behavior of the vehicle. Applications can use it to notify people around the vehicle of its behavior and to visualize it for the operator and passengers.

    "},{"location":"design/autoware-interfaces/ad-api/features/motion/#states","title":"States","text":"

    The motion state manages the stop and start of the vehicle. Once the vehicle has stopped, the state will be STOPPED. After this, when the vehicle tries to start (is still stopped), the state will be STARTING. In this state, calling the start API changes the state to MOVING and the vehicle starts. This mechanism can add processing such as announcements before the vehicle starts. Depending on the configuration, the state may transition directly from STOPPED to MOVING.

| State | Description |
| --- | --- |
| STOPPED | The vehicle is stopped. |
| STARTING | The vehicle is stopped, but is trying to start. |
| MOVING | The vehicle is moving. |
| BRAKING (T.B.D.) | The vehicle is decelerating strongly. |

"},{"location":"design/autoware-interfaces/ad-api/features/operation_mode/","title":"Operation mode","text":""},{"location":"design/autoware-interfaces/ad-api/features/operation_mode/#operation-mode","title":"Operation mode","text":""},{"location":"design/autoware-interfaces/ad-api/features/operation_mode/#related-api","title":"Related API","text":"
    • /api/operation_mode/state
    • /api/operation_mode/change_to_autonomous
    • /api/operation_mode/change_to_stop
    • /api/operation_mode/change_to_local
    • /api/operation_mode/change_to_remote
    • /api/operation_mode/enable_autoware_control
    • /api/operation_mode/disable_autoware_control
    "},{"location":"design/autoware-interfaces/ad-api/features/operation_mode/#description","title":"Description","text":"

    As shown below, Autoware assumes that the vehicle interface has two modes, Autoware control and direct control. In direct control mode, the vehicle is operated using devices such as steering and pedals. If the vehicle does not support direct control mode, it is always treated as Autoware control mode. Autoware control mode has four operation modes.

| Mode | Description |
| --- | --- |
| Stop | Keep the vehicle stopped. |
| Autonomous | Autonomously control the vehicle. |
| Local | Manually control the vehicle from nearby with some device such as a joystick. |
| Remote | Manually control the vehicle from a web application on the cloud. |

    "},{"location":"design/autoware-interfaces/ad-api/features/operation_mode/#states","title":"States","text":""},{"location":"design/autoware-interfaces/ad-api/features/operation_mode/#autoware-control-flag","title":"Autoware control flag","text":"

    The flag is_autoware_control_enabled indicates if the vehicle is controlled by Autoware. The enable and disable APIs can be used if the control can be switched by software. These APIs will always fail if the vehicle does not support mode switching or is switched by hardware.

    "},{"location":"design/autoware-interfaces/ad-api/features/operation_mode/#operation-mode-and-change-flags","title":"Operation mode and change flags","text":"

    The state operation_mode indicates what command is used when Autoware control is enabled. The flags change_to_* can be used to check if it is possible to transition to each mode.

    "},{"location":"design/autoware-interfaces/ad-api/features/operation_mode/#transition-flag","title":"Transition flag","text":"

Autoware may not be able to guarantee safety in some situations, such as when switching to autonomous mode while overspeeding. The flag is_in_transition exists for this situation and will be true while the mode is changing. The operator who changed the mode should ensure safety while this flag is true. The flag will be false when the mode change is complete.

    "},{"location":"design/autoware-interfaces/ad-api/features/perception/","title":"Perception","text":""},{"location":"design/autoware-interfaces/ad-api/features/perception/#perception","title":"Perception","text":""},{"location":"design/autoware-interfaces/ad-api/features/perception/#related-api","title":"Related API","text":"
    • /api/perception/objects
    "},{"location":"design/autoware-interfaces/ad-api/features/perception/#description","title":"Description","text":"

API for perception-related topics.

    "},{"location":"design/autoware-interfaces/ad-api/features/planning-factors/","title":"Planning factors","text":""},{"location":"design/autoware-interfaces/ad-api/features/planning-factors/#planning-factors","title":"Planning factors","text":""},{"location":"design/autoware-interfaces/ad-api/features/planning-factors/#related-api","title":"Related API","text":"
    • /api/planning/velocity_factors
    • /api/planning/steering_factors
    "},{"location":"design/autoware-interfaces/ad-api/features/planning-factors/#description","title":"Description","text":"

This API manages the planned behavior of the vehicle. Applications can use it to notify people around the vehicle of its behavior and to visualize it for the operator and passengers.

    "},{"location":"design/autoware-interfaces/ad-api/features/planning-factors/#velocity-factors","title":"Velocity factors","text":"

The velocity factors are an array of information about behaviors that cause the vehicle to stop or slow down. Each factor has a behavior type, described below. Some behavior types have sequence and details as additional information.

    Behavior Description surrounding-obstacle There are obstacles immediately around the vehicle. route-obstacle There are obstacles along the route ahead. intersection There are obstacles in other lanes in the path. crosswalk There are obstacles on the crosswalk. rear-check There are obstacles behind that would be in a human driver's blind spot. user-defined-attention-area There are obstacles in the predefined attention area. no-stopping-area There is not enough space beyond the no stopping area. stop-sign A stop by a stop sign. traffic-signal A stop by a traffic signal. v2x-gate-area A stop by a gate area. It has enter and leave as sequences and v2x type as details. merge A stop before merging lanes. sidewalk A stop before crossing the sidewalk. lane-change A lane change. avoidance A path change to avoid an obstacle in the current lane. emergency-operation A stop by emergency instruction from the operator.

Each factor also provides a status, a pose in the base link frame, and the distance to that pose. As the vehicle approaches the stop position, the factor appears with a status of APPROACHING, and when the vehicle reaches that position and stops, the status becomes STOPPED. The pose indicates the stop position, or the base link pose if the stop position cannot be calculated.
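The following rclpy sketch illustrates consuming this stream: it subscribes to /api/planning/velocity_factors and logs the nearest factor. The STOPPED constant name on the VelocityFactor message is assumed from the status values described above.

```python
# Sketch: log the nearest velocity factor and whether the vehicle has stopped for it.
# The VelocityFactor.STOPPED constant name is assumed from the status description above.
import rclpy
from rclpy.node import Node
from autoware_adapi_v1_msgs.msg import VelocityFactorArray, VelocityFactor


class VelocityFactorMonitor(Node):
    def __init__(self):
        super().__init__('velocity_factor_monitor')
        self.create_subscription(
            VelocityFactorArray, '/api/planning/velocity_factors', self.on_factors, 10)

    def on_factors(self, msg: VelocityFactorArray):
        if not msg.factors:
            return
        nearest = min(msg.factors, key=lambda f: f.distance)
        stopped = nearest.status == VelocityFactor.STOPPED
        self.get_logger().info(
            f'{nearest.behavior}: {nearest.distance:.1f} m ahead, stopped={stopped}')


def main():
    rclpy.init()
    rclpy.spin(VelocityFactorMonitor())


if __name__ == '__main__':
    main()
```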

    "},{"location":"design/autoware-interfaces/ad-api/features/planning-factors/#steering-factors","title":"Steering factors","text":"

The steering factors are an array of information about maneuvers that require the use of turn indicators, such as turning left or right. Each factor has a behavior type, described below, and a steering direction. Some behavior types have sequence and details as additional information.

    Behavior Description intersection A turning left or right at an intersection. lane-change A lane change. avoidance A path change to avoid an obstacle. It has a sequence of change and return. start-planner T.B.D. goal-planner T.B.D. emergency-operation A path change by emergency instruction from the operator.

Each factor also provides a status, poses in the base link frame, and distances to those poses. As the vehicle approaches the position where steering starts, the factor appears with a status of APPROACHING, and when the vehicle reaches that position, the status becomes TURNING. The poses indicate the start and end positions of the section in which the status is TURNING.

In cases such as lane change and avoidance, the vehicle may start steering at any position within a range, depending on the situation. For these types, the section in which the status is TURNING is updated dynamically and the poses follow it.
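A similar sketch for steering factors could drive an external turn-signal display. The TURNING constant name on the SteeringFactor message is assumed from the status description above; the direction field is printed as its raw value.

```python
# Sketch: report steering factors whose section is currently active (status TURNING).
# The SteeringFactor.TURNING constant name is assumed from the description above.
import rclpy
from rclpy.node import Node
from autoware_adapi_v1_msgs.msg import SteeringFactorArray, SteeringFactor


class SteeringFactorMonitor(Node):
    def __init__(self):
        super().__init__('steering_factor_monitor')
        self.create_subscription(
            SteeringFactorArray, '/api/planning/steering_factors', self.on_factors, 10)

    def on_factors(self, msg: SteeringFactorArray):
        for factor in msg.factors:
            if factor.status != SteeringFactor.TURNING:
                continue
            # distance[0] / distance[1] are the start and end of the TURNING section.
            self.get_logger().info(
                f'{factor.behavior} (direction={factor.direction}): section '
                f'{factor.distance[0]:.1f} m to {factor.distance[1]:.1f} m')


def main():
    rclpy.init()
    rclpy.spin(SteeringFactorMonitor())


if __name__ == '__main__':
    main()
```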

    "},{"location":"design/autoware-interfaces/ad-api/features/routing/","title":"Routing","text":""},{"location":"design/autoware-interfaces/ad-api/features/routing/#routing","title":"Routing","text":""},{"location":"design/autoware-interfaces/ad-api/features/routing/#related-api","title":"Related API","text":"
    • /api/routing/state
    • /api/routing/route
    • /api/routing/clear_route
    • /api/routing/set_route_points
    • /api/routing/set_route
    • /api/routing/change_route_points
    • /api/routing/change_route
    "},{"location":"design/autoware-interfaces/ad-api/features/routing/#description","title":"Description","text":"

This API manages the destination and waypoints. Note that waypoints are not stops; they are just points to pass through. In other words, Autoware does not support a route with multiple stops; the application needs to split such a route and switch between the parts. There are two ways to set a route: a generic method that uses poses, and a map-dependent method.
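A minimal sketch of the generic, pose-based method using /api/routing/set_route_points is shown below, using only the request fields listed later in this document (header, goal, waypoints). The map frame name and goal coordinates are placeholders.

```python
# Sketch: set a route to a single goal pose with no intermediate waypoints.
# The 'map' frame and the goal coordinates are placeholders for illustration.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Pose
from autoware_adapi_v1_msgs.srv import SetRoutePoints


def main():
    rclpy.init()
    node = Node('route_setter')
    client = node.create_client(SetRoutePoints, '/api/routing/set_route_points')
    client.wait_for_service()

    request = SetRoutePoints.Request()
    request.header.frame_id = 'map'
    request.header.stamp = node.get_clock().now().to_msg()
    goal = Pose()
    goal.position.x = 100.0
    goal.position.y = 50.0
    goal.orientation.w = 1.0
    request.goal = goal
    request.waypoints = []  # waypoints are pass-through points, not stops

    future = client.call_async(request)
    rclpy.spin_until_future_complete(node, future)
    node.get_logger().info(f'set_route_points: {future.result().status}')


if __name__ == '__main__':
    main()
```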

    "},{"location":"design/autoware-interfaces/ad-api/features/routing/#states","title":"States","text":"State Description UNSET The route is not set. Waiting for a route request. SET The route is set. ARRIVED The vehicle has arrived at the destination. CHANGING Trying to change the route."},{"location":"design/autoware-interfaces/ad-api/features/routing/#options","title":"Options","text":"

    The route set and change APIs have route options that allow applications to choose several behaviors regarding route planning. See the sections below for supported options and details.

    "},{"location":"design/autoware-interfaces/ad-api/features/routing/#allow_goal_modification","title":"allow_goal_modification","text":"

[v1.1.0] Autoware tries to find an alternate goal when the goal is unreachable (e.g., when there is an obstacle on the given goal). When setting a route from the API, applications can choose whether to allow Autoware to adjust the goal pose in such a situation. When set to false, Autoware may get stuck until the given goal becomes reachable.

    "},{"location":"design/autoware-interfaces/ad-api/features/routing/#allow_while_using_route","title":"allow_while_using_route","text":"

[v1.6.0] This option only affects the route change APIs. When set to true, Autoware accepts a new route even while the vehicle is using the current route; the APIs fail if it cannot safely transition to the new route. When set to false, the APIs always fail while the vehicle is using the route.

    "},{"location":"design/autoware-interfaces/ad-api/features/vehicle-doors/","title":"Vehicle doors","text":""},{"location":"design/autoware-interfaces/ad-api/features/vehicle-doors/#vehicle-doors","title":"Vehicle doors","text":""},{"location":"design/autoware-interfaces/ad-api/features/vehicle-doors/#related-api","title":"Related API","text":"
    • /api/vehicle/doors/layout
    • /api/vehicle/doors/status
    • /api/vehicle/doors/command
    "},{"location":"design/autoware-interfaces/ad-api/features/vehicle-doors/#description","title":"Description","text":"

    This feature is available if the vehicle provides a software interface for the doors. It can be used to create user interfaces for passengers or to control sequences at bus stops.

    "},{"location":"design/autoware-interfaces/ad-api/features/vehicle-doors/#layout","title":"Layout","text":"

Each door in a vehicle is assigned an array index. This assignment is vehicle dependent. The layout API returns this information. The description field is a string to be displayed in the user interface, etc. It is an arbitrary string and is not recommended for use in application logic. Use the roles field to identify the doors used for getting on and off. Below is an example of the information returned by the layout API.

    Index Description Roles 0 front right - 1 front left GET_ON 2 rear right GET_OFF 3 rear left GET_ON, GET_OFF"},{"location":"design/autoware-interfaces/ad-api/features/vehicle-doors/#status","title":"Status","text":"

    The status API provides an array of door status. This array order is consistent with the layout API.

    "},{"location":"design/autoware-interfaces/ad-api/features/vehicle-doors/#control","title":"Control","text":"

Use the command API to control the doors. Unlike the status and layout APIs, the array index does not correspond to a door; instead, the command has a field to specify the target door index.
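As a hedged sketch, an application might open a specific door as shown below. The SetDoorCommand service type, the DoorCommand message, and their doors/index/command fields and OPEN constant are assumptions; they are not listed in this section and should be checked against the message definitions of your Autoware version.

```python
# Sketch: request to open door index 1 (e.g. a GET_ON door from the layout example).
# The SetDoorCommand service, DoorCommand message, and their fields/constants are assumptions.
import rclpy
from rclpy.node import Node
from autoware_adapi_v1_msgs.srv import SetDoorCommand
from autoware_adapi_v1_msgs.msg import DoorCommand


def main():
    rclpy.init()
    node = Node('door_opener')
    client = node.create_client(SetDoorCommand, '/api/vehicle/doors/command')
    client.wait_for_service()

    command = DoorCommand()
    command.index = 1                    # target door index from the layout API
    command.command = DoorCommand.OPEN   # assumed constant on DoorCommand

    future = client.call_async(SetDoorCommand.Request(doors=[command]))
    rclpy.spin_until_future_complete(node, future)
    node.get_logger().info(f'door command: {future.result().status}')


if __name__ == '__main__':
    main()
```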

    "},{"location":"design/autoware-interfaces/ad-api/features/vehicle-status/","title":"Vehicle status","text":""},{"location":"design/autoware-interfaces/ad-api/features/vehicle-status/#vehicle-status","title":"Vehicle status","text":""},{"location":"design/autoware-interfaces/ad-api/features/vehicle-status/#related-api","title":"Related API","text":"
    • /api/vehicle/kinematics
    • /api/vehicle/status
    • /api/vehicle/dimensions
    "},{"location":"design/autoware-interfaces/ad-api/features/vehicle-status/#kinematics","title":"Kinematics","text":"

This is an estimate of the vehicle kinematics. The vehicle position is necessary for applications to schedule dispatches. Also, using velocity and acceleration, applications can find vehicles that need operator assistance, such as those that are stuck or have braked suddenly.
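A rough sketch of such a stuck-vehicle check is shown below. The message type (autoware_adapi_v1_msgs/msg/VehicleKinematics) and its pose/twist field layout are assumptions for this sketch, as they are not spelled out in this section.

```python
# Sketch: flag the vehicle as possibly stuck if its speed stays near zero for a while.
# The VehicleKinematics message type and its twist field layout are assumptions.
import math
import rclpy
from rclpy.node import Node
from autoware_adapi_v1_msgs.msg import VehicleKinematics


class StuckChecker(Node):
    STUCK_SECONDS = 30.0  # placeholder threshold

    def __init__(self):
        super().__init__('stuck_checker')
        self.last_moving_time = self.get_clock().now()
        self.create_subscription(
            VehicleKinematics, '/api/vehicle/kinematics', self.on_kinematics, 10)

    def on_kinematics(self, msg: VehicleKinematics):
        linear = msg.twist.twist.linear
        speed = math.hypot(linear.x, linear.y)
        now = self.get_clock().now()
        if speed > 0.1:
            self.last_moving_time = now
        elif (now - self.last_moving_time).nanoseconds > self.STUCK_SECONDS * 1e9:
            self.get_logger().warn('Vehicle has not moved recently; operator assistance may be needed')


def main():
    rclpy.init()
    rclpy.spin(StuckChecker())


if __name__ == '__main__':
    main()
```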

    "},{"location":"design/autoware-interfaces/ad-api/features/vehicle-status/#status","title":"Status","text":"

This is the status provided by the vehicle. The indicators and steering are mainly used for visualization and remote control. The remaining energy can also be used for vehicle scheduling.

    "},{"location":"design/autoware-interfaces/ad-api/features/vehicle-status/#dimensions","title":"Dimensions","text":"

The vehicle dimensions are used to determine the actual distance between the vehicle and objects, because the vehicle position in the kinematics is the coordinate of the base link. This is necessary for visualization when supporting vehicles remotely.

    "},{"location":"design/autoware-interfaces/ad-api/list/","title":"List of Autoware AD API","text":""},{"location":"design/autoware-interfaces/ad-api/list/#list-of-autoware-ad-api","title":"List of Autoware AD API","text":"API Release /api/fail_safe/mrm_state v1.1.0 /api/interface/version v1.0.0 /api/local/command/acceleration not released /api/local/command/gear not released /api/local/command/hazard_lights not released /api/local/command/pedal not released /api/local/command/steering not released /api/local/command/turn_indicators not released /api/local/control_mode/list not released /api/local/control_mode/select not released /api/local/control_mode/status not released /api/local/operator/status not released /api/localization/initialization_state v1.0.0 /api/localization/initialize v1.0.0 /api/motion/accept_start not released /api/motion/state not released /api/operation_mode/change_to_autonomous v1.0.0 /api/operation_mode/change_to_local v1.0.0 /api/operation_mode/change_to_remote v1.0.0 /api/operation_mode/change_to_stop v1.0.0 /api/operation_mode/disable_autoware_control v1.0.0 /api/operation_mode/enable_autoware_control v1.0.0 /api/operation_mode/state v1.0.0 /api/perception/objects not released /api/planning/cooperation/get_policies not released /api/planning/cooperation/set_commands not released /api/planning/cooperation/set_policies not released /api/planning/steering_factors not released /api/planning/velocity_factors not released /api/remote/command/acceleration not released /api/remote/command/gear not released /api/remote/command/hazard_lights not released /api/remote/command/pedal not released /api/remote/command/steering not released /api/remote/command/turn_indicators not released /api/remote/control_mode/list not released /api/remote/control_mode/select not released /api/remote/control_mode/status not released /api/remote/operator/status not released /api/routing/change_route v1.5.0 /api/routing/change_route_points v1.5.0 /api/routing/clear_route v1.0.0 /api/routing/route v1.0.0 /api/routing/set_route v1.0.0 /api/routing/set_route_points v1.0.0 /api/routing/state v1.0.0 /api/system/diagnostics/status v1.3.0 /api/system/diagnostics/struct v1.3.0 /api/system/heartbeat v1.3.0 /api/vehicle/dimensions v1.1.0 /api/vehicle/doors/command v1.2.0 /api/vehicle/doors/layout v1.2.0 /api/vehicle/doors/status v1.2.0 /api/vehicle/kinematics v1.1.0 /api/vehicle/status v1.4.0"},{"location":"design/autoware-interfaces/ad-api/list/api/fail_safe/mrm_state/","title":"/api/fail_safe/mrm_state","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/fail_safe/mrm_state/#apifail_safemrm_state","title":"/api/fail_safe/mrm_state","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/fail_safe/mrm_state/#status","title":"Status","text":"
    • Latest Version: v1.1.0
    • Method: notification
    • Data Type: autoware_adapi_v1_msgs/msg/MrmState
    "},{"location":"design/autoware-interfaces/ad-api/list/api/fail_safe/mrm_state/#description","title":"Description","text":"

    Get the MRM state. For details, see the fail-safe.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/fail_safe/mrm_state/#message","title":"Message","text":"Name Type Description state uint16 The state of MRM operation. behavior uint16 The currently selected behavior of MRM."},{"location":"design/autoware-interfaces/ad-api/list/api/interface/version/","title":"/api/interface/version","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/interface/version/#apiinterfaceversion","title":"/api/interface/version","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/interface/version/#status","title":"Status","text":"
    • Latest Version: v1.0.0
    • Method: function call
    • Data Type: autoware_adapi_version_msgs/srv/InterfaceVersion
    "},{"location":"design/autoware-interfaces/ad-api/list/api/interface/version/#description","title":"Description","text":"

    Get the interface version. The version follows Semantic Versioning.
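A minimal sketch of a compatibility check against this version is shown below; under Semantic Versioning, the same major version with an equal or newer minor version is compatible. The required major/minor values are placeholders chosen by the application.

```python
# Sketch: check that the running AD API is compatible with the version the application expects.
# REQUIRED_MAJOR / REQUIRED_MINOR are placeholders; choose them for your application.
import rclpy
from rclpy.node import Node
from autoware_adapi_version_msgs.srv import InterfaceVersion

REQUIRED_MAJOR = 1
REQUIRED_MINOR = 1


def main():
    rclpy.init()
    node = Node('version_check')
    client = node.create_client(InterfaceVersion, '/api/interface/version')
    client.wait_for_service()

    future = client.call_async(InterfaceVersion.Request())
    rclpy.spin_until_future_complete(node, future)
    version = future.result()

    # Semantic Versioning: same major, equal or newer minor.
    compatible = version.major == REQUIRED_MAJOR and version.minor >= REQUIRED_MINOR
    node.get_logger().info(
        f'AD API v{version.major}.{version.minor}.{version.patch}, compatible={compatible}')


if __name__ == '__main__':
    main()
```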

    "},{"location":"design/autoware-interfaces/ad-api/list/api/interface/version/#request","title":"Request","text":"

    None

    "},{"location":"design/autoware-interfaces/ad-api/list/api/interface/version/#response","title":"Response","text":"Name Type Description major uint16 major version minor uint16 minor version patch uint16 patch version"},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/acceleration/","title":"/api/local/command/acceleration","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/acceleration/#apilocalcommandacceleration","title":"/api/local/command/acceleration","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/acceleration/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: realtime stream
    • Data Type: autoware_adapi_v1_msgs/msg/AccelerationCommand
    "},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/acceleration/#description","title":"Description","text":"

    Sends acceleration command used in local operation mode. To use this API, select the corresponding mode as described in manual control.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/acceleration/#message","title":"Message","text":"Name Type Description stamp builtin_interfaces/msg/Time Timestamp when this message was sent. acceleration float32 Target acceleration [m/s^2]."},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/gear/","title":"/api/local/command/gear","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/gear/#apilocalcommandgear","title":"/api/local/command/gear","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/gear/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: notification
    • Data Type: autoware_adapi_v1_msgs/msg/GearCommand
    "},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/gear/#description","title":"Description","text":"

    Sends gear command used in local operation mode. To use this API, select the corresponding mode as described in manual control.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/gear/#message","title":"Message","text":"Name Type Description stamp builtin_interfaces/msg/Time Timestamp when this message was sent. command autoware_adapi_v1_msgs/msg/Gear Target gear status."},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/hazard_lights/","title":"/api/local/command/hazard_lights","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/hazard_lights/#apilocalcommandhazard_lights","title":"/api/local/command/hazard_lights","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/hazard_lights/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: notification
    • Data Type: autoware_adapi_v1_msgs/msg/HazardLightsCommand
    "},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/hazard_lights/#description","title":"Description","text":"

    Sends hazard lights command used in local operation mode. To use this API, select the corresponding mode as described in manual control.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/hazard_lights/#message","title":"Message","text":"Name Type Description stamp builtin_interfaces/msg/Time Timestamp when this message was sent. command autoware_adapi_v1_msgs/msg/HazardLights Target hazard lights status."},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/pedal/","title":"/api/local/command/pedal","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/pedal/#apilocalcommandpedal","title":"/api/local/command/pedal","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/pedal/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: realtime stream
    • Data Type: autoware_adapi_v1_msgs/msg/PedalCommand
    "},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/pedal/#description","title":"Description","text":"

Sends the pedal command used in local operation mode. The pedal value is a ratio, where 1.0 is maximum pedal depression. To use this API, select the corresponding mode as described in manual control.
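As an illustration, a device bridge might publish this command as sketched below; the publishing rate and pedal values are placeholders, and a real bridge would map them from the input device.

```python
# Sketch: publish a pedal command periodically in local mode.
# The rate and the fixed pedal values are placeholders for illustration.
import rclpy
from rclpy.node import Node
from autoware_adapi_v1_msgs.msg import PedalCommand


class PedalBridge(Node):
    def __init__(self):
        super().__init__('pedal_bridge')
        self.pub = self.create_publisher(PedalCommand, '/api/local/command/pedal', 10)
        self.create_timer(0.1, self.publish_command)  # realtime stream, sent periodically

    def publish_command(self):
        msg = PedalCommand()
        msg.stamp = self.get_clock().now().to_msg()
        msg.accelerator = 0.0  # ratio of maximum depression, 0.0 to 1.0
        msg.brake = 0.2
        self.pub.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(PedalBridge())


if __name__ == '__main__':
    main()
```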

    "},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/pedal/#message","title":"Message","text":"Name Type Description stamp builtin_interfaces/msg/Time Timestamp when this message was sent. accelerator float32 Target accelerator pedal ratio. brake float32 Target brake pedal ratio."},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/steering/","title":"/api/local/command/steering","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/steering/#apilocalcommandsteering","title":"/api/local/command/steering","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/steering/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: realtime stream
    • Data Type: autoware_adapi_v1_msgs/msg/SteeringCommand
    "},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/steering/#description","title":"Description","text":"

Sends the steering command used in local operation mode. To use this API, select the corresponding mode as described in manual control.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/steering/#message","title":"Message","text":"Name Type Description stamp builtin_interfaces/msg/Time Timestamp when this message was sent. steering_tire_angle float32 Target steering tire angle [rad]."},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/turn_indicators/","title":"/api/local/command/turn_indicators","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/turn_indicators/#apilocalcommandturn_indicators","title":"/api/local/command/turn_indicators","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/turn_indicators/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: notification
    • Data Type: autoware_adapi_v1_msgs/msg/TurnIndicatorsCommand
    "},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/turn_indicators/#description","title":"Description","text":"

    Sends turn indicators command used in local operation mode. To use this API, select the corresponding mode as described in manual control.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/turn_indicators/#message","title":"Message","text":"Name Type Description stamp builtin_interfaces/msg/Time Timestamp when this message was sent. command autoware_adapi_v1_msgs/msg/TurnIndicators Target turn indicators status."},{"location":"design/autoware-interfaces/ad-api/list/api/local/control_mode/list/","title":"/api/local/control_mode/list","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/local/control_mode/list/#apilocalcontrol_modelist","title":"/api/local/control_mode/list","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/local/control_mode/list/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/ListManualControlMode
    "},{"location":"design/autoware-interfaces/ad-api/list/api/local/control_mode/list/#description","title":"Description","text":"

    List the available manual control modes as described in manual control. The disabled mode is not included in the available modes.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/local/control_mode/list/#request","title":"Request","text":"

    None

    "},{"location":"design/autoware-interfaces/ad-api/list/api/local/control_mode/list/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status modes autoware_adapi_v1_msgs/msg/ManualControlMode[] List of available modes."},{"location":"design/autoware-interfaces/ad-api/list/api/local/control_mode/select/","title":"/api/local/control_mode/select","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/local/control_mode/select/#apilocalcontrol_modeselect","title":"/api/local/control_mode/select","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/local/control_mode/select/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/SelectManualControlMode
    "},{"location":"design/autoware-interfaces/ad-api/list/api/local/control_mode/select/#description","title":"Description","text":"

Selects the manual control mode as described in manual control. This API fails while the operation mode is local.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/local/control_mode/select/#request","title":"Request","text":"Name Type Description mode autoware_adapi_v1_msgs/msg/ManualControlMode The manual control mode to be used."},{"location":"design/autoware-interfaces/ad-api/list/api/local/control_mode/select/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status"},{"location":"design/autoware-interfaces/ad-api/list/api/local/control_mode/status/","title":"/api/local/control_mode/status","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/local/control_mode/status/#apilocalcontrol_modestatus","title":"/api/local/control_mode/status","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/local/control_mode/status/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: notification
    • Data Type: autoware_adapi_v1_msgs/msg/ManualControlModeStatus
    "},{"location":"design/autoware-interfaces/ad-api/list/api/local/control_mode/status/#description","title":"Description","text":"

    Get the current manual operation mode.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/local/control_mode/status/#message","title":"Message","text":"Name Type Description stamp builtin_interfaces/msg/Time Timestamp when this message was sent."},{"location":"design/autoware-interfaces/ad-api/list/api/local/operator/status/","title":"/api/local/operator/status","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/local/operator/status/#apilocaloperatorstatus","title":"/api/local/operator/status","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/local/operator/status/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: realtime stream
    • Data Type: autoware_adapi_v1_msgs/msg/ManualOperatorStatus
    "},{"location":"design/autoware-interfaces/ad-api/list/api/local/operator/status/#description","title":"Description","text":"

    The application needs to determine whether the operator is able to drive and send that information via this API. For details, see the manual control.
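A minimal sketch of reporting operator readiness is shown below. The publishing rate and the hard-coded readiness are placeholders; a real application would derive the ready flag from its own device or UI state.

```python
# Sketch: periodically report whether the local operator is able to continue driving.
# The rate and the fixed 'ready' value are placeholders for illustration.
import rclpy
from rclpy.node import Node
from autoware_adapi_v1_msgs.msg import ManualOperatorStatus


class OperatorStatusReporter(Node):
    def __init__(self):
        super().__init__('operator_status_reporter')
        self.pub = self.create_publisher(ManualOperatorStatus, '/api/local/operator/status', 10)
        self.create_timer(0.2, self.publish_status)

    def publish_status(self):
        msg = ManualOperatorStatus()
        msg.stamp = self.get_clock().now().to_msg()
        msg.ready = True  # whether the operator is able to continue driving
        self.pub.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(OperatorStatusReporter())


if __name__ == '__main__':
    main()
```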

    "},{"location":"design/autoware-interfaces/ad-api/list/api/local/operator/status/#message","title":"Message","text":"Name Type Description stamp builtin_interfaces/msg/Time Timestamp when this message was sent. ready bool Whether the operator is able to continue driving."},{"location":"design/autoware-interfaces/ad-api/list/api/localization/initialization_state/","title":"/api/localization/initialization_state","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/localization/initialization_state/#apilocalizationinitialization_state","title":"/api/localization/initialization_state","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/localization/initialization_state/#status","title":"Status","text":"
    • Latest Version: v1.0.0
    • Method: notification
    • Data Type: autoware_adapi_v1_msgs/msg/LocalizationInitializationState
    "},{"location":"design/autoware-interfaces/ad-api/list/api/localization/initialization_state/#description","title":"Description","text":"

    Get the initialization state of localization. For details, see the localization.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/localization/initialization_state/#message","title":"Message","text":"Name Type Description state uint16 A value of the localization initialization state."},{"location":"design/autoware-interfaces/ad-api/list/api/localization/initialize/","title":"/api/localization/initialize","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/localization/initialize/#apilocalizationinitialize","title":"/api/localization/initialize","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/localization/initialize/#status","title":"Status","text":"
    • Latest Version: v1.0.0
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/InitializeLocalization
    "},{"location":"design/autoware-interfaces/ad-api/list/api/localization/initialize/#description","title":"Description","text":"

    Request to initialize localization. For details, see the localization.
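A minimal sketch of requesting initialization with an explicit initial guess is shown below; the pose values are placeholders, and leaving the pose array empty falls back to the GNSS pose as described in the request table.

```python
# Sketch: initialize localization from an explicit initial guess in the map frame.
# The pose values are placeholders; leave 'pose' empty to fall back to the GNSS pose.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import PoseWithCovarianceStamped
from autoware_adapi_v1_msgs.srv import InitializeLocalization


def main():
    rclpy.init()
    node = Node('localization_initializer')
    client = node.create_client(InitializeLocalization, '/api/localization/initialize')
    client.wait_for_service()

    guess = PoseWithCovarianceStamped()
    guess.header.frame_id = 'map'
    guess.header.stamp = node.get_clock().now().to_msg()
    guess.pose.pose.position.x = 100.0
    guess.pose.pose.position.y = 50.0
    guess.pose.pose.orientation.w = 1.0

    future = client.call_async(InitializeLocalization.Request(pose=[guess]))
    rclpy.spin_until_future_complete(node, future)
    node.get_logger().info(f'initialize: {future.result().status}')


if __name__ == '__main__':
    main()
```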

    "},{"location":"design/autoware-interfaces/ad-api/list/api/localization/initialize/#request","title":"Request","text":"Name Type Description pose geometry_msgs/msg/PoseWithCovarianceStamped[<=1] A global pose as the initial guess. If omitted, the GNSS pose will be used."},{"location":"design/autoware-interfaces/ad-api/list/api/localization/initialize/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status"},{"location":"design/autoware-interfaces/ad-api/list/api/motion/accept_start/","title":"/api/motion/accept_start","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/motion/accept_start/#apimotionaccept_start","title":"/api/motion/accept_start","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/motion/accept_start/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/AcceptStart
    "},{"location":"design/autoware-interfaces/ad-api/list/api/motion/accept_start/#description","title":"Description","text":"

Accept starting the vehicle. This API can be used when the motion state is STARTING.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/motion/accept_start/#request","title":"Request","text":"

    None

    "},{"location":"design/autoware-interfaces/ad-api/list/api/motion/accept_start/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status"},{"location":"design/autoware-interfaces/ad-api/list/api/motion/state/","title":"/api/motion/state","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/motion/state/#apimotionstate","title":"/api/motion/state","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/motion/state/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: notification
    • Data Type: autoware_adapi_v1_msgs/msg/MotionState
    "},{"location":"design/autoware-interfaces/ad-api/list/api/motion/state/#description","title":"Description","text":"

    Get the motion state. For details, see the motion state.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/motion/state/#message","title":"Message","text":"Name Type Description state uint16 A value of the motion state."},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_autonomous/","title":"/api/operation_mode/change_to_autonomous","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_autonomous/#apioperation_modechange_to_autonomous","title":"/api/operation_mode/change_to_autonomous","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_autonomous/#status","title":"Status","text":"
    • Latest Version: v1.0.0
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/ChangeOperationMode
    "},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_autonomous/#description","title":"Description","text":"

    Change the operation mode to autonomous. For details, see the operation mode.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_autonomous/#request","title":"Request","text":"

    None

    "},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_autonomous/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status"},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_local/","title":"/api/operation_mode/change_to_local","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_local/#apioperation_modechange_to_local","title":"/api/operation_mode/change_to_local","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_local/#status","title":"Status","text":"
    • Latest Version: v1.0.0
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/ChangeOperationMode
    "},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_local/#description","title":"Description","text":"

    Change the operation mode to local. For details, see the operation mode.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_local/#request","title":"Request","text":"

    None

    "},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_local/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status"},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_remote/","title":"/api/operation_mode/change_to_remote","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_remote/#apioperation_modechange_to_remote","title":"/api/operation_mode/change_to_remote","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_remote/#status","title":"Status","text":"
    • Latest Version: v1.0.0
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/ChangeOperationMode
    "},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_remote/#description","title":"Description","text":"

    Change the operation mode to remote. For details, see the operation mode.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_remote/#request","title":"Request","text":"

    None

    "},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_remote/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status"},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_stop/","title":"/api/operation_mode/change_to_stop","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_stop/#apioperation_modechange_to_stop","title":"/api/operation_mode/change_to_stop","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_stop/#status","title":"Status","text":"
    • Latest Version: v1.0.0
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/ChangeOperationMode
    "},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_stop/#description","title":"Description","text":"

    Change the operation mode to stop. For details, see the operation mode.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_stop/#request","title":"Request","text":"

    None

    "},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_stop/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status"},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/disable_autoware_control/","title":"/api/operation_mode/disable_autoware_control","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/disable_autoware_control/#apioperation_modedisable_autoware_control","title":"/api/operation_mode/disable_autoware_control","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/disable_autoware_control/#status","title":"Status","text":"
    • Latest Version: v1.0.0
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/ChangeOperationMode
    "},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/disable_autoware_control/#description","title":"Description","text":"

    Disable vehicle control by Autoware. For details, see the operation mode. This API fails if the vehicle does not support mode change by software.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/disable_autoware_control/#request","title":"Request","text":"

    None

    "},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/disable_autoware_control/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status"},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/enable_autoware_control/","title":"/api/operation_mode/enable_autoware_control","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/enable_autoware_control/#apioperation_modeenable_autoware_control","title":"/api/operation_mode/enable_autoware_control","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/enable_autoware_control/#status","title":"Status","text":"
    • Latest Version: v1.0.0
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/ChangeOperationMode
    "},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/enable_autoware_control/#description","title":"Description","text":"

    Enable vehicle control by Autoware. For details, see the operation mode. This API fails if the vehicle does not support mode change by software.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/enable_autoware_control/#request","title":"Request","text":"

    None

    "},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/enable_autoware_control/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status"},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/state/","title":"/api/operation_mode/state","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/state/#apioperation_modestate","title":"/api/operation_mode/state","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/state/#status","title":"Status","text":"
    • Latest Version: v1.0.0
    • Method: notification
    • Data Type: autoware_adapi_v1_msgs/msg/OperationModeState
    "},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/state/#description","title":"Description","text":"

    Get the operation mode state. For details, see the operation mode.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/state/#message","title":"Message","text":"Name Type Description mode uint8 The selected command for Autoware control. is_autoware_control_enabled bool True if vehicle control by Autoware is enabled. is_in_transition bool True if the operation mode is in transition. is_stop_mode_available bool True if the operation mode can be changed to stop. is_autonomous_mode_available bool True if the operation mode can be changed to autonomous. is_local_mode_available bool True if the operation mode can be changed to local. is_remote_mode_available bool True if the operation mode can be changed to remote."},{"location":"design/autoware-interfaces/ad-api/list/api/perception/objects/","title":"/api/perception/objects","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/perception/objects/#apiperceptionobjects","title":"/api/perception/objects","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/perception/objects/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: realtime stream
    • Data Type: autoware_adapi_v1_msgs/msg/DynamicObjectArray
    "},{"location":"design/autoware-interfaces/ad-api/list/api/perception/objects/#description","title":"Description","text":"

Get the array of recognized objects, with label, shape, current position, and predicted path. For details, see the perception.
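A small sketch of consuming this stream is shown below; it uses only the fields listed in the message table that follows.

```python
# Sketch: count recognized objects and log the most confident one.
import rclpy
from rclpy.node import Node
from autoware_adapi_v1_msgs.msg import DynamicObjectArray


class ObjectMonitor(Node):
    def __init__(self):
        super().__init__('object_monitor')
        self.create_subscription(
            DynamicObjectArray, '/api/perception/objects', self.on_objects, 10)

    def on_objects(self, msg: DynamicObjectArray):
        if not msg.objects:
            return
        best = max(msg.objects, key=lambda o: o.existence_probability)
        self.get_logger().info(
            f'{len(msg.objects)} objects, highest existence probability '
            f'{best.existence_probability:.2f}')


def main():
    rclpy.init()
    rclpy.spin(ObjectMonitor())


if __name__ == '__main__':
    main()
```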

    "},{"location":"design/autoware-interfaces/ad-api/list/api/perception/objects/#message","title":"Message","text":"Name Type Description objects.id unique_identifier_msgs/msg/UUID The UUID of each object objects.existence_probability float64 The probability of the object exits objects.classification autoware_adapi_v1_msgs/msg/ObjectClassification[] The type of the object recognized and the confidence level objects.kinematics autoware_adapi_v1_msgs/msg/DynamicObjectKinematics Consist of the object pose, twist, acceleration and the predicted_paths objects.shape shape_msgs/msg/SolidPrimitive escribe the shape of the object with dimension, and polygon"},{"location":"design/autoware-interfaces/ad-api/list/api/planning/steering_factors/","title":"/api/planning/steering_factors","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/planning/steering_factors/#apiplanningsteering_factors","title":"/api/planning/steering_factors","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/planning/steering_factors/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: realtime stream
    • Data Type: autoware_adapi_v1_msgs/msg/SteeringFactorArray
    "},{"location":"design/autoware-interfaces/ad-api/list/api/planning/steering_factors/#description","title":"Description","text":"

    Get the steering factors, sorted in ascending order of distance. For details, see the planning factors.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/planning/steering_factors/#message","title":"Message","text":"Name Type Description factors.pose geometry_msgs/msg/Pose[2] The base link pose related to the steering factor. factors.distance float32[2] The distance from the base link to the above pose. factors.direction uint16 The direction of the steering factor. factors.status uint16 The status of the steering factor. factors.behavior string The behavior type of the steering factor. factors.sequence string The sequence type of the steering factor. factors.detail string The additional information of the steering factor. factors.cooperation autoware_adapi_v1_msgs/msg/CooperationStatus[<=1] The cooperation status if the module supports."},{"location":"design/autoware-interfaces/ad-api/list/api/planning/velocity_factors/","title":"/api/planning/velocity_factors","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/planning/velocity_factors/#apiplanningvelocity_factors","title":"/api/planning/velocity_factors","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/planning/velocity_factors/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: realtime stream
    • Data Type: autoware_adapi_v1_msgs/msg/VelocityFactorArray
    "},{"location":"design/autoware-interfaces/ad-api/list/api/planning/velocity_factors/#description","title":"Description","text":"

    Get the velocity factors, sorted in ascending order of distance. For details, see the planning factors.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/planning/velocity_factors/#message","title":"Message","text":"Name Type Description factors.pose geometry_msgs/msg/Pose The base link pose related to the velocity factor. factors.distance float32 The distance from the base link to the above pose. factors.status uint16 The status of the velocity factor. factors.behavior string The behavior type of the velocity factor. factors.sequence string The sequence type of the velocity factor. factors.detail string The additional information of the velocity factor. factors.cooperation autoware_adapi_v1_msgs/msg/CooperationStatus[<=1] The cooperation status if the module supports."},{"location":"design/autoware-interfaces/ad-api/list/api/planning/cooperation/get_policies/","title":"/api/planning/cooperation/get_policies","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/planning/cooperation/get_policies/#apiplanningcooperationget_policies","title":"/api/planning/cooperation/get_policies","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/planning/cooperation/get_policies/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/GetCooperationPolicies
    "},{"location":"design/autoware-interfaces/ad-api/list/api/planning/cooperation/get_policies/#description","title":"Description","text":"

Get the default decision that is used in place of the operator's decision when it is undecided. For details, see the cooperation.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/planning/cooperation/get_policies/#request","title":"Request","text":"

    None

    "},{"location":"design/autoware-interfaces/ad-api/list/api/planning/cooperation/get_policies/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status policies.behavior string The type of the target behavior. policies.sequence string The type of the target sequence. policies.policy uint8 The type of the cooperation policy."},{"location":"design/autoware-interfaces/ad-api/list/api/planning/cooperation/set_commands/","title":"/api/planning/cooperation/set_commands","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/planning/cooperation/set_commands/#apiplanningcooperationset_commands","title":"/api/planning/cooperation/set_commands","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/planning/cooperation/set_commands/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/SetCooperationCommands
    "},{"location":"design/autoware-interfaces/ad-api/list/api/planning/cooperation/set_commands/#description","title":"Description","text":"

    Set the operator's decision for cooperation. For details, see the cooperation.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/planning/cooperation/set_commands/#request","title":"Request","text":"Name Type Description commands.uuid unique_identifier_msgs/msg/UUID The ID in the cooperation status. commands.cooperator autoware_adapi_v1_msgs/msg/CooperationDecision The operator's decision."},{"location":"design/autoware-interfaces/ad-api/list/api/planning/cooperation/set_commands/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status"},{"location":"design/autoware-interfaces/ad-api/list/api/planning/cooperation/set_policies/","title":"/api/planning/cooperation/set_policies","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/planning/cooperation/set_policies/#apiplanningcooperationset_policies","title":"/api/planning/cooperation/set_policies","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/planning/cooperation/set_policies/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/SetCooperationPolicies
    "},{"location":"design/autoware-interfaces/ad-api/list/api/planning/cooperation/set_policies/#description","title":"Description","text":"

Set the default decision that is used in place of the operator's decision when it is undecided. For details, see the cooperation.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/planning/cooperation/set_policies/#request","title":"Request","text":"Name Type Description policies.behavior string The type of the target behavior. policies.sequence string The type of the target sequence. policies.policy uint8 The type of the cooperation policy."},{"location":"design/autoware-interfaces/ad-api/list/api/planning/cooperation/set_policies/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status"},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/acceleration/","title":"/api/remote/command/acceleration","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/acceleration/#apiremotecommandacceleration","title":"/api/remote/command/acceleration","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/acceleration/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: realtime stream
    • Data Type: autoware_adapi_v1_msgs/msg/AccelerationCommand
    "},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/acceleration/#description","title":"Description","text":"

    Sends acceleration command used in remote operation mode. To use this API, select the corresponding mode as described in manual control.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/acceleration/#message","title":"Message","text":"Name Type Description stamp builtin_interfaces/msg/Time Timestamp when this message was sent. acceleration float32 Target acceleration [m/s^2]."},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/gear/","title":"/api/remote/command/gear","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/gear/#apiremotecommandgear","title":"/api/remote/command/gear","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/gear/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: notification
    • Data Type: autoware_adapi_v1_msgs/msg/GearCommand
    "},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/gear/#description","title":"Description","text":"

    Sends gear command used in remote operation mode. To use this API, select the corresponding mode as described in manual control.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/gear/#message","title":"Message","text":"Name Type Description stamp builtin_interfaces/msg/Time Timestamp when this message was sent. command autoware_adapi_v1_msgs/msg/Gear Target gear status."},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/hazard_lights/","title":"/api/remote/command/hazard_lights","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/hazard_lights/#apiremotecommandhazard_lights","title":"/api/remote/command/hazard_lights","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/hazard_lights/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: notification
    • Data Type: autoware_adapi_v1_msgs/msg/HazardLightsCommand
    "},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/hazard_lights/#description","title":"Description","text":"

    Sends hazard lights command used in remote operation mode. To use this API, select the corresponding mode as described in manual control.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/hazard_lights/#message","title":"Message","text":"Name Type Description stamp builtin_interfaces/msg/Time Timestamp when this message was sent. command autoware_adapi_v1_msgs/msg/HazardLights Target hazard lights status."},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/pedal/","title":"/api/remote/command/pedal","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/pedal/#apiremotecommandpedal","title":"/api/remote/command/pedal","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/pedal/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: realtime stream
    • Data Type: autoware_adapi_v1_msgs/msg/PedalCommand
    "},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/pedal/#description","title":"Description","text":"

Sends the pedal command used in remote operation mode. The pedal value is a ratio, where 1.0 is maximum pedal depression. To use this API, select the corresponding mode as described in manual control.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/pedal/#message","title":"Message","text":"Name Type Description stamp builtin_interfaces/msg/Time Timestamp when this message was sent. accelerator float32 Target accelerator pedal ratio. brake float32 Target brake pedal ratio."},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/steering/","title":"/api/remote/command/steering","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/steering/#apiremotecommandsteering","title":"/api/remote/command/steering","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/steering/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: realtime stream
    • Data Type: autoware_adapi_v1_msgs/msg/SteeringCommand
    "},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/steering/#description","title":"Description","text":"

Sends the steering command used in remote operation mode. To use this API, select the corresponding mode as described in manual control.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/steering/#message","title":"Message","text":"Name Type Description stamp builtin_interfaces/msg/Time Timestamp when this message was sent. steering_tire_angle float32 Target steering tire angle [rad]."},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/turn_indicators/","title":"/api/remote/command/turn_indicators","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/turn_indicators/#apiremotecommandturn_indicators","title":"/api/remote/command/turn_indicators","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/turn_indicators/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: notification
    • Data Type: autoware_adapi_v1_msgs/msg/TurnIndicatorsCommand
    "},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/turn_indicators/#description","title":"Description","text":"

    Sends turn indicators command used in remote operation mode. To use this API, select the corresponding mode as described in manual control.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/turn_indicators/#message","title":"Message","text":"Name Type Description stamp builtin_interfaces/msg/Time Timestamp when this message was sent. command autoware_adapi_v1_msgs/msg/TurnIndicators Target turn indicators status."},{"location":"design/autoware-interfaces/ad-api/list/api/remote/control_mode/list/","title":"/api/remote/control_mode/list","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/remote/control_mode/list/#apiremotecontrol_modelist","title":"/api/remote/control_mode/list","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/remote/control_mode/list/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/ListManualControlMode
    "},{"location":"design/autoware-interfaces/ad-api/list/api/remote/control_mode/list/#description","title":"Description","text":"

    List the available manual control modes as described in manual control. The disabled mode is not included in the available modes.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/remote/control_mode/list/#request","title":"Request","text":"

    None

    "},{"location":"design/autoware-interfaces/ad-api/list/api/remote/control_mode/list/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status modes autoware_adapi_v1_msgs/msg/ManualControlMode[] List of available modes."},{"location":"design/autoware-interfaces/ad-api/list/api/remote/control_mode/select/","title":"/api/remote/control_mode/select","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/remote/control_mode/select/#apiremotecontrol_modeselect","title":"/api/remote/control_mode/select","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/remote/control_mode/select/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/SelectManualControlMode
    "},{"location":"design/autoware-interfaces/ad-api/list/api/remote/control_mode/select/#description","title":"Description","text":"

Selects the manual control mode as described in manual control. This API fails while the operation mode is remote.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/remote/control_mode/select/#request","title":"Request","text":"Name Type Description mode autoware_adapi_v1_msgs/msg/ManualControlMode The manual control mode to be used."},{"location":"design/autoware-interfaces/ad-api/list/api/remote/control_mode/select/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status"},{"location":"design/autoware-interfaces/ad-api/list/api/remote/control_mode/status/","title":"/api/remote/control_mode/status","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/remote/control_mode/status/#apiremotecontrol_modestatus","title":"/api/remote/control_mode/status","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/remote/control_mode/status/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: notification
    • Data Type: autoware_adapi_v1_msgs/msg/ManualControlModeStatus
    "},{"location":"design/autoware-interfaces/ad-api/list/api/remote/control_mode/status/#description","title":"Description","text":"

    Get the current manual operation mode.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/remote/control_mode/status/#message","title":"Message","text":"Name Type Description stamp builtin_interfaces/msg/Time Timestamp when this message was sent."},{"location":"design/autoware-interfaces/ad-api/list/api/remote/operator/status/","title":"/api/remote/operator/status","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/remote/operator/status/#apiremoteoperatorstatus","title":"/api/remote/operator/status","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/remote/operator/status/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: realtime stream
    • Data Type: autoware_adapi_v1_msgs/msg/ManualOperatorStatus
    "},{"location":"design/autoware-interfaces/ad-api/list/api/remote/operator/status/#description","title":"Description","text":"

    The application needs to determine whether the operator is able to drive and send that information via this API. For details, see the manual control.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/remote/operator/status/#message","title":"Message","text":"Name Type Description stamp builtin_interfaces/msg/Time Timestamp when this message was sent. ready bool Whether the operator is able to continue driving."},{"location":"design/autoware-interfaces/ad-api/list/api/routing/change_route/","title":"/api/routing/change_route","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/routing/change_route/#apiroutingchange_route","title":"/api/routing/change_route","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/routing/change_route/#status","title":"Status","text":"
    • Latest Version: v1.5.0
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/SetRoute
    "},{"location":"design/autoware-interfaces/ad-api/list/api/routing/change_route/#description","title":"Description","text":"

    Same as /api/routing/set_route, but changes the route while driving. This API only accepts the route when the route state is SET. In any other state, set the route first or wait for the route change to complete.
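
    One way to respect this precondition is to watch /api/routing/state and only issue the change once the state is SET. A sketch with rclpy, assuming RouteState defines UNKNOWN and SET constants as listed in the routing feature; the node name is illustrative.

import rclpy
from rclpy.node import Node
from autoware_adapi_v1_msgs.msg import RouteState
from autoware_adapi_v1_msgs.srv import SetRoute

class RouteChanger(Node):
    def __init__(self):
        super().__init__('route_changer')  # illustrative node name
        self.route_state = RouteState.UNKNOWN
        # QoS may need to match the AD API notification settings (e.g. transient local).
        self.create_subscription(RouteState, '/api/routing/state', self.on_state, 1)
        self.client = self.create_client(SetRoute, '/api/routing/change_route')

    def on_state(self, msg):
        self.route_state = msg.state

    def try_change(self, request):
        # Only send the change while the current route state is SET.
        if self.route_state != RouteState.SET:
            self.get_logger().info('route state is not SET, waiting before changing the route')
            return None
        return self.client.call_async(request)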

    "},{"location":"design/autoware-interfaces/ad-api/list/api/routing/change_route/#request","title":"Request","text":"Name Type Description header std_msgs/msg/Header header for pose transformation goal geometry_msgs/msg/Pose goal pose segments autoware_adapi_v1_msgs/msg/RouteSegment[] waypoint segments in lanelet format"},{"location":"design/autoware-interfaces/ad-api/list/api/routing/change_route/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status"},{"location":"design/autoware-interfaces/ad-api/list/api/routing/change_route_points/","title":"/api/routing/change_route_points","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/routing/change_route_points/#apiroutingchange_route_points","title":"/api/routing/change_route_points","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/routing/change_route_points/#status","title":"Status","text":"
    • Latest Version: v1.5.0
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/SetRoutePoints
    "},{"location":"design/autoware-interfaces/ad-api/list/api/routing/change_route_points/#description","title":"Description","text":"

    Same as /api/routing/set_route_points, but changes the route while driving. This API only accepts the route when the route state is SET. In any other state, set the route first or wait for the route change to complete.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/routing/change_route_points/#request","title":"Request","text":"Name Type Description header std_msgs/msg/Header header for pose transformation goal geometry_msgs/msg/Pose goal pose waypoints geometry_msgs/msg/Pose[] waypoint poses"},{"location":"design/autoware-interfaces/ad-api/list/api/routing/change_route_points/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status"},{"location":"design/autoware-interfaces/ad-api/list/api/routing/clear_route/","title":"/api/routing/clear_route","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/routing/clear_route/#apiroutingclear_route","title":"/api/routing/clear_route","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/routing/clear_route/#status","title":"Status","text":"
    • Latest Version: v1.0.0
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/ClearRoute
    "},{"location":"design/autoware-interfaces/ad-api/list/api/routing/clear_route/#description","title":"Description","text":"

    Clear the route. This API fails when the vehicle is using the route.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/routing/clear_route/#request","title":"Request","text":"

    None

    "},{"location":"design/autoware-interfaces/ad-api/list/api/routing/clear_route/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status"},{"location":"design/autoware-interfaces/ad-api/list/api/routing/route/","title":"/api/routing/route","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/routing/route/#apiroutingroute","title":"/api/routing/route","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/routing/route/#status","title":"Status","text":"
    • Latest Version: v1.0.0
    • Method: notification
    • Data Type: autoware_adapi_v1_msgs/msg/Route
    "},{"location":"design/autoware-interfaces/ad-api/list/api/routing/route/#description","title":"Description","text":"

    Get the route with the waypoint segments in lanelet format. It is empty if the route is not set.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/routing/route/#message","title":"Message","text":"Name Type Description header std_msgs/msg/Header header for pose transformation data autoware_adapi_v1_msgs/msg/RouteData[<=1] The route in lanelet format"},{"location":"design/autoware-interfaces/ad-api/list/api/routing/set_route/","title":"/api/routing/set_route","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/routing/set_route/#apiroutingset_route","title":"/api/routing/set_route","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/routing/set_route/#status","title":"Status","text":"
    • Latest Version: v1.0.0
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/SetRoute
    "},{"location":"design/autoware-interfaces/ad-api/list/api/routing/set_route/#description","title":"Description","text":"

    Set the route with the waypoint segments in lanelet format. If the start pose is not specified, the current pose will be used. This API only accepts the route when the route state is UNSET. In any other state, clear the route first.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/routing/set_route/#request","title":"Request","text":"Name Type Description header std_msgs/msg/Header header for pose transformation goal geometry_msgs/msg/Pose goal pose segments autoware_adapi_v1_msgs/msg/RouteSegment[] waypoint segments in lanelet format"},{"location":"design/autoware-interfaces/ad-api/list/api/routing/set_route/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status"},{"location":"design/autoware-interfaces/ad-api/list/api/routing/set_route_points/","title":"/api/routing/set_route_points","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/routing/set_route_points/#apiroutingset_route_points","title":"/api/routing/set_route_points","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/routing/set_route_points/#status","title":"Status","text":"
    • Latest Version: v1.0.0
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/SetRoutePoints
    "},{"location":"design/autoware-interfaces/ad-api/list/api/routing/set_route_points/#description","title":"Description","text":"

    Set the route with the waypoint poses. If the start pose is not specified, the current pose will be used. This API only accepts the route when the route state is UNSET. In any other state, clear the route first.
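
    A sketch of setting a route from the current pose to a goal with rclpy; the goal coordinates, frame, and node name are placeholders, not values defined by the API.

import rclpy
from autoware_adapi_v1_msgs.srv import SetRoutePoints

rclpy.init()
node = rclpy.create_node('route_setter')  # illustrative node name
client = node.create_client(SetRoutePoints, '/api/routing/set_route_points')
client.wait_for_service(timeout_sec=5.0)

request = SetRoutePoints.Request()
request.header.frame_id = 'map'  # frame used for pose transformation
request.header.stamp = node.get_clock().now().to_msg()
request.goal.position.x = 100.0  # placeholder goal in the map frame
request.goal.position.y = 50.0
request.goal.orientation.w = 1.0
request.waypoints = []           # optional intermediate poses

future = client.call_async(request)
rclpy.spin_until_future_complete(node, future)
print(future.result().status)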

    "},{"location":"design/autoware-interfaces/ad-api/list/api/routing/set_route_points/#request","title":"Request","text":"Name Type Description header std_msgs/msg/Header header for pose transformation goal geometry_msgs/msg/Pose goal pose waypoints geometry_msgs/msg/Pose[] waypoint poses"},{"location":"design/autoware-interfaces/ad-api/list/api/routing/set_route_points/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status"},{"location":"design/autoware-interfaces/ad-api/list/api/routing/state/","title":"/api/routing/state","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/routing/state/#apiroutingstate","title":"/api/routing/state","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/routing/state/#status","title":"Status","text":"
    • Latest Version: v1.0.0
    • Method: notification
    • Data Type: autoware_adapi_v1_msgs/msg/RouteState
    "},{"location":"design/autoware-interfaces/ad-api/list/api/routing/state/#description","title":"Description","text":"

    Get the route state. For details, see routing.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/routing/state/#message","title":"Message","text":"Name Type Description state uint16 A value of the route state."},{"location":"design/autoware-interfaces/ad-api/list/api/system/heartbeat/","title":"/api/system/heartbeat","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/system/heartbeat/#apisystemheartbeat","title":"/api/system/heartbeat","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/system/heartbeat/#status","title":"Status","text":"
    • Latest Version: v1.3.0
    • Method: realtime stream
    • Data Type: autoware_adapi_v1_msgs/msg/Heartbeat
    "},{"location":"design/autoware-interfaces/ad-api/list/api/system/heartbeat/#description","title":"Description","text":"

    The heartbeat frequency is 10 Hz.
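
    A sketch of a monitor that uses the stamp for delay checking and the wrapping sequence number to detect missed messages; the node name and QoS depth are illustrative.

import rclpy
from rclpy.node import Node
from rclpy.time import Time
from autoware_adapi_v1_msgs.msg import Heartbeat

class HeartbeatMonitor(Node):
    def __init__(self):
        super().__init__('heartbeat_monitor')  # illustrative node name
        self.last_seq = None
        self.create_subscription(Heartbeat, '/api/system/heartbeat', self.on_heartbeat, 1)

    def on_heartbeat(self, msg):
        delay = (self.get_clock().now() - Time.from_msg(msg.stamp)).nanoseconds / 1e9
        if self.last_seq is not None and msg.seq != (self.last_seq + 1) % 65536:
            self.get_logger().warn(f'heartbeat sequence gap: {self.last_seq} -> {msg.seq}')
        self.last_seq = msg.seq
        self.get_logger().debug(f'heartbeat delay: {delay:.3f} s')

rclpy.init()
rclpy.spin(HeartbeatMonitor())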

    "},{"location":"design/autoware-interfaces/ad-api/list/api/system/heartbeat/#message","title":"Message","text":"Name Type Description stamp builtin_interfaces/msg/Time Timestamp in Autoware for delay checking. seq uint16 Sequence number for order verification, wraps at 65535."},{"location":"design/autoware-interfaces/ad-api/list/api/system/diagnostics/status/","title":"/api/system/diagnostics/status","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/system/diagnostics/status/#apisystemdiagnosticsstatus","title":"/api/system/diagnostics/status","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/system/diagnostics/status/#status","title":"Status","text":"
    • Latest Version: v1.3.0
    • Method: realtime stream
    • Data Type: autoware_adapi_v1_msgs/msg/DiagGraphStatus
    "},{"location":"design/autoware-interfaces/ad-api/list/api/system/diagnostics/status/#description","title":"Description","text":"

    This is the dynamic part of the diagnostics. If static data is published with a new ID, ignore dynamic data with the old ID. See diagnostics for details.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/system/diagnostics/status/#message","title":"Message","text":"Name Type Description stamp builtin_interfaces/msg/Time Timestamp when this message was sent id string ID to check correspondence between struct and status. nodes autoware_adapi_v1_msgs/msg/DiagNodeStatus[] Dynamic data for nodes in diagnostic graph."},{"location":"design/autoware-interfaces/ad-api/list/api/system/diagnostics/struct/","title":"/api/system/diagnostics/struct","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/system/diagnostics/struct/#apisystemdiagnosticsstruct","title":"/api/system/diagnostics/struct","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/system/diagnostics/struct/#status","title":"Status","text":"
    • Latest Version: v1.3.0
    • Method: notification
    • Data Type: autoware_adapi_v1_msgs/msg/DiagGraphStruct
    "},{"location":"design/autoware-interfaces/ad-api/list/api/system/diagnostics/struct/#description","title":"Description","text":"

    This is the static part of the diagnostics. If static data is published with a new ID, ignore dynamic data with the old ID. See diagnostics for details.
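
    A sketch of the ID matching rule described above for the struct and status topics, using rclpy: keep the latest struct and discard status messages whose ID does not match. The node name is illustrative, and the subscription QoS may need to match the AD API notification settings (e.g. transient local for the struct).

import rclpy
from rclpy.node import Node
from autoware_adapi_v1_msgs.msg import DiagGraphStruct, DiagGraphStatus

class DiagGraphViewer(Node):
    def __init__(self):
        super().__init__('diag_graph_viewer')  # illustrative node name
        self.struct = None
        self.create_subscription(DiagGraphStruct, '/api/system/diagnostics/struct', self.on_struct, 1)
        self.create_subscription(DiagGraphStatus, '/api/system/diagnostics/status', self.on_status, 1)

    def on_struct(self, msg):
        self.struct = msg  # keep the latest static part

    def on_status(self, msg):
        if self.struct is None or msg.id != self.struct.id:
            return  # ignore dynamic data whose ID does not match the latest struct
        # Nodes in struct and status correspond by index.
        for node_struct, node_status in zip(self.struct.nodes, msg.nodes):
            self.get_logger().info(f'{node_struct.path}: level={node_status.level}')

rclpy.init()
rclpy.spin(DiagGraphViewer())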

    "},{"location":"design/autoware-interfaces/ad-api/list/api/system/diagnostics/struct/#message","title":"Message","text":"Name Type Description stamp builtin_interfaces/msg/Time Timestamp when this message was sent. id string ID to check correspondence between struct and status. nodes autoware_adapi_v1_msgs/msg/DiagNodeStruct[] Static data for nodes in diagnostic graph. links autoware_adapi_v1_msgs/msg/DiagLinkStruct[] Static data for links in diagnostic graph."},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/dimensions/","title":"/api/vehicle/dimensions","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/dimensions/#apivehicledimensions","title":"/api/vehicle/dimensions","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/dimensions/#status","title":"Status","text":"
    • Latest Version: v1.1.0
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/GetVehicleDimensions
    "},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/dimensions/#description","title":"Description","text":"

    Get the vehicle dimensions. See here for the definition of each value.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/dimensions/#request","title":"Request","text":"

    None

    "},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/dimensions/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status dimensions autoware_adapi_v1_msgs/msg/VehicleDimensions vehicle dimensions"},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/kinematics/","title":"/api/vehicle/kinematics","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/kinematics/#apivehiclekinematics","title":"/api/vehicle/kinematics","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/kinematics/#status","title":"Status","text":"
    • Latest Version: v1.1.0
    • Method: realtime stream
    • Data Type: autoware_adapi_v1_msgs/msg/VehicleKinematics
    "},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/kinematics/#description","title":"Description","text":"

    Publish vehicle kinematics.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/kinematics/#message","title":"Message","text":"Name Type Description geographic_pose geographic_msgs/msg/GeoPointStamped The longitude and latitude of the vehicle. If the map uses local coordinates, it will not be available. pose geometry_msgs/msg/PoseWithCovarianceStamped The pose with covariance from the base link. twist geometry_msgs/msg/TwistWithCovarianceStamped Vehicle current twist with covariance. accel geometry_msgs/msg/AccelWithCovarianceStamped Vehicle current acceleration with covariance."},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/status/","title":"/api/vehicle/status","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/status/#apivehiclestatus","title":"/api/vehicle/status","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/status/#status","title":"Status","text":"
    • Latest Version: v1.4.0
    • Method: realtime stream
    • Data Type: autoware_adapi_v1_msgs/msg/VehicleStatus
    "},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/status/#description","title":"Description","text":"

    Publish vehicle state information.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/status/#message","title":"Message","text":"Name Type Description gear autoware_adapi_v1_msgs/msg/Gear Gear status. turn_indicators autoware_adapi_v1_msgs/msg/TurnIndicators Turn indicators status, only either left or right will be enabled. hazard_lights autoware_adapi_v1_msgs/msg/HazardLights Hazard lights status. steering_tire_angle float64 Vehicle current tire angle in radian."},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/doors/command/","title":"/api/vehicle/doors/command","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/doors/command/#apivehicledoorscommand","title":"/api/vehicle/doors/command","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/doors/command/#status","title":"Status","text":"
    • Latest Version: v1.2.0
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/SetDoorCommand
    "},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/doors/command/#description","title":"Description","text":"

    Set the door command. This API is only available if the vehicle supports software door control. This API fails if the doors cannot be opened or closed safely.
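
    A sketch of sending a door command with rclpy; the door index and node name are placeholders, and the OPEN constant comes from the DoorCommand type definition on this site.

import rclpy
from autoware_adapi_v1_msgs.msg import DoorCommand
from autoware_adapi_v1_msgs.srv import SetDoorCommand

rclpy.init()
node = rclpy.create_node('door_commander')  # illustrative node name
client = node.create_client(SetDoorCommand, '/api/vehicle/doors/command')
client.wait_for_service(timeout_sec=5.0)

request = SetDoorCommand.Request()
cmd = DoorCommand()
cmd.index = 0               # placeholder index; check /api/vehicle/doors/layout for the actual doors
cmd.command = DoorCommand.OPEN
request.doors = [cmd]       # the request takes an array of per-door commands
future = client.call_async(request)
rclpy.spin_until_future_complete(node, future)
response = future.result()
if not response.status.success:
    node.get_logger().warn(f'door command rejected: {response.status.message}')
rclpy.shutdown()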

    "},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/doors/command/#request","title":"Request","text":"Name Type Description doors.index uint32 The index of the target door. doors.command uint8 The command for the target door."},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/doors/command/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status"},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/doors/layout/","title":"/api/vehicle/doors/layout","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/doors/layout/#apivehicledoorslayout","title":"/api/vehicle/doors/layout","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/doors/layout/#status","title":"Status","text":"
    • Latest Version: v1.2.0
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/GetDoorLayout
    "},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/doors/layout/#description","title":"Description","text":"

    Get the door layout. It is an array of roles and descriptions for each door.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/doors/layout/#request","title":"Request","text":"

    None

    "},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/doors/layout/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status doors.roles uint8[] The roles of the door in the service the vehicle provides. doors.description string The description of the door for display in the interface."},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/doors/status/","title":"/api/vehicle/doors/status","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/doors/status/#apivehicledoorsstatus","title":"/api/vehicle/doors/status","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/doors/status/#status","title":"Status","text":"
    • Latest Version: v1.2.0
    • Method: notification
    • Data Type: autoware_adapi_v1_msgs/msg/DoorStatusArray
    "},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/doors/status/#description","title":"Description","text":"

    The status of each door, such as opened or closed.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/doors/status/#message","title":"Message","text":"Name Type Description doors.status uint8 current door status"},{"location":"design/autoware-interfaces/ad-api/stories/bus-service/","title":"User story of bus service","text":""},{"location":"design/autoware-interfaces/ad-api/stories/bus-service/#user-story-of-bus-service","title":"User story of bus service","text":""},{"location":"design/autoware-interfaces/ad-api/stories/bus-service/#overview","title":"Overview","text":"

    This user story is a bus service that goes around the designated stops.

    "},{"location":"design/autoware-interfaces/ad-api/stories/bus-service/#scenario","title":"Scenario","text":"Step Operation Use Case 1 Startup the autonomous driving system. Launch and terminate 2 Drive the vehicle from the garage to the waiting position. Change the operation mode 3 Enable autonomous control. Change the operation mode 4 Drive the vehicle to the next bus stop. Drive to the designated position 5 Get on and off the vehicle. Get on and get off 6 Return to step 4 unless it's the last bus stop. 7 Drive the vehicle to the waiting position. Drive to the designated position 8 Drive the vehicle from the waiting position to the garage. Change the operation mode 9 Shutdown the autonomous driving system. Launch and terminate"},{"location":"design/autoware-interfaces/ad-api/stories/taxi-service/","title":"User story of bus service","text":""},{"location":"design/autoware-interfaces/ad-api/stories/taxi-service/#user-story-of-bus-service","title":"User story of bus service","text":""},{"location":"design/autoware-interfaces/ad-api/stories/taxi-service/#overview","title":"Overview","text":"

    This user story is a taxi service that picks up passengers and drives them to their destination.

    "},{"location":"design/autoware-interfaces/ad-api/stories/taxi-service/#scenario","title":"Scenario","text":"Step Operation Use Case 1 Startup the autonomous driving system. Launch and terminate 2 Drive the vehicle from the garage to the waiting position. Change the operation mode 3 Enable autonomous control. Change the operation mode 4 Drive the vehicle to the position to pick up. Drive to the designated position 5 Get on the vehicle. Get on and get off 6 Drive the vehicle to the destination. Drive to the designated position 7 Get off the vehicle. Get on and get off 8 Drive the vehicle to the waiting position. Drive to the designated position 9 Return to step 4 if there is another request. 10 Drive the vehicle from the waiting position to the garage. Change the operation mode 11 Shutdown the autonomous driving system. Launch and terminate"},{"location":"design/autoware-interfaces/ad-api/types/","title":"Types of Autoware AD API","text":""},{"location":"design/autoware-interfaces/ad-api/types/#types-of-autoware-ad-api","title":"Types of Autoware AD API","text":"
    • autoware_adapi_v1_msgs/msg/AccelerationCommand
    • autoware_adapi_v1_msgs/msg/CooperationCommand
    • autoware_adapi_v1_msgs/msg/CooperationDecision
    • autoware_adapi_v1_msgs/msg/CooperationPolicy
    • autoware_adapi_v1_msgs/msg/CooperationStatus
    • autoware_adapi_v1_msgs/msg/DiagGraphStatus
    • autoware_adapi_v1_msgs/msg/DiagGraphStruct
    • autoware_adapi_v1_msgs/msg/DiagLinkStruct
    • autoware_adapi_v1_msgs/msg/DiagNodeStatus
    • autoware_adapi_v1_msgs/msg/DiagNodeStruct
    • autoware_adapi_v1_msgs/msg/DoorCommand
    • autoware_adapi_v1_msgs/msg/DoorLayout
    • autoware_adapi_v1_msgs/msg/DoorStatus
    • autoware_adapi_v1_msgs/msg/DoorStatusArray
    • autoware_adapi_v1_msgs/msg/DynamicObject
    • autoware_adapi_v1_msgs/msg/DynamicObjectArray
    • autoware_adapi_v1_msgs/msg/DynamicObjectKinematics
    • autoware_adapi_v1_msgs/msg/DynamicObjectPath
    • autoware_adapi_v1_msgs/msg/Gear
    • autoware_adapi_v1_msgs/msg/GearCommand
    • autoware_adapi_v1_msgs/msg/HazardLights
    • autoware_adapi_v1_msgs/msg/HazardLightsCommand
    • autoware_adapi_v1_msgs/msg/Heartbeat
    • autoware_adapi_v1_msgs/msg/LocalizationInitializationState
    • autoware_adapi_v1_msgs/msg/ManualControlMode
    • autoware_adapi_v1_msgs/msg/ManualControlModeStatus
    • autoware_adapi_v1_msgs/msg/ManualOperatorStatus
    • autoware_adapi_v1_msgs/msg/MotionState
    • autoware_adapi_v1_msgs/msg/MrmState
    • autoware_adapi_v1_msgs/msg/ObjectClassification
    • autoware_adapi_v1_msgs/msg/OperationModeState
    • autoware_adapi_v1_msgs/msg/PedalCommand
    • autoware_adapi_v1_msgs/msg/ResponseStatus
    • autoware_adapi_v1_msgs/msg/Route
    • autoware_adapi_v1_msgs/msg/RouteData
    • autoware_adapi_v1_msgs/msg/RouteOption
    • autoware_adapi_v1_msgs/msg/RoutePrimitive
    • autoware_adapi_v1_msgs/msg/RouteSegment
    • autoware_adapi_v1_msgs/msg/RouteState
    • autoware_adapi_v1_msgs/msg/SteeringCommand
    • autoware_adapi_v1_msgs/msg/SteeringFactor
    • autoware_adapi_v1_msgs/msg/SteeringFactorArray
    • autoware_adapi_v1_msgs/msg/TurnIndicators
    • autoware_adapi_v1_msgs/msg/TurnIndicatorsCommand
    • autoware_adapi_v1_msgs/msg/VehicleDimensions
    • autoware_adapi_v1_msgs/msg/VehicleKinematics
    • autoware_adapi_v1_msgs/msg/VehicleStatus
    • autoware_adapi_v1_msgs/msg/VelocityFactor
    • autoware_adapi_v1_msgs/msg/VelocityFactorArray
    • autoware_adapi_v1_msgs/srv/AcceptStart
    • autoware_adapi_v1_msgs/srv/ChangeOperationMode
    • autoware_adapi_v1_msgs/srv/ClearRoute
    • autoware_adapi_v1_msgs/srv/GetCooperationPolicies
    • autoware_adapi_v1_msgs/srv/GetDoorLayout
    • autoware_adapi_v1_msgs/srv/GetVehicleDimensions
    • autoware_adapi_v1_msgs/srv/InitializeLocalization
    • autoware_adapi_v1_msgs/srv/ListManualControlMode
    • autoware_adapi_v1_msgs/srv/SelectManualControlMode
    • autoware_adapi_v1_msgs/srv/SetCooperationCommands
    • autoware_adapi_v1_msgs/srv/SetCooperationPolicies
    • autoware_adapi_v1_msgs/srv/SetDoorCommand
    • autoware_adapi_v1_msgs/srv/SetRoute
    • autoware_adapi_v1_msgs/srv/SetRoutePoints
    • autoware_adapi_version_msgs/srv/InterfaceVersion
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/AccelerationCommand/","title":"autoware_adapi_v1_msgs/msg/AccelerationCommand","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/AccelerationCommand/#autoware_adapi_v1_msgsmsgaccelerationcommand","title":"autoware_adapi_v1_msgs/msg/AccelerationCommand","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/AccelerationCommand/#definition","title":"Definition","text":"
    builtin_interfaces/Time stamp\nfloat32 acceleration\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/AccelerationCommand/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/AccelerationCommand/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/CooperationCommand/","title":"autoware_adapi_v1_msgs/msg/CooperationCommand","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/CooperationCommand/#autoware_adapi_v1_msgsmsgcooperationcommand","title":"autoware_adapi_v1_msgs/msg/CooperationCommand","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/CooperationCommand/#definition","title":"Definition","text":"
    unique_identifier_msgs/UUID uuid\nautoware_adapi_v1_msgs/CooperationDecision cooperator\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/CooperationCommand/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/CooperationDecision
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/CooperationCommand/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/srv/SetCooperationCommands
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/CooperationDecision/","title":"autoware_adapi_v1_msgs/msg/CooperationDecision","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/CooperationDecision/#autoware_adapi_v1_msgsmsgcooperationdecision","title":"autoware_adapi_v1_msgs/msg/CooperationDecision","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/CooperationDecision/#definition","title":"Definition","text":"
    uint8 UNKNOWN = 0\nuint8 DEACTIVATE = 1\nuint8 ACTIVATE = 2\nuint8 AUTONOMOUS = 3\nuint8 UNDECIDED = 4\n\nuint8 decision\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/CooperationDecision/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/CooperationDecision/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/msg/CooperationCommand
    • autoware_adapi_v1_msgs/msg/CooperationStatus
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/CooperationPolicy/","title":"autoware_adapi_v1_msgs/msg/CooperationPolicy","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/CooperationPolicy/#autoware_adapi_v1_msgsmsgcooperationpolicy","title":"autoware_adapi_v1_msgs/msg/CooperationPolicy","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/CooperationPolicy/#definition","title":"Definition","text":"
    uint8 OPTIONAL = 1\nuint8 REQUIRED = 2\n\nstring behavior\nstring sequence\nuint8 policy\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/CooperationPolicy/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/CooperationPolicy/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/srv/GetCooperationPolicies
    • autoware_adapi_v1_msgs/srv/SetCooperationPolicies
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/CooperationStatus/","title":"autoware_adapi_v1_msgs/msg/CooperationStatus","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/CooperationStatus/#autoware_adapi_v1_msgsmsgcooperationstatus","title":"autoware_adapi_v1_msgs/msg/CooperationStatus","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/CooperationStatus/#definition","title":"Definition","text":"
    unique_identifier_msgs/UUID uuid\nautoware_adapi_v1_msgs/CooperationDecision autonomous\nautoware_adapi_v1_msgs/CooperationDecision cooperator\nbool cancellable\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/CooperationStatus/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/CooperationDecision
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/CooperationStatus/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/msg/SteeringFactor
    • autoware_adapi_v1_msgs/msg/VelocityFactor
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagGraphStatus/","title":"autoware_adapi_v1_msgs/msg/DiagGraphStatus","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagGraphStatus/#autoware_adapi_v1_msgsmsgdiaggraphstatus","title":"autoware_adapi_v1_msgs/msg/DiagGraphStatus","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagGraphStatus/#definition","title":"Definition","text":"
    builtin_interfaces/Time stamp\nstring id\nautoware_adapi_v1_msgs/DiagNodeStatus[] nodes\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagGraphStatus/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/DiagNodeStatus
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagGraphStatus/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagGraphStruct/","title":"autoware_adapi_v1_msgs/msg/DiagGraphStruct","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagGraphStruct/#autoware_adapi_v1_msgsmsgdiaggraphstruct","title":"autoware_adapi_v1_msgs/msg/DiagGraphStruct","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagGraphStruct/#definition","title":"Definition","text":"
    builtin_interfaces/Time stamp\nstring id\nautoware_adapi_v1_msgs/DiagNodeStruct[] nodes\nautoware_adapi_v1_msgs/DiagLinkStruct[] links\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagGraphStruct/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/DiagLinkStruct
    • autoware_adapi_v1_msgs/msg/DiagNodeStruct
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagGraphStruct/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagLinkStruct/","title":"autoware_adapi_v1_msgs/msg/DiagLinkStruct","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagLinkStruct/#autoware_adapi_v1_msgsmsgdiaglinkstruct","title":"autoware_adapi_v1_msgs/msg/DiagLinkStruct","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagLinkStruct/#definition","title":"Definition","text":"
    # The index of nodes in the graph struct message.\nuint32 parent\nuint32 child\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagLinkStruct/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagLinkStruct/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/msg/DiagGraphStruct
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagNodeStatus/","title":"autoware_adapi_v1_msgs/msg/DiagNodeStatus","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagNodeStatus/#autoware_adapi_v1_msgsmsgdiagnodestatus","title":"autoware_adapi_v1_msgs/msg/DiagNodeStatus","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagNodeStatus/#definition","title":"Definition","text":"
    # The level of diagnostic_msgs/msg/DiagnosticStatus.\nbyte level\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagNodeStatus/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagNodeStatus/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/msg/DiagGraphStatus
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagNodeStruct/","title":"autoware_adapi_v1_msgs/msg/DiagNodeStruct","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagNodeStruct/#autoware_adapi_v1_msgsmsgdiagnodestruct","title":"autoware_adapi_v1_msgs/msg/DiagNodeStruct","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagNodeStruct/#definition","title":"Definition","text":"
    string path\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagNodeStruct/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagNodeStruct/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/msg/DiagGraphStruct
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DoorCommand/","title":"autoware_adapi_v1_msgs/msg/DoorCommand","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DoorCommand/#autoware_adapi_v1_msgsmsgdoorcommand","title":"autoware_adapi_v1_msgs/msg/DoorCommand","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DoorCommand/#definition","title":"Definition","text":"
    uint8 OPEN = 1\nuint8 CLOSE = 2\n\nuint32 index\nuint8 command\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DoorCommand/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DoorCommand/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/srv/SetDoorCommand
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DoorLayout/","title":"autoware_adapi_v1_msgs/msg/DoorLayout","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DoorLayout/#autoware_adapi_v1_msgsmsgdoorlayout","title":"autoware_adapi_v1_msgs/msg/DoorLayout","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DoorLayout/#definition","title":"Definition","text":"
    uint8 GET_ON = 1\nuint8 GET_OFF = 2\n\nuint8[] roles\nstring description\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DoorLayout/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DoorLayout/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/srv/GetDoorLayout
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DoorStatus/","title":"autoware_adapi_v1_msgs/msg/DoorStatus","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DoorStatus/#autoware_adapi_v1_msgsmsgdoorstatus","title":"autoware_adapi_v1_msgs/msg/DoorStatus","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DoorStatus/#definition","title":"Definition","text":"
    uint8 UNKNOWN = 0\nuint8 NOT_AVAILABLE = 1\nuint8 OPENED = 2\nuint8 CLOSED = 3\nuint8 OPENING = 4\nuint8 CLOSING = 5\n\nuint8 status\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DoorStatus/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DoorStatus/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/msg/DoorStatusArray
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DoorStatusArray/","title":"autoware_adapi_v1_msgs/msg/DoorStatusArray","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DoorStatusArray/#autoware_adapi_v1_msgsmsgdoorstatusarray","title":"autoware_adapi_v1_msgs/msg/DoorStatusArray","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DoorStatusArray/#definition","title":"Definition","text":"
    builtin_interfaces/Time stamp\nautoware_adapi_v1_msgs/DoorStatus[] doors\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DoorStatusArray/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/DoorStatus
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DoorStatusArray/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DynamicObject/","title":"autoware_adapi_v1_msgs/msg/DynamicObject","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DynamicObject/#autoware_adapi_v1_msgsmsgdynamicobject","title":"autoware_adapi_v1_msgs/msg/DynamicObject","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DynamicObject/#definition","title":"Definition","text":"
    unique_identifier_msgs/UUID id\nfloat64 existence_probability\nautoware_adapi_v1_msgs/ObjectClassification[] classification\nautoware_adapi_v1_msgs/DynamicObjectKinematics kinematics\nshape_msgs/SolidPrimitive shape\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DynamicObject/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/DynamicObjectKinematics
    • autoware_adapi_v1_msgs/msg/ObjectClassification
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DynamicObject/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/msg/DynamicObjectArray
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DynamicObjectArray/","title":"autoware_adapi_v1_msgs/msg/DynamicObjectArray","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DynamicObjectArray/#autoware_adapi_v1_msgsmsgdynamicobjectarray","title":"autoware_adapi_v1_msgs/msg/DynamicObjectArray","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DynamicObjectArray/#definition","title":"Definition","text":"
    std_msgs/Header header\nautoware_adapi_v1_msgs/DynamicObject[] objects\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DynamicObjectArray/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/DynamicObject
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DynamicObjectArray/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DynamicObjectKinematics/","title":"autoware_adapi_v1_msgs/msg/DynamicObjectKinematics","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DynamicObjectKinematics/#autoware_adapi_v1_msgsmsgdynamicobjectkinematics","title":"autoware_adapi_v1_msgs/msg/DynamicObjectKinematics","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DynamicObjectKinematics/#definition","title":"Definition","text":"
    geometry_msgs/Pose pose\ngeometry_msgs/Twist twist\ngeometry_msgs/Accel accel\n\nautoware_adapi_v1_msgs/DynamicObjectPath[] predicted_paths\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DynamicObjectKinematics/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/DynamicObjectPath
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DynamicObjectKinematics/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/msg/DynamicObject
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DynamicObjectPath/","title":"autoware_adapi_v1_msgs/msg/DynamicObjectPath","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DynamicObjectPath/#autoware_adapi_v1_msgsmsgdynamicobjectpath","title":"autoware_adapi_v1_msgs/msg/DynamicObjectPath","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DynamicObjectPath/#definition","title":"Definition","text":"
    geometry_msgs/Pose[] path\nbuiltin_interfaces/Duration time_step\nfloat64 confidence\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DynamicObjectPath/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DynamicObjectPath/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/msg/DynamicObjectKinematics
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/Gear/","title":"autoware_adapi_v1_msgs/msg/Gear","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/Gear/#autoware_adapi_v1_msgsmsggear","title":"autoware_adapi_v1_msgs/msg/Gear","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/Gear/#definition","title":"Definition","text":"
    # constants\nuint8 UNKNOWN = 0\nuint8 NEUTRAL = 1\nuint8 DRIVE = 2\nuint8 REVERSE = 3\nuint8 PARK = 4\nuint8 LOW = 5\n\nuint8 status\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/Gear/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/Gear/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/msg/GearCommand
    • autoware_adapi_v1_msgs/msg/VehicleStatus
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/GearCommand/","title":"autoware_adapi_v1_msgs/msg/GearCommand","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/GearCommand/#autoware_adapi_v1_msgsmsggearcommand","title":"autoware_adapi_v1_msgs/msg/GearCommand","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/GearCommand/#definition","title":"Definition","text":"
    builtin_interfaces/Time stamp\nautoware_adapi_v1_msgs/Gear command\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/GearCommand/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/Gear
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/GearCommand/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/HazardLights/","title":"autoware_adapi_v1_msgs/msg/HazardLights","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/HazardLights/#autoware_adapi_v1_msgsmsghazardlights","title":"autoware_adapi_v1_msgs/msg/HazardLights","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/HazardLights/#definition","title":"Definition","text":"
    # constants\nuint8 UNKNOWN = 0\nuint8 DISABLE = 1\nuint8 ENABLE = 2\n\nuint8 status\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/HazardLights/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/HazardLights/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/msg/HazardLightsCommand
    • autoware_adapi_v1_msgs/msg/VehicleStatus
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/HazardLightsCommand/","title":"autoware_adapi_v1_msgs/msg/HazardLightsCommand","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/HazardLightsCommand/#autoware_adapi_v1_msgsmsghazardlightscommand","title":"autoware_adapi_v1_msgs/msg/HazardLightsCommand","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/HazardLightsCommand/#definition","title":"Definition","text":"
    builtin_interfaces/Time stamp\nautoware_adapi_v1_msgs/HazardLights command\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/HazardLightsCommand/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/HazardLights
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/HazardLightsCommand/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/Heartbeat/","title":"autoware_adapi_v1_msgs/msg/Heartbeat","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/Heartbeat/#autoware_adapi_v1_msgsmsgheartbeat","title":"autoware_adapi_v1_msgs/msg/Heartbeat","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/Heartbeat/#definition","title":"Definition","text":"
    # Timestamp in Autoware for delay checking.\nbuiltin_interfaces/Time stamp\n\n# Sequence number for order verification, wraps at 65535.\nuint16 seq\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/Heartbeat/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/Heartbeat/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/LocalizationInitializationState/","title":"autoware_adapi_v1_msgs/msg/LocalizationInitializationState","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/LocalizationInitializationState/#autoware_adapi_v1_msgsmsglocalizationinitializationstate","title":"autoware_adapi_v1_msgs/msg/LocalizationInitializationState","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/LocalizationInitializationState/#definition","title":"Definition","text":"
    uint16 UNKNOWN = 0\nuint16 UNINITIALIZED = 1\nuint16 INITIALIZING = 2\nuint16 INITIALIZED = 3\n\nbuiltin_interfaces/Time stamp\nuint16 state\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/LocalizationInitializationState/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/LocalizationInitializationState/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ManualControlMode/","title":"autoware_adapi_v1_msgs/msg/ManualControlMode","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ManualControlMode/#autoware_adapi_v1_msgsmsgmanualcontrolmode","title":"autoware_adapi_v1_msgs/msg/ManualControlMode","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ManualControlMode/#definition","title":"Definition","text":"
    uint8 DISABLED = 1\nuint8 PEDAL = 2\nuint8 ACCELERATION = 3\n\nuint8 mode\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ManualControlMode/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ManualControlMode/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/msg/ManualControlModeStatus
    • autoware_adapi_v1_msgs/srv/ListManualControlMode
    • autoware_adapi_v1_msgs/srv/SelectManualControlMode
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ManualControlModeStatus/","title":"autoware_adapi_v1_msgs/msg/ManualControlModeStatus","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ManualControlModeStatus/#autoware_adapi_v1_msgsmsgmanualcontrolmodestatus","title":"autoware_adapi_v1_msgs/msg/ManualControlModeStatus","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ManualControlModeStatus/#definition","title":"Definition","text":"
    builtin_interfaces/Time stamp\nautoware_adapi_v1_msgs/ManualControlMode mode\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ManualControlModeStatus/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/ManualControlMode
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ManualControlModeStatus/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ManualOperatorStatus/","title":"autoware_adapi_v1_msgs/msg/ManualOperatorStatus","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ManualOperatorStatus/#autoware_adapi_v1_msgsmsgmanualoperatorstatus","title":"autoware_adapi_v1_msgs/msg/ManualOperatorStatus","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ManualOperatorStatus/#definition","title":"Definition","text":"
    builtin_interfaces/Time stamp\nbool ready\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ManualOperatorStatus/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ManualOperatorStatus/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/MotionState/","title":"autoware_adapi_v1_msgs/msg/MotionState","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/MotionState/#autoware_adapi_v1_msgsmsgmotionstate","title":"autoware_adapi_v1_msgs/msg/MotionState","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/MotionState/#definition","title":"Definition","text":"
    uint16 UNKNOWN = 0\nuint16 STOPPED = 1\nuint16 STARTING = 2\nuint16 MOVING = 3\n\nbuiltin_interfaces/Time stamp\nuint16 state\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/MotionState/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/MotionState/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/MrmState/","title":"autoware_adapi_v1_msgs/msg/MrmState","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/MrmState/#autoware_adapi_v1_msgsmsgmrmstate","title":"autoware_adapi_v1_msgs/msg/MrmState","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/MrmState/#definition","title":"Definition","text":"
    builtin_interfaces/Time stamp\n\n# For common use\nuint16 UNKNOWN = 0\n\n# For state\nuint16 NORMAL = 1\nuint16 MRM_OPERATING = 2\nuint16 MRM_SUCCEEDED = 3\nuint16 MRM_FAILED = 4\n\n# For behavior\nuint16 NONE = 1\nuint16 EMERGENCY_STOP = 2\nuint16 COMFORTABLE_STOP = 3\nuint16 PULL_OVER = 4\n\nuint16 state\nuint16 behavior\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/MrmState/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/MrmState/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ObjectClassification/","title":"autoware_adapi_v1_msgs/msg/ObjectClassification","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ObjectClassification/#autoware_adapi_v1_msgsmsgobjectclassification","title":"autoware_adapi_v1_msgs/msg/ObjectClassification","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ObjectClassification/#definition","title":"Definition","text":"
    uint8 UNKNOWN=0\nuint8 CAR=1\nuint8 TRUCK=2\nuint8 BUS=3\nuint8 TRAILER = 4\nuint8 MOTORCYCLE = 5\nuint8 BICYCLE = 6\nuint8 PEDESTRIAN = 7\n\nuint8 label\nfloat64 probability\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ObjectClassification/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ObjectClassification/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/msg/DynamicObject
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/OperationModeState/","title":"autoware_adapi_v1_msgs/msg/OperationModeState","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/OperationModeState/#autoware_adapi_v1_msgsmsgoperationmodestate","title":"autoware_adapi_v1_msgs/msg/OperationModeState","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/OperationModeState/#definition","title":"Definition","text":"
    # constants for mode\nuint8 UNKNOWN = 0\nuint8 STOP = 1\nuint8 AUTONOMOUS = 2\nuint8 LOCAL = 3\nuint8 REMOTE = 4\n\n# variables\nbuiltin_interfaces/Time stamp\nuint8 mode\nbool is_autoware_control_enabled\nbool is_in_transition\nbool is_stop_mode_available\nbool is_autonomous_mode_available\nbool is_local_mode_available\nbool is_remote_mode_available\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/OperationModeState/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/OperationModeState/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/PedalCommand/","title":"autoware_adapi_v1_msgs/msg/PedalCommand","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/PedalCommand/#autoware_adapi_v1_msgsmsgpedalcommand","title":"autoware_adapi_v1_msgs/msg/PedalCommand","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/PedalCommand/#definition","title":"Definition","text":"
    builtin_interfaces/Time stamp\nfloat32 accelerator\nfloat32 brake\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/PedalCommand/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/PedalCommand/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ResponseStatus/","title":"autoware_adapi_v1_msgs/msg/ResponseStatus","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ResponseStatus/#autoware_adapi_v1_msgsmsgresponsestatus","title":"autoware_adapi_v1_msgs/msg/ResponseStatus","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ResponseStatus/#definition","title":"Definition","text":"
    # error code\nuint16 UNKNOWN = 50000\nuint16 SERVICE_UNREADY = 50001\nuint16 SERVICE_TIMEOUT = 50002\nuint16 TRANSFORM_ERROR = 50003\nuint16 PARAMETER_ERROR = 50004\n\n# warning code\nuint16 DEPRECATED = 60000\nuint16 NO_EFFECT = 60001\n\n# variables\nbool   success\nuint16 code\nstring message\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ResponseStatus/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ResponseStatus/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/srv/AcceptStart
    • autoware_adapi_v1_msgs/srv/ChangeOperationMode
    • autoware_adapi_v1_msgs/srv/ClearRoute
    • autoware_adapi_v1_msgs/srv/GetCooperationPolicies
    • autoware_adapi_v1_msgs/srv/GetDoorLayout
    • autoware_adapi_v1_msgs/srv/GetVehicleDimensions
    • autoware_adapi_v1_msgs/srv/InitializeLocalization
    • autoware_adapi_v1_msgs/srv/ListManualControlMode
    • autoware_adapi_v1_msgs/srv/SelectManualControlMode
    • autoware_adapi_v1_msgs/srv/SetCooperationCommands
    • autoware_adapi_v1_msgs/srv/SetCooperationPolicies
    • autoware_adapi_v1_msgs/srv/SetDoorCommand
    • autoware_adapi_v1_msgs/srv/SetRoute
    • autoware_adapi_v1_msgs/srv/SetRoutePoints
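Since every AD API service response carries this status, a client can validate responses uniformly. The helper below is a minimal sketch; the function name is illustrative.

```python
from autoware_adapi_v1_msgs.msg import ResponseStatus


def check_response(status: ResponseStatus) -> None:
    """Raise on failure and surface warning codes from an AD API service response."""
    if not status.success:
        raise RuntimeError(f"AD API call failed (code={status.code}): {status.message}")
    if status.code == ResponseStatus.DEPRECATED:
        print(f"warning: deprecated API: {status.message}")
    if status.code == ResponseStatus.NO_EFFECT:
        print(f"warning: request had no effect: {status.message}")
```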
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/Route/","title":"autoware_adapi_v1_msgs/msg/Route","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/Route/#autoware_adapi_v1_msgsmsgroute","title":"autoware_adapi_v1_msgs/msg/Route","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/Route/#definition","title":"Definition","text":"
    std_msgs/Header header\nautoware_adapi_v1_msgs/RouteData[<=1] data\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/Route/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/RouteData
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/Route/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RouteData/","title":"autoware_adapi_v1_msgs/msg/RouteData","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RouteData/#autoware_adapi_v1_msgsmsgroutedata","title":"autoware_adapi_v1_msgs/msg/RouteData","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RouteData/#definition","title":"Definition","text":"
    geometry_msgs/Pose start\ngeometry_msgs/Pose goal\nautoware_adapi_v1_msgs/RouteSegment[] segments\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RouteData/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/RouteSegment
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RouteData/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/msg/Route
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RouteOption/","title":"autoware_adapi_v1_msgs/msg/RouteOption","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RouteOption/#autoware_adapi_v1_msgsmsgrouteoption","title":"autoware_adapi_v1_msgs/msg/RouteOption","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RouteOption/#definition","title":"Definition","text":"
    # Please refer to the following pages for details on each option.\n# https://autowarefoundation.github.io/autoware-documentation/main/design/autoware-interfaces/ad-api/features/routing/\n\nbool allow_goal_modification\nbool allow_while_using_route\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RouteOption/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RouteOption/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/srv/SetRoute
    • autoware_adapi_v1_msgs/srv/SetRoutePoints
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RoutePrimitive/","title":"autoware_adapi_v1_msgs/msg/RoutePrimitive","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RoutePrimitive/#autoware_adapi_v1_msgsmsgrouteprimitive","title":"autoware_adapi_v1_msgs/msg/RoutePrimitive","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RoutePrimitive/#definition","title":"Definition","text":"
    int64 id\nstring type  # The same id may be used for each type.\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RoutePrimitive/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RoutePrimitive/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/msg/RouteSegment
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RouteSegment/","title":"autoware_adapi_v1_msgs/msg/RouteSegment","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RouteSegment/#autoware_adapi_v1_msgsmsgroutesegment","title":"autoware_adapi_v1_msgs/msg/RouteSegment","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RouteSegment/#definition","title":"Definition","text":"
    autoware_adapi_v1_msgs/RoutePrimitive   preferred\nautoware_adapi_v1_msgs/RoutePrimitive[] alternatives  # Does not include the preferred primitive.\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RouteSegment/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/RoutePrimitive
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RouteSegment/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/msg/RouteData
    • autoware_adapi_v1_msgs/srv/SetRoute
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RouteState/","title":"autoware_adapi_v1_msgs/msg/RouteState","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RouteState/#autoware_adapi_v1_msgsmsgroutestate","title":"autoware_adapi_v1_msgs/msg/RouteState","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RouteState/#definition","title":"Definition","text":"
    uint16 UNKNOWN = 0\nuint16 UNSET = 1\nuint16 SET = 2\nuint16 ARRIVED = 3\nuint16 CHANGING = 4\n\nbuiltin_interfaces/Time stamp\nuint16 state\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RouteState/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RouteState/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/SteeringCommand/","title":"autoware_adapi_v1_msgs/msg/SteeringCommand","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/SteeringCommand/#autoware_adapi_v1_msgsmsgsteeringcommand","title":"autoware_adapi_v1_msgs/msg/SteeringCommand","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/SteeringCommand/#definition","title":"Definition","text":"
    builtin_interfaces/Time stamp\nfloat32 steering_tire_angle\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/SteeringCommand/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/SteeringCommand/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/SteeringFactor/","title":"autoware_adapi_v1_msgs/msg/SteeringFactor","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/SteeringFactor/#autoware_adapi_v1_msgsmsgsteeringfactor","title":"autoware_adapi_v1_msgs/msg/SteeringFactor","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/SteeringFactor/#definition","title":"Definition","text":"
    # constants for common use\nuint16 UNKNOWN = 0\n\n# constants for direction\nuint16 LEFT = 1\nuint16 RIGHT = 2\nuint16 STRAIGHT = 3\n\n# constants for status\nuint16 APPROACHING = 1\nuint16 TURNING = 3\n\n# variables\ngeometry_msgs/Pose[2] pose\nfloat32[2] distance\nuint16 direction\nuint16 status\nstring behavior\nstring sequence\nstring detail\nautoware_adapi_v1_msgs/CooperationStatus[<=1] cooperation\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/SteeringFactor/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/CooperationStatus
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/SteeringFactor/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/msg/SteeringFactorArray
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/SteeringFactorArray/","title":"autoware_adapi_v1_msgs/msg/SteeringFactorArray","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/SteeringFactorArray/#autoware_adapi_v1_msgsmsgsteeringfactorarray","title":"autoware_adapi_v1_msgs/msg/SteeringFactorArray","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/SteeringFactorArray/#definition","title":"Definition","text":"
    std_msgs/Header header\nautoware_adapi_v1_msgs/SteeringFactor[] factors\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/SteeringFactorArray/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/SteeringFactor
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/SteeringFactorArray/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/TurnIndicators/","title":"autoware_adapi_v1_msgs/msg/TurnIndicators","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/TurnIndicators/#autoware_adapi_v1_msgsmsgturnindicators","title":"autoware_adapi_v1_msgs/msg/TurnIndicators","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/TurnIndicators/#definition","title":"Definition","text":"
    # constants\nuint8 UNKNOWN = 0\nuint8 DISABLE = 1\nuint8 LEFT = 2\nuint8 RIGHT = 3\n\nuint8 status\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/TurnIndicators/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/TurnIndicators/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/msg/TurnIndicatorsCommand
    • autoware_adapi_v1_msgs/msg/VehicleStatus
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/TurnIndicatorsCommand/","title":"autoware_adapi_v1_msgs/msg/TurnIndicatorsCommand","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/TurnIndicatorsCommand/#autoware_adapi_v1_msgsmsgturnindicatorscommand","title":"autoware_adapi_v1_msgs/msg/TurnIndicatorsCommand","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/TurnIndicatorsCommand/#definition","title":"Definition","text":"
    builtin_interfaces/Time stamp\nautoware_adapi_v1_msgs/TurnIndicators command\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/TurnIndicatorsCommand/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/TurnIndicators
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/TurnIndicatorsCommand/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VehicleDimensions/","title":"autoware_adapi_v1_msgs/msg/VehicleDimensions","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VehicleDimensions/#autoware_adapi_v1_msgsmsgvehicledimensions","title":"autoware_adapi_v1_msgs/msg/VehicleDimensions","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VehicleDimensions/#definition","title":"Definition","text":"
    float32 wheel_radius\nfloat32 wheel_width\nfloat32 wheel_base\nfloat32 wheel_tread\nfloat32 front_overhang\nfloat32 rear_overhang\nfloat32 left_overhang\nfloat32 right_overhang\nfloat32 height\ngeometry_msgs/Polygon footprint\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VehicleDimensions/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VehicleDimensions/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/srv/GetVehicleDimensions
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VehicleKinematics/","title":"autoware_adapi_v1_msgs/msg/VehicleKinematics","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VehicleKinematics/#autoware_adapi_v1_msgsmsgvehiclekinematics","title":"autoware_adapi_v1_msgs/msg/VehicleKinematics","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VehicleKinematics/#definition","title":"Definition","text":"
    # Geographic point, using the WGS 84 reference ellipsoid.\n# This data will be invalid If Autoware does not provide projection information between geographic coordinates and local coordinates.\ngeographic_msgs/GeoPointStamped geographic_pose\n\n# Local coordinate from the autoware\ngeometry_msgs/PoseWithCovarianceStamped pose\ngeometry_msgs/TwistWithCovarianceStamped twist\ngeometry_msgs/AccelWithCovarianceStamped accel\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VehicleKinematics/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VehicleKinematics/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VehicleStatus/","title":"autoware_adapi_v1_msgs/msg/VehicleStatus","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VehicleStatus/#autoware_adapi_v1_msgsmsgvehiclestatus","title":"autoware_adapi_v1_msgs/msg/VehicleStatus","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VehicleStatus/#definition","title":"Definition","text":"
    builtin_interfaces/Time stamp\nautoware_adapi_v1_msgs/Gear gear\nautoware_adapi_v1_msgs/TurnIndicators turn_indicators\nautoware_adapi_v1_msgs/HazardLights hazard_lights\nfloat64 steering_tire_angle\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VehicleStatus/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/Gear
    • autoware_adapi_v1_msgs/msg/HazardLights
    • autoware_adapi_v1_msgs/msg/TurnIndicators
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VehicleStatus/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VelocityFactor/","title":"autoware_adapi_v1_msgs/msg/VelocityFactor","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VelocityFactor/#autoware_adapi_v1_msgsmsgvelocityfactor","title":"autoware_adapi_v1_msgs/msg/VelocityFactor","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VelocityFactor/#definition","title":"Definition","text":"
    # constants for common use\nuint16 UNKNOWN = 0\n\n# constants for status\nuint16 APPROACHING = 1\nuint16 STOPPED = 2\n\n# variables\ngeometry_msgs/Pose pose\nfloat32 distance\nuint16 status\nstring behavior\nstring sequence\nstring detail\nautoware_adapi_v1_msgs/CooperationStatus[<=1] cooperation\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VelocityFactor/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/CooperationStatus
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VelocityFactor/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/msg/VelocityFactorArray
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VelocityFactorArray/","title":"autoware_adapi_v1_msgs/msg/VelocityFactorArray","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VelocityFactorArray/#autoware_adapi_v1_msgsmsgvelocityfactorarray","title":"autoware_adapi_v1_msgs/msg/VelocityFactorArray","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VelocityFactorArray/#definition","title":"Definition","text":"
    std_msgs/Header header\nautoware_adapi_v1_msgs/VelocityFactor[] factors\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VelocityFactorArray/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/VelocityFactor
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VelocityFactorArray/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/AcceptStart/","title":"autoware_adapi_v1_msgs/srv/AcceptStart","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/AcceptStart/#autoware_adapi_v1_msgssrvacceptstart","title":"autoware_adapi_v1_msgs/srv/AcceptStart","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/AcceptStart/#definition","title":"Definition","text":"
    ---\nuint16 ERROR_NOT_STARTING = 1\nautoware_adapi_v1_msgs/ResponseStatus status\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/AcceptStart/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/ResponseStatus
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/AcceptStart/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/ChangeOperationMode/","title":"autoware_adapi_v1_msgs/srv/ChangeOperationMode","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/ChangeOperationMode/#autoware_adapi_v1_msgssrvchangeoperationmode","title":"autoware_adapi_v1_msgs/srv/ChangeOperationMode","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/ChangeOperationMode/#definition","title":"Definition","text":"
    ---\nuint16 ERROR_NOT_AVAILABLE = 1\nuint16 ERROR_IN_TRANSITION = 2\nautoware_adapi_v1_msgs/ResponseStatus status\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/ChangeOperationMode/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/ResponseStatus
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/ChangeOperationMode/#this-type-is-used-by","title":"This type is used by","text":"
    • None
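A minimal rclpy client sketch for this service. The service name /api/operation_mode/change_to_autonomous is an assumption; the operation mode feature typically exposes one change service per target mode, all sharing this empty request and status response.

```python
import rclpy
from rclpy.node import Node
from autoware_adapi_v1_msgs.srv import ChangeOperationMode


def main():
    rclpy.init()
    node = Node("change_operation_mode_client")
    # Service name is assumed for this example.
    client = node.create_client(ChangeOperationMode, "/api/operation_mode/change_to_autonomous")
    if not client.wait_for_service(timeout_sec=5.0):
        raise RuntimeError("operation mode service is not available")
    future = client.call_async(ChangeOperationMode.Request())  # the request has no fields
    rclpy.spin_until_future_complete(node, future)
    status = future.result().status
    if not status.success:
        node.get_logger().error(f"mode change rejected: code={status.code} {status.message}")
    rclpy.shutdown()


if __name__ == "__main__":
    main()
```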
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/ClearRoute/","title":"autoware_adapi_v1_msgs/srv/ClearRoute","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/ClearRoute/#autoware_adapi_v1_msgssrvclearroute","title":"autoware_adapi_v1_msgs/srv/ClearRoute","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/ClearRoute/#definition","title":"Definition","text":"
    ---\nautoware_adapi_v1_msgs/ResponseStatus status\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/ClearRoute/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/ResponseStatus
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/ClearRoute/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/GetCooperationPolicies/","title":"autoware_adapi_v1_msgs/srv/GetCooperationPolicies","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/GetCooperationPolicies/#autoware_adapi_v1_msgssrvgetcooperationpolicies","title":"autoware_adapi_v1_msgs/srv/GetCooperationPolicies","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/GetCooperationPolicies/#definition","title":"Definition","text":"
    ---\nautoware_adapi_v1_msgs/ResponseStatus status\nautoware_adapi_v1_msgs/CooperationPolicy[] policies\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/GetCooperationPolicies/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/CooperationPolicy
    • autoware_adapi_v1_msgs/msg/ResponseStatus
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/GetCooperationPolicies/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/GetDoorLayout/","title":"autoware_adapi_v1_msgs/srv/GetDoorLayout","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/GetDoorLayout/#autoware_adapi_v1_msgssrvgetdoorlayout","title":"autoware_adapi_v1_msgs/srv/GetDoorLayout","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/GetDoorLayout/#definition","title":"Definition","text":"
    ---\nautoware_adapi_v1_msgs/ResponseStatus status\nautoware_adapi_v1_msgs/DoorLayout[] doors\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/GetDoorLayout/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/DoorLayout
    • autoware_adapi_v1_msgs/msg/ResponseStatus
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/GetDoorLayout/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/GetVehicleDimensions/","title":"autoware_adapi_v1_msgs/srv/GetVehicleDimensions","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/GetVehicleDimensions/#autoware_adapi_v1_msgssrvgetvehicledimensions","title":"autoware_adapi_v1_msgs/srv/GetVehicleDimensions","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/GetVehicleDimensions/#definition","title":"Definition","text":"
    ---\nautoware_adapi_v1_msgs/ResponseStatus status\nautoware_adapi_v1_msgs/VehicleDimensions dimensions\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/GetVehicleDimensions/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/ResponseStatus
    • autoware_adapi_v1_msgs/msg/VehicleDimensions
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/GetVehicleDimensions/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/InitializeLocalization/","title":"autoware_adapi_v1_msgs/srv/InitializeLocalization","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/InitializeLocalization/#autoware_adapi_v1_msgssrvinitializelocalization","title":"autoware_adapi_v1_msgs/srv/InitializeLocalization","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/InitializeLocalization/#definition","title":"Definition","text":"
    geometry_msgs/PoseWithCovarianceStamped[<=1] pose\n---\nuint16 ERROR_UNSAFE = 1\nuint16 ERROR_GNSS_SUPPORT = 2\nuint16 ERROR_GNSS = 3\nuint16 ERROR_ESTIMATION = 4\nautoware_adapi_v1_msgs/ResponseStatus status\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/InitializeLocalization/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/ResponseStatus
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/InitializeLocalization/#this-type-is-used-by","title":"This type is used by","text":"
    • None
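The sketch below calls this service with an explicit initial pose; leaving the pose array empty typically lets Autoware fall back to GNSS-based initialization. The service name /api/localization/initialize and the example coordinates are assumptions.

```python
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import PoseWithCovarianceStamped
from autoware_adapi_v1_msgs.srv import InitializeLocalization


def main():
    rclpy.init()
    node = Node("localization_init_client")
    # Service name is assumed for this example.
    client = node.create_client(InitializeLocalization, "/api/localization/initialize")
    client.wait_for_service()

    pose = PoseWithCovarianceStamped()
    pose.header.frame_id = "map"
    pose.header.stamp = node.get_clock().now().to_msg()
    pose.pose.pose.position.x = 100.0  # example position in the map frame
    pose.pose.pose.position.y = 250.0
    pose.pose.pose.orientation.w = 1.0

    request = InitializeLocalization.Request()
    request.pose = [pose]  # at most one pose, as declared by [<=1]
    future = client.call_async(request)
    rclpy.spin_until_future_complete(node, future)
    node.get_logger().info(str(future.result().status))
    rclpy.shutdown()


if __name__ == "__main__":
    main()
```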
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/ListManualControlMode/","title":"autoware_adapi_v1_msgs/srv/ListManualControlMode","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/ListManualControlMode/#autoware_adapi_v1_msgssrvlistmanualcontrolmode","title":"autoware_adapi_v1_msgs/srv/ListManualControlMode","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/ListManualControlMode/#definition","title":"Definition","text":"
    ---\nautoware_adapi_v1_msgs/ResponseStatus status\nautoware_adapi_v1_msgs/ManualControlMode[] modes\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/ListManualControlMode/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/ManualControlMode
    • autoware_adapi_v1_msgs/msg/ResponseStatus
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/ListManualControlMode/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SelectManualControlMode/","title":"autoware_adapi_v1_msgs/srv/SelectManualControlMode","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SelectManualControlMode/#autoware_adapi_v1_msgssrvselectmanualcontrolmode","title":"autoware_adapi_v1_msgs/srv/SelectManualControlMode","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SelectManualControlMode/#definition","title":"Definition","text":"
    autoware_adapi_v1_msgs/ManualControlMode mode\n---\nautoware_adapi_v1_msgs/ResponseStatus status\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SelectManualControlMode/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/ManualControlMode
    • autoware_adapi_v1_msgs/msg/ResponseStatus
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SelectManualControlMode/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetCooperationCommands/","title":"autoware_adapi_v1_msgs/srv/SetCooperationCommands","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetCooperationCommands/#autoware_adapi_v1_msgssrvsetcooperationcommands","title":"autoware_adapi_v1_msgs/srv/SetCooperationCommands","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetCooperationCommands/#definition","title":"Definition","text":"
    autoware_adapi_v1_msgs/CooperationCommand[] commands\n---\nautoware_adapi_v1_msgs/ResponseStatus status\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetCooperationCommands/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/CooperationCommand
    • autoware_adapi_v1_msgs/msg/ResponseStatus
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetCooperationCommands/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetCooperationPolicies/","title":"autoware_adapi_v1_msgs/srv/SetCooperationPolicies","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetCooperationPolicies/#autoware_adapi_v1_msgssrvsetcooperationpolicies","title":"autoware_adapi_v1_msgs/srv/SetCooperationPolicies","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetCooperationPolicies/#definition","title":"Definition","text":"
    autoware_adapi_v1_msgs/CooperationPolicy[] policies\n---\nautoware_adapi_v1_msgs/ResponseStatus status\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetCooperationPolicies/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/CooperationPolicy
    • autoware_adapi_v1_msgs/msg/ResponseStatus
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetCooperationPolicies/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetDoorCommand/","title":"autoware_adapi_v1_msgs/srv/SetDoorCommand","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetDoorCommand/#autoware_adapi_v1_msgssrvsetdoorcommand","title":"autoware_adapi_v1_msgs/srv/SetDoorCommand","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetDoorCommand/#definition","title":"Definition","text":"
    autoware_adapi_v1_msgs/DoorCommand[] doors\n---\nautoware_adapi_v1_msgs/ResponseStatus status\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetDoorCommand/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/DoorCommand
    • autoware_adapi_v1_msgs/msg/ResponseStatus
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetDoorCommand/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetRoute/","title":"autoware_adapi_v1_msgs/srv/SetRoute","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetRoute/#autoware_adapi_v1_msgssrvsetroute","title":"autoware_adapi_v1_msgs/srv/SetRoute","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetRoute/#definition","title":"Definition","text":"
    std_msgs/Header header\nautoware_adapi_v1_msgs/RouteOption option\ngeometry_msgs/Pose goal\nautoware_adapi_v1_msgs/RouteSegment[] segments\n---\nuint16 ERROR_ROUTE_EXISTS = 1 # Deprecated. Use ERROR_INVALID_STATE.\nuint16 ERROR_INVALID_STATE = 1\nuint16 ERROR_PLANNER_UNREADY = 2\nuint16 ERROR_PLANNER_FAILED = 3\nuint16 ERROR_REROUTE_FAILED = 4\nautoware_adapi_v1_msgs/ResponseStatus status\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetRoute/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/ResponseStatus
    • autoware_adapi_v1_msgs/msg/RouteOption
    • autoware_adapi_v1_msgs/msg/RouteSegment
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetRoute/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetRoutePoints/","title":"autoware_adapi_v1_msgs/srv/SetRoutePoints","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetRoutePoints/#autoware_adapi_v1_msgssrvsetroutepoints","title":"autoware_adapi_v1_msgs/srv/SetRoutePoints","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetRoutePoints/#definition","title":"Definition","text":"
    std_msgs/Header header\nautoware_adapi_v1_msgs/RouteOption option\ngeometry_msgs/Pose goal\ngeometry_msgs/Pose[] waypoints\n---\nuint16 ERROR_ROUTE_EXISTS = 1 # Deprecated. Use ERROR_INVALID_STATE.\nuint16 ERROR_INVALID_STATE = 1\nuint16 ERROR_PLANNER_UNREADY = 2\nuint16 ERROR_PLANNER_FAILED = 3\nuint16 ERROR_REROUTE_FAILED = 4\nautoware_adapi_v1_msgs/ResponseStatus status\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetRoutePoints/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/ResponseStatus
    • autoware_adapi_v1_msgs/msg/RouteOption
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetRoutePoints/#this-type-is-used-by","title":"This type is used by","text":"
    • None
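As a sketch under the assumption that the service is exposed as /api/routing/set_route_points, a client could request a route to a goal pose as follows; the goal coordinates are placeholders.

```python
import rclpy
from rclpy.node import Node
from autoware_adapi_v1_msgs.srv import SetRoutePoints


def main():
    rclpy.init()
    node = Node("set_route_points_client")
    # Service name is assumed for this example.
    client = node.create_client(SetRoutePoints, "/api/routing/set_route_points")
    client.wait_for_service()

    request = SetRoutePoints.Request()
    request.header.frame_id = "map"
    request.header.stamp = node.get_clock().now().to_msg()
    request.option.allow_goal_modification = True  # let the planner refine the goal pose
    request.goal.position.x = 300.0  # placeholder goal in the map frame
    request.goal.position.y = 120.0
    request.goal.orientation.w = 1.0
    # request.waypoints may list intermediate poses; an empty list routes directly to the goal.

    future = client.call_async(request)
    rclpy.spin_until_future_complete(node, future)
    status = future.result().status
    node.get_logger().info(f"route set: success={status.success} code={status.code}")
    rclpy.shutdown()


if __name__ == "__main__":
    main()
```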
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_version_msgs/srv/InterfaceVersion/","title":"autoware_adapi_version_msgs/srv/InterfaceVersion","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_version_msgs/srv/InterfaceVersion/#autoware_adapi_version_msgssrvinterfaceversion","title":"autoware_adapi_version_msgs/srv/InterfaceVersion","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_version_msgs/srv/InterfaceVersion/#definition","title":"Definition","text":"
    ---\nuint16 major\nuint16 minor\nuint16 patch\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_version_msgs/srv/InterfaceVersion/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_version_msgs/srv/InterfaceVersion/#this-type-is-used-by","title":"This type is used by","text":"
    • None
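A short sketch of querying the interface version, assuming the service is available as /api/interface/version:

```python
import rclpy
from rclpy.node import Node
from autoware_adapi_version_msgs.srv import InterfaceVersion


def main():
    rclpy.init()
    node = Node("interface_version_client")
    client = node.create_client(InterfaceVersion, "/api/interface/version")  # assumed service name
    client.wait_for_service()
    future = client.call_async(InterfaceVersion.Request())
    rclpy.spin_until_future_complete(node, future)
    version = future.result()
    print(f"AD API version {version.major}.{version.minor}.{version.patch}")
    rclpy.shutdown()


if __name__ == "__main__":
    main()
```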
    "},{"location":"design/autoware-interfaces/ad-api/use-cases/change-operation-mode/","title":"Change the operation mode","text":""},{"location":"design/autoware-interfaces/ad-api/use-cases/change-operation-mode/#change-the-operation-mode","title":"Change the operation mode","text":""},{"location":"design/autoware-interfaces/ad-api/use-cases/change-operation-mode/#related-api","title":"Related API","text":"
    • Operation mode
    "},{"location":"design/autoware-interfaces/ad-api/use-cases/change-operation-mode/#sequence","title":"Sequence","text":"
• Change the mode with a software switch.

• Change the mode with a hardware switch.

    "},{"location":"design/autoware-interfaces/ad-api/use-cases/drive-designated-position/","title":"Drive to the designated position","text":""},{"location":"design/autoware-interfaces/ad-api/use-cases/drive-designated-position/#drive-to-the-designated-position","title":"Drive to the designated position","text":""},{"location":"design/autoware-interfaces/ad-api/use-cases/drive-designated-position/#related-api","title":"Related API","text":"
    • Operation mode
    • Routing
    "},{"location":"design/autoware-interfaces/ad-api/use-cases/drive-designated-position/#sequence","title":"Sequence","text":""},{"location":"design/autoware-interfaces/ad-api/use-cases/get-on-off/","title":"Get on and get off","text":""},{"location":"design/autoware-interfaces/ad-api/use-cases/get-on-off/#get-on-and-get-off","title":"Get on and get off","text":""},{"location":"design/autoware-interfaces/ad-api/use-cases/get-on-off/#related-api","title":"Related API","text":"
    • Vehicle doors
    "},{"location":"design/autoware-interfaces/ad-api/use-cases/get-on-off/#sequence","title":"Sequence","text":""},{"location":"design/autoware-interfaces/ad-api/use-cases/initialize-pose/","title":"Initialize the pose","text":""},{"location":"design/autoware-interfaces/ad-api/use-cases/initialize-pose/#initialize-the-pose","title":"Initialize the pose","text":""},{"location":"design/autoware-interfaces/ad-api/use-cases/initialize-pose/#related-api","title":"Related API","text":"
    • Localization
    "},{"location":"design/autoware-interfaces/ad-api/use-cases/initialize-pose/#sequence","title":"Sequence","text":"
    • Initialization of the pose using input.

    • Initialization of the pose using GNSS.

    "},{"location":"design/autoware-interfaces/ad-api/use-cases/launch-terminate/","title":"Launch and terminate","text":""},{"location":"design/autoware-interfaces/ad-api/use-cases/launch-terminate/#launch-and-terminate","title":"Launch and terminate","text":""},{"location":"design/autoware-interfaces/ad-api/use-cases/launch-terminate/#related-api","title":"Related API","text":"
    • T.B.D.
    "},{"location":"design/autoware-interfaces/ad-api/use-cases/launch-terminate/#sequence","title":"Sequence","text":""},{"location":"design/autoware-interfaces/ad-api/use-cases/system-monitoring/","title":"System monitoring","text":""},{"location":"design/autoware-interfaces/ad-api/use-cases/system-monitoring/#system-monitoring","title":"System monitoring","text":""},{"location":"design/autoware-interfaces/ad-api/use-cases/system-monitoring/#heartbeat","title":"Heartbeat","text":"

Heartbeat serves as a reference for checking whether communication with Autoware is working properly.

    "},{"location":"design/autoware-interfaces/ad-api/use-cases/system-monitoring/#diagnostics","title":"Diagnostics","text":"

Diagnostics provides a set of error levels for each functional unit of Autoware. The design and structure of the functional units are system-dependent. This is useful for identifying the cause of an abnormality.

    "},{"location":"design/autoware-interfaces/ad-api/use-cases/vehicle-monitoring/","title":"Vehicle monitoring","text":""},{"location":"design/autoware-interfaces/ad-api/use-cases/vehicle-monitoring/#vehicle-monitoring","title":"Vehicle monitoring","text":"

AD API provides the current vehicle status for remote monitoring, visualization for passengers, and similar purposes. Use the APIs below depending on the data you want to monitor.

    "},{"location":"design/autoware-interfaces/ad-api/use-cases/vehicle-monitoring/#vehicle-status","title":"Vehicle status","text":"

The vehicle status provides basic information such as kinematics, indicators, and dimensions. This allows a remote operator to know the position and velocity of the vehicle. For applications such as a fleet management system (FMS), it can help find vehicles that need assistance, such as vehicles that are stuck or braking suddenly. The vehicle dimensions also make it possible to determine the actual distance from the vehicle to an object.

    "},{"location":"design/autoware-interfaces/ad-api/use-cases/vehicle-monitoring/#planning-factors","title":"Planning factors","text":"

The planning factors provide the planning status of the vehicle. An HMI can use this to warn of sudden vehicle movements and to share the stop reason with passengers for comfortable driving.

    "},{"location":"design/autoware-interfaces/ad-api/use-cases/vehicle-monitoring/#detected-objects","title":"Detected objects","text":"

Perception provides the objects detected by Autoware. An HMI can use this to visualize objects around the vehicle.

    "},{"location":"design/autoware-interfaces/ad-api/use-cases/vehicle-operation/","title":"Vehicle operation","text":""},{"location":"design/autoware-interfaces/ad-api/use-cases/vehicle-operation/#vehicle-operation","title":"Vehicle operation","text":""},{"location":"design/autoware-interfaces/ad-api/use-cases/vehicle-operation/#request-to-intervene","title":"Request to intervene","text":"

Request to intervene (RTI) is a feature that requires the operator to switch to manual driving mode. It is also called Take Over Request (TOR). Interfaces for RTI are currently being discussed. For now, assume that manual driving is requested if the MRM state is not NORMAL. See fail-safe for details.

    "},{"location":"design/autoware-interfaces/ad-api/use-cases/vehicle-operation/#request-to-cooperate","title":"Request to cooperate","text":"

Request to cooperate (RTC) is a feature in which the operator supports decision-making while the vehicle stays in autonomous driving mode. Autoware usually drives the vehicle using its own decisions, but the operator may prefer to make their own decisions in complex situations. Since RTC only overrides a decision and does not require changing the operation mode, the vehicle can continue autonomous driving, unlike RTI. See cooperation for details.

    "},{"location":"design/autoware-interfaces/ad-api/use-cases/manual-control/","title":"Manual control","text":""},{"location":"design/autoware-interfaces/ad-api/use-cases/manual-control/#manual-control","title":"Manual control","text":""},{"location":"design/autoware-interfaces/ad-api/use-cases/manual-control/#description","title":"Description","text":"

To operate a vehicle without a steering wheel, or to develop a remote operation system, an interface to manually drive a vehicle using Autoware is required.

    "},{"location":"design/autoware-interfaces/ad-api/use-cases/manual-control/#requirements","title":"Requirements","text":"
    • The vehicle can transition to a safe state when a communication problem occurs.
    • Supports the following commands and statuses.
      • Pedal or acceleration as longitudinal control
      • Steering tire angle as lateral control
      • Gear
      • Turn indicators
  • Hazard lights
    • The following are under consideration.
      • Headlights
      • Wipers
      • Parking brake
      • Horn
    "},{"location":"design/autoware-interfaces/ad-api/use-cases/manual-control/#sequence","title":"Sequence","text":""},{"location":"design/autoware-interfaces/ad-api/use-cases/manual-control/#related-features","title":"Related features","text":"
    • Manual control
    • Vehicle status
    "},{"location":"design/autoware-interfaces/components/","title":"Component interfaces","text":""},{"location":"design/autoware-interfaces/components/#component-interfaces","title":"Component interfaces","text":"

    Warning

    Under Construction

    See here for an overview.

    "},{"location":"design/autoware-interfaces/components/control/","title":"Control","text":""},{"location":"design/autoware-interfaces/components/control/#control","title":"Control","text":""},{"location":"design/autoware-interfaces/components/control/#inputs","title":"Inputs","text":""},{"location":"design/autoware-interfaces/components/control/#vehicle-kinematic-state","title":"Vehicle kinematic state","text":"

    Current position and orientation of ego. Published by the Localization module.

    • nav_msgs/Odometry
      • std_msgs/Header header
      • string child_frame_id
      • geometry_msgs/PoseWithCovariance pose
      • geometry_msgs/TwistWithCovariance twist
    "},{"location":"design/autoware-interfaces/components/control/#trajectory","title":"Trajectory","text":"

A trajectory to be followed by the controller. See Outputs of Planning.

    "},{"location":"design/autoware-interfaces/components/control/#steering-status","title":"Steering Status","text":"

    Current steering of the ego vehicle. Published by the Vehicle Interface.

    • Steering message (github discussion).
      • builtin_interfaces::msg::Time stamp
      • float32 steering_angle
    "},{"location":"design/autoware-interfaces/components/control/#actuation-status","title":"Actuation Status","text":"

    Actuation status of the ego vehicle for acceleration, steering, and brake.

TODO: This represents the reported physical efforts exerted by the vehicle actuators. Published by the Vehicle Interface.

    • ActuationStatus (github discussion).
      • builtin_interfaces::msg::Time stamp
      • float32 acceleration
      • float32 steering
    "},{"location":"design/autoware-interfaces/components/control/#output","title":"Output","text":""},{"location":"design/autoware-interfaces/components/control/#vehicle-control-command","title":"Vehicle Control Command","text":"

A motion signal to drive the vehicle, achieved by the low-level controller in the vehicle layer. Used by the Vehicle Interface. A subscription sketch follows the field list below.

    • autoware_control_msgs/Control
      • builtin_interfaces::msg::Time stamp
      • autoware_control_msgs/Lateral lateral
        • builtin_interfaces::msg::Time stamp
        • float steering_tire_angle
        • float steering_tire_rotation_rate
      • autoware_control_msgs/Longitudinal longitudinal
        • builtin_interfaces::msg::Time stamp
        • builtin_interfaces::msg::Duration duration
        • builtin_interfaces::msg::Duration time_step
        • float[] speeds
        • float[] accelerations
        • float[] jerks
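For illustration, a vehicle interface could subscribe to this command and read the lateral fields as sketched below. The topic name /control/command/control_cmd is an assumption; only the lateral fields listed above are accessed here.

```python
import rclpy
from rclpy.node import Node
from autoware_control_msgs.msg import Control


class ControlCommandListener(Node):
    def __init__(self):
        super().__init__("control_command_listener")
        # Topic name is assumed for this sketch.
        self.create_subscription(Control, "/control/command/control_cmd", self.on_control, 1)

    def on_control(self, msg: Control) -> None:
        # Lateral command: target steering tire angle [rad] and its rotation rate.
        angle = msg.lateral.steering_tire_angle
        rate = msg.lateral.steering_tire_rotation_rate
        self.get_logger().debug(f"steering_tire_angle={angle:.3f} rate={rate:.3f}")


def main():
    rclpy.init()
    rclpy.spin(ControlCommandListener())


if __name__ == "__main__":
    main()
```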
    "},{"location":"design/autoware-interfaces/components/localization/","title":"Localization","text":""},{"location":"design/autoware-interfaces/components/localization/#localization","title":"Localization","text":""},{"location":"design/autoware-interfaces/components/localization/#inputs","title":"Inputs","text":""},{"location":"design/autoware-interfaces/components/localization/#pointcloud-map","title":"Pointcloud Map","text":"

Environment map created from point cloud data, published by the map server.

    • sensor_msgs/msg/PointCloud2

A 3D point cloud map is used for LiDAR-based localization in Autoware.

    "},{"location":"design/autoware-interfaces/components/localization/#manual-initial-pose","title":"Manual Initial Pose","text":"

    Start pose of ego, published by the user interface.

    • geometry_msgs/msg/PoseWithCovarianceStamped
      • std_msgs/msg/Header header
      • geometry_msgs/msg/PoseWithCovariance pose
        • geometry_msgs/msg/Pose pose
          • geometry_msgs/msg/Point position
      • geometry_msgs/msg/Quaternion orientation
        • double[36] covariance
    "},{"location":"design/autoware-interfaces/components/localization/#3d-lidar-scanning","title":"3D-LiDAR Scanning","text":"

    LiDAR scanning for NDT matching, published by the LiDAR sensor.

    • sensor_msgs/msg/PointCloud2

    The raw 3D-LiDAR data needs to be processed by the point cloud pre-processing modules before being used for localization.

    "},{"location":"design/autoware-interfaces/components/localization/#automatic-initial-pose","title":"Automatic Initial pose","text":"

Start pose of ego, calculated from INS (inertial navigation system) sensing data.

    • geometry_msgs/msg/PoseWithCovarianceStamped
      • std_msgs/msg/Header header
      • geometry_msgs/msg/PoseWithCovariance pose
        • geometry_msgs/msg/Pose pose
          • geometry_msgs/msg/Point position
      • geometry_msgs/msg/Quaternion orientation
        • double[36] covariance

    When the initial pose is not set manually, the message can be used for automatic pose initialization.

Current geographic coordinates of the ego, published by the GNSS sensor.

    • sensor_msgs/msg/NavSatFix
      • std_msgs/msg/Header header
      • sensor_msgs/msg/NavSatStatus status
      • double latitude
      • double longitude
      • double altitude
      • double[9] position_covariance
  • uint8 position_covariance_type

    Current orientation of the ego, published by the GNSS-INS.

    • autoware_sensing_msgs/msg/GnssInsOrientationStamped
      • std_msgs/Header header
      • autoware_sensing_msgs/msg/GnssInsOrientation orientation
        • geometry_msgs/Quaternion orientation
        • float32 rmse_rotation_x
        • float32 rmse_rotation_y
        • float32 rmse_rotation_z
    "},{"location":"design/autoware-interfaces/components/localization/#imu-data","title":"IMU Data","text":"

    Current orientation, angular velocity and linear acceleration of ego, calculated from IMU sensing data.

    • sensor_msgs/msg/Imu
      • std_msgs/msg/Header header
      • geometry_msgs/msg/Quaternion orientation
      • double[9] orientation_covariance
      • geometry_msgs/msg/Vector3 angular_velocity
      • double[9] angular_velocity_covariance
      • geometry_msgs/msg/Vector3 linear_acceleration
      • double[9] linear_acceleration_covariance
    "},{"location":"design/autoware-interfaces/components/localization/#vehicle-velocity-status","title":"Vehicle Velocity Status","text":"

    Current velocity of the ego vehicle, published by the vehicle interface.

    • autoware_vehicle_msgs/msg/VelocityReport
  • std_msgs/msg/Header header
  • float longitudinal_velocity
  • float lateral_velocity
  • float heading_rate

Before the velocity is input to the localization interface, the autoware_vehicle_velocity_converter module converts the message type from autoware_vehicle_msgs/msg/VelocityReport to geometry_msgs/msg/TwistWithCovarianceStamped, as sketched below.
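The conversion itself is straightforward; the following is a minimal sketch (the covariance value is a placeholder, the real converter node takes it from parameters):

```python
from autoware_vehicle_msgs.msg import VelocityReport
from geometry_msgs.msg import TwistWithCovarianceStamped


def velocity_report_to_twist(report: VelocityReport) -> TwistWithCovarianceStamped:
    """Wrap a VelocityReport into the twist message expected by localization."""
    twist = TwistWithCovarianceStamped()
    twist.header = report.header
    twist.twist.twist.linear.x = float(report.longitudinal_velocity)
    twist.twist.twist.linear.y = float(report.lateral_velocity)
    twist.twist.twist.angular.z = float(report.heading_rate)
    twist.twist.covariance[0] = 0.04  # placeholder variance for the longitudinal velocity
    return twist
```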

    "},{"location":"design/autoware-interfaces/components/localization/#outputs","title":"Outputs","text":""},{"location":"design/autoware-interfaces/components/localization/#vehicle-pose","title":"Vehicle pose","text":"

    Current pose of ego, calculated from localization interface.

    • geometry_msgs/msg/PoseWithCovarianceStamped
      • std_msgs/msg/Header header
  • geometry_msgs/msg/PoseWithCovariance pose
        • geometry_msgs/msg/Pose pose
          • geometry_msgs/msg/Point position
          • geometry_msgs/msg/Quaternion orientation
        • double[36] covariance
    "},{"location":"design/autoware-interfaces/components/localization/#vehicle-velocity","title":"Vehicle velocity","text":"

    Current velocity of ego, calculated from localization interface.

    • geometry_msgs/msg/TwistWithCovarianceStamped
      • std_msgs/msg/Header header
  • geometry_msgs/msg/TwistWithCovariance twist
        • geometry_msgs/msg/Twist twist
          • geometry_msgs/msg/Vector3 linear
          • geometry_msgs/msg/Vector3 angular
        • double[36] covariance
    "},{"location":"design/autoware-interfaces/components/localization/#vehicle-acceleration","title":"Vehicle acceleration","text":"

    Current acceleration of ego, calculated from localization interface.

    • geometry_msgs/msg/AccelWithCovarianceStamped
      • std_msgs/msg/Header header
  • geometry_msgs/msg/AccelWithCovariance accel
        • geometry_msgs/msg/Accel accel
          • geometry_msgs/msg/Vector3 linear
          • geometry_msgs/msg/Vector3 angular
        • double[36] covariance
    "},{"location":"design/autoware-interfaces/components/localization/#vehicle-kinematic-state","title":"Vehicle kinematic state","text":"

    Current pose, velocity and acceleration of ego, calculated from localization interface.

Note: the kinematic state contains pose, velocity, and acceleration. In the future, the separate pose, velocity, and acceleration topics will no longer be used as localization outputs.

    • autoware_msgs/autoware_localization_msgs/msg/KinematicState
      • std_msgs/msg/Header header
      • string child_frame_id
      • geometry_msgs/PoseWithCovariance pose_with_covariance
      • geometry_msgs/TwistWithCovariance twist_with_covariance
      • geometry_msgs/AccelWithCovariance accel_with_covariance

    The message will be subscribed by the planning and control module.

    "},{"location":"design/autoware-interfaces/components/localization/#localization-accuracy","title":"Localization Accuracy","text":"

Diagnostic information that indicates whether the localization module is working properly.

    TBD.

    "},{"location":"design/autoware-interfaces/components/map/","title":"Map","text":""},{"location":"design/autoware-interfaces/components/map/#map","title":"Map","text":""},{"location":"design/autoware-interfaces/components/map/#overview","title":"Overview","text":"

    Autoware relies on high-definition point cloud maps and vector maps of the driving environment to perform various tasks. Before launching Autoware, you need to load the pre-created map files.

    "},{"location":"design/autoware-interfaces/components/map/#inputs","title":"Inputs","text":"
    • Point cloud maps (.pcd)
    • Lanelet2 maps (.osm)

    Refer to Creating maps on how to create maps.

    "},{"location":"design/autoware-interfaces/components/map/#outputs","title":"Outputs","text":""},{"location":"design/autoware-interfaces/components/map/#point-cloud-map","title":"Point cloud map","text":"

    It loads point cloud files and publishes the maps to the other Autoware nodes in various configurations. Currently, it supports the following types:

    • Raw point cloud map (sensor_msgs/msg/PointCloud2)
    • Downsampled point cloud map (sensor_msgs/msg/PointCloud2)
    • Partial point cloud map loading via ROS service (autoware_map_msgs/srv/GetPartialPointCloudMap)
    • Differential point cloud map loading via ROS service (autoware_map_msgs/srv/GetDifferentialPointCloudMap)
    "},{"location":"design/autoware-interfaces/components/map/#lanelet2-map","title":"Lanelet2 map","text":"

It loads a Lanelet2 file and publishes the map data as an autoware_map_msgs/msg/LaneletMapBin message. The lat/lon coordinates are projected onto MGRS coordinates.

    • autoware_map_msgs/msg/LaneletMapBin
      • std_msgs/Header header
      • string version_map_format
      • string version_map
      • string name_map
      • uint8[] data
    "},{"location":"design/autoware-interfaces/components/map/#lanelet2-map-visualization","title":"Lanelet2 map visualization","text":"

    Visualize autoware_map_msgs/msg/LaneletMapBin messages in Rviz.

    • visualization_msgs/msg/MarkerArray
    "},{"location":"design/autoware-interfaces/components/perception-interface/","title":"Perception","text":""},{"location":"design/autoware-interfaces/components/perception-interface/#perception","title":"Perception","text":"
    graph TD\n    cmp_sen(\"Sensing\"):::cls_sen\n    cmp_loc(\"Localization\"):::cls_loc\n    cmp_per(\"Perception\"):::cls_per\n    cmp_plan(\"Planning\"):::cls_plan\n\n    msg_img(\"<font size=2><b>Camera Image</b></font size>\n    <font size=1>sensor_msgs/Image</font size>\"):::cls_sen\n\n    msg_ldr(\"<font size=2><b>Lidar Point Cloud</b></font size>\n    <font size=1>sensor_msgs/PointCloud2</font size>\"):::cls_sen\n\n    msg_lanenet(\"<font size=2><b>Lanelet2 Map</b></font size>\n    <font size=1>autoware_map_msgs/LaneletMapBin</font size>\"):::cls_loc\n\n    msg_vks(\"<font size=2><b>Vehicle Kinematic State</b></font size>\n    <font size=1>nav_msgs/Odometry</font size>\"):::cls_loc\n\n    msg_obj(\"<font size=2><b>3D Object Predictions </b></font size>\n    <font size=1>autoware_perception_msgs/PredictedObjects</font size>\"):::cls_per\n\n    msg_tl(\"<font size=2><b>Traffic Light Response </b></font size>\n    <font size=1>autoware_perception_msgs/TrafficSignalArray</font size>\"):::cls_per\n\n    msg_tq(\"<font size=2><b>Traffic Light Query </b></font size>\n    <font size=1>TBD</font size>\"):::cls_plan\n\n\n    cmp_sen --> msg_img --> cmp_per\n    cmp_sen --> msg_ldr --> cmp_per\n    cmp_per --> msg_obj --> cmp_plan\n    cmp_per --> msg_tl --> cmp_plan\n    cmp_plan --> msg_tq -->cmp_per\n\n    cmp_loc --> msg_vks --> cmp_per\n    cmp_loc --> msg_lanenet --> cmp_per\n\nclassDef cmp_sen fill:#F8CECC,stroke:#999,stroke-width:1px;\nclassDef cls_loc fill:#D5E8D4,stroke:#999,stroke-width:1px;\nclassDef cls_per fill:#FFF2CC,stroke:#999,stroke-width:1px;\nclassDef cls_plan fill:#5AB8FF,stroke:#999,stroke-width:1px;
    "},{"location":"design/autoware-interfaces/components/perception-interface/#inputs","title":"Inputs","text":""},{"location":"design/autoware-interfaces/components/perception-interface/#pointcloud","title":"PointCloud","text":"

    PointCloud data published by Lidar.

    • sensor_msgs/msg/PointCloud2
    "},{"location":"design/autoware-interfaces/components/perception-interface/#image","title":"Image","text":"

    Image frame captured by camera.

    • sensor_msgs/msg/Image
    "},{"location":"design/autoware-interfaces/components/perception-interface/#vehicle-kinematic-state","title":"Vehicle kinematic state","text":"

Current position of ego, used in traffic signal recognition. See outputs of Localization.

    "},{"location":"design/autoware-interfaces/components/perception-interface/#lanelet2-map","title":"Lanelet2 Map","text":"

    Map of the environment. See the outputs of Map.

    "},{"location":"design/autoware-interfaces/components/perception-interface/#output","title":"Output","text":""},{"location":"design/autoware-interfaces/components/perception-interface/#3d-object-predictions","title":"3D Object Predictions","text":"

    3D objects detected, tracked, and predicted by sensor fusion. A minimal consumer sketch follows the field listing below.

    • autoware_perception_msgs/msg/PredictedObjects
      • std_msgs/Header header
      • sequence<autoware_perception_msgs::msg::PredictedObject> objects
        • unique_identifier_msgs::msg::UUID uuid
        • float existence_probability
        • sequence<autoware_perception_msgs::msg::ObjectClassification> classification
          • uint8 classification
          • float probability
        • autoware_perception_msgs::msg::PredictedObjectKinematics kinematics
          • geometry_msgs::msg::PoseWithCovariance initial_pose
          • geometry_msgs::msg::TwistWithCovariance initial_twist
          • geometry_msgs::msg::AccelWithCovariance initial_acceleration
        • sequence<autoware_perception_msgs::msg::PredictedPath, 10> predicted_paths
          • sequence<geometry_msgs::msg::Pose, 100> path
          • builtin_interfaces::msg::Duration time_step
          • float confidence
        • sequence<autoware_perception_msgs::msg::Shape, 5> shape
          • geometry_msgs::msg::Polygon polygon
          • float height
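    As a usage illustration, the sketch below subscribes to the predicted objects topic listed later in this document (/perception/object_recognition/objects) and prints a short summary per object. Field names follow the listing above and may differ slightly between message versions:

    import rclpy
    from rclpy.node import Node
    from autoware_perception_msgs.msg import PredictedObjects  # assumes the message package is available

    class PredictedObjectsListener(Node):
        def __init__(self):
            super().__init__("predicted_objects_listener")
            self.create_subscription(
                PredictedObjects, "/perception/object_recognition/objects", self.on_objects, 10
            )

        def on_objects(self, msg: PredictedObjects):
            for obj in msg.objects:
                # Field names follow the listing above; adjust them if your message version differs.
                self.get_logger().info(
                    f"existence probability {obj.existence_probability:.2f}, "
                    f"{len(obj.kinematics.predicted_paths)} predicted path(s)"
                )

    def main():
        rclpy.init()
        rclpy.spin(PredictedObjectsListener())

    if __name__ == "__main__":
        main()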
    "},{"location":"design/autoware-interfaces/components/perception-interface/#traffic-signals","title":"Traffic Signals","text":"

    Traffic signals recognized by the object detection model. A small decoding sketch follows the constant listing below.

    • autoware_perception_msgs::msg::TrafficSignalArray
      • autoware_perception_msgs::msg::TrafficSignal signals
        • autoware_perception_msgs::msg::TrafficSignalElement elements
          • uint8 UNKNOWN = 0
          • uint8 RED = 1
          • uint8 AMBER = 2
          • uint8 GREEN = 3
          • uint8 WHITE = 4
          • uint8 CIRCLE = 1
          • uint8 LEFT_ARROW = 2
          • uint8 RIGHT_ARROW = 3
          • uint8 UP_ARROW = 4
          • uint8 UP_LEFT_ARROW = 5
          • uint8 UP_RIGHT_ARROW = 6
          • uint8 DOWN_ARROW = 7
          • uint8 DOWN_LEFT_ARROW = 8
          • uint8 DOWN_RIGHT_ARROW = 9
          • uint8 CROSS = 10
          • uint8 SOLID_OFF = 1
          • uint8 SOLID_ON = 2
          • uint8 FLASHING = 3
          • uint8 color
          • uint8 shape
          • uint8 status
          • float32 confidence
        • int64 traffic_signal_id
      • builtin_interfaces::msg::Time stamp
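    A small decoding sketch, assuming the TrafficSignalElement constants listed above are available from autoware_perception_msgs (note that newer Autoware versions rename these messages to TrafficLightGroupArray / TrafficLightElement, as used elsewhere in this document):

    from autoware_perception_msgs.msg import TrafficSignalElement  # assumes this message definition exists in your build

    COLOR_NAMES = {
        TrafficSignalElement.UNKNOWN: "unknown",
        TrafficSignalElement.RED: "red",
        TrafficSignalElement.AMBER: "amber",
        TrafficSignalElement.GREEN: "green",
        TrafficSignalElement.WHITE: "white",
    }

    def describe_element(element: TrafficSignalElement) -> str:
        # Map the numeric color code to a readable name; shape and status are kept as raw codes here.
        color = COLOR_NAMES.get(element.color, f"color code {element.color}")
        return f"{color} (shape {element.shape}, status {element.status}, confidence {element.confidence:.2f})"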
    "},{"location":"design/autoware-interfaces/components/perception/","title":"Perception","text":""},{"location":"design/autoware-interfaces/components/perception/#perception","title":"Perception","text":"

    This page provides specific specifications about the Interface of the Perception Component. Please refer to the perception architecture reference implementation design for concepts and data flow.

    "},{"location":"design/autoware-interfaces/components/perception/#input","title":"Input","text":""},{"location":"design/autoware-interfaces/components/perception/#from-map-component","title":"From Map Component","text":"Name Topic / Service Type Description Vector Map /map/vector_map autoware_map_msgs/msg/LaneletMapBin HD Map including the information about lanes Point Cloud Map /service/get_differential_pcd_map autoware_map_msgs/srv/GetDifferentialPointCloudMap Point Cloud Map

    Notes:

    • Point Cloud Map
      • The input can be either a topic or a service, but we highly recommend using the service, since this interface enables processing without being constrained by map file size limits. A minimal client sketch follows below.
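    For illustration, a minimal rclpy client sketch for the differential map service is shown below. The request fields themselves (queried area, already-cached map cell IDs) are not spelled out here and should be filled in according to the GetDifferentialPointCloudMap definition in autoware_map_msgs:

    import rclpy
    from autoware_map_msgs.srv import GetDifferentialPointCloudMap  # assumes autoware_map_msgs is available

    def request_differential_map():
        rclpy.init()
        node = rclpy.create_node("pcd_map_client")
        client = node.create_client(GetDifferentialPointCloudMap, "/service/get_differential_pcd_map")
        client.wait_for_service()
        request = GetDifferentialPointCloudMap.Request()
        # Populate the request (queried area and already-cached cell IDs) according to the service definition.
        future = client.call_async(request)
        rclpy.spin_until_future_complete(node, future)
        node.destroy_node()
        rclpy.shutdown()
        return future.result()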
    "},{"location":"design/autoware-interfaces/components/perception/#from-sensing-component","title":"From Sensing Component","text":"Name Topic Type Description Camera Image /sensing/camera/camera*/image_rect_color sensor_msgs/Image Camera image data, processed with Lens Distortion Correction (LDC) Camera Image /sensing/camera/camera*/image_raw sensor_msgs/Image Camera image data, not processed with Lens Distortion Correction (LDC) Point Cloud /sensing/lidar/concatenated/pointcloud sensor_msgs/PointCloud2 Concatenated point cloud from multiple LiDAR sources Radar Object /sensing/radar/detected_objects autoware_perception_msgs/msg/DetectedObject Radar objects"},{"location":"design/autoware-interfaces/components/perception/#from-localization-component","title":"From Localization Component","text":"Name Topic Type Description Vehicle Odometry /localization/kinematic_state nav_msgs/msg/Odometry Ego vehicle odometry topic"},{"location":"design/autoware-interfaces/components/perception/#from-api","title":"From API","text":"Name Topic Type Description External Traffic Signals /external/traffic_signals autoware_perception_msgs::msg::TrafficSignalArray The traffic signals from an external system"},{"location":"design/autoware-interfaces/components/perception/#output","title":"Output","text":""},{"location":"design/autoware-interfaces/components/perception/#to-planning","title":"To Planning","text":"

    Please refer to the perception component design for detailed definitions of each output.

    Name Topic Type Description Dynamic Objects /perception/object_recognition/objects autoware_perception_msgs/msg/PredictedObjects Set of dynamic objects with information such as an object class and a shape of the objects. These objects did not exist when the map was generated and are not contained within the map. Obstacles /perception/obstacle_segmentation/pointcloud sensor_msgs/PointCloud2 Obstacles, including both dynamic objects and static obstacles, that require the ego vehicle to either steer clear of them or come to a stop in front of them. Occupancy Grid Map /perception/occupancy_grid_map/map nav_msgs/msg/OccupancyGrid The map with information about the presence of obstacles and blind spots Traffic Signal /perception/traffic_light_recognition/traffic_signals autoware_perception_msgs::msg::TrafficLightGroupArray The traffic signal information such as the color (green, yellow, red) and the arrow (right, left, straight)"},{"location":"design/autoware-interfaces/components/planning/","title":"Planning","text":""},{"location":"design/autoware-interfaces/components/planning/#planning","title":"Planning","text":"

    This page provides specific specifications about the Interface of the Planning Component. Please refer to the planning architecture design document for high-level concepts and data flow.

    TODO: The detailed definitions (meanings of elements included in each topic) are not described yet, need to be updated.

    "},{"location":"design/autoware-interfaces/components/planning/#input","title":"Input","text":""},{"location":"design/autoware-interfaces/components/planning/#from-map-component","title":"From Map Component","text":"Name Topic Type Description Vector Map /map/vector_map autoware_map_msgs/msg/LaneletMapBin Map of the environment where the planning takes place."},{"location":"design/autoware-interfaces/components/planning/#from-localization-component","title":"From Localization Component","text":"Name Topic Type Description Vehicle Kinematic State /localization/kinematic_state nav_msgs/msg/Odometry Current position, orientation and velocity of ego. Vehicle Acceleration /localization/acceleration geometry_msgs/msg/AccelWithCovarianceStamped Current acceleration of ego.

    TODO: acceleration information should be merged into the kinematic state.

    "},{"location":"design/autoware-interfaces/components/planning/#from-perception-component","title":"From Perception Component","text":"Name Topic Type Description Objects /perception/object_recognition/objects autoware_perception_msgs/msg/PredictedObjects Set of perceived objects around ego that need to be avoided or followed when planning a trajectory. This contains semantic information such as an object class (e.g., vehicle, pedestrian, etc.) or a shape of the objects. Obstacles /perception/obstacle_segmentation/pointcloud sensor_msgs/msg/PointCloud2 Set of perceived obstacles around ego that need to be avoided or followed when planning a trajectory. This only contains primitive information about the obstacles, with no shape or velocity information. Occupancy Grid Map /perception/occupancy_grid_map/map nav_msgs/msg/OccupancyGrid Contains the presence of obstacles and blind spot information (represented as UNKNOWN). Traffic Signal /perception/traffic_light_recognition/traffic_signals autoware_perception_msgs::msg::TrafficLightGroupArray Contains the traffic signal information such as the color (green, yellow, red) and the arrow (right, left, straight).

    TODO: The type of the Obstacles information should not depend on the specific sensor message type (now PointCloud). It needs to be fixed.

    "},{"location":"design/autoware-interfaces/components/planning/#from-api","title":"From API","text":"Name Topic Type Description Max Velocity /planning/scenario_planning/max_velocity_default autoware_adapi_v1_msgs/srv/SetRoutePoints Indicates the maximum value of the vehicle speed plan. Operation Mode /system/operation_mode/state autoware_adapi_v1_msgs/msg/OperationModeState Indicates the current operation mode (automatic/manual, etc.). Route Set /planning/mission_planning/set_route autoware_adapi_v1_msgs/srv/SetRoute Indicates to set the route when the vehicle is stopped. Route Points Set /planning/mission_planning/set_route_points autoware_adapi_v1_msgs/srv/SetRoutePoints Indicates to set the route with points when the vehicle is stopped. Route Change /planning/mission_planning/change_route autoware_adapi_v1_msgs/srv/SetRoute Indicates to change the route when the vehicle is moving. Route Points Change /planning/mission_planning/change_route_points autoware_adapi_v1_msgs/srv/SetRoutePoints Indicates to change the route with points when the vehicle is moving. Route Clear /planning/mission_planning/clear_route autoware_adapi_v1_msgs/srv/ClearRoute Indicates to clear the route information. MRM Route Set Points /planning/mission_planning/mission_planner/srv/set_mrm_route autoware_adapi_v1_msgs/srv/SetRoutePoints Indicates to set the emergency route. MRM Route Clear /planning/mission_planning/mission_planner/srv/clear_mrm_route autoware_adapi_v1_msgs/srv/SetRoutePoints Indicates to clear the emergency route."},{"location":"design/autoware-interfaces/components/planning/#output","title":"Output","text":""},{"location":"design/autoware-interfaces/components/planning/#to-control","title":"To Control","text":"Name Topic Type Description Trajectory /planning/trajectory autoware_planning_msgs/msg/Trajectory A sequence of position, velocity, and acceleration points to be followed by the controller. Turn Indicator /planning/turn_indicators_cmd autoware_vehicle_msgs/msg/TurnIndicatorsCommand Turn indicator signal to be followed by the vehicle. Hazard Light /planning/hazard_lights_cmd autoware_vehicle_msgs/msg/HazardLightsCommand Hazard light signal to be followed by the vehicle."},{"location":"design/autoware-interfaces/components/planning/#to-system","title":"To System","text":"Name Topic Type Description Diagnostics /planning/hazard_lights_cmd diagnostic_msgs/msg/DiagnosticArray Diagnostic status of the Planning component reported to the System component."},{"location":"design/autoware-interfaces/components/planning/#to-api","title":"To API","text":"Name Topic Type Description Path Candidate /planning/path_candidate/* autoware_planning_msgs/msg/Path The path Autoware is about to take. Users can interrupt the operation based on the path candidate information. Steering Factor /planning/steering_factor/* autoware_adapi_v1_msgs/msg/SteeringFactorArray Information about the steering maneuvers performed by Autoware (e.g., steering to the right for a right turn, etc.) Velocity Factor /planning/velocity_factors/* autoware_adapi_v1_msgs/msg/VelocityFactorArray Information about the velocity maneuvers performed by Autoware (e.g., stop for an obstacle, etc.)"},{"location":"design/autoware-interfaces/components/planning/#planning-internal-interface","title":"Planning internal interface","text":"

    This section explains the communication between the different planning modules shown in the Planning Architecture Design.

    "},{"location":"design/autoware-interfaces/components/planning/#from-mission-planning-to-scenario-planning","title":"From Mission Planning to Scenario Planning","text":"Name Topic Type Description Route /planning/mission_planning/route autoware_planning_msgs/msg/LaneletRoute A sequence of lane IDs on a Lanelet map, from the starting point to the destination."},{"location":"design/autoware-interfaces/components/planning/#from-behavior-planning-to-motion-planning","title":"From Behavior Planning to Motion Planning","text":"Name Topic Type Description Path /planning/scenario_planning/lane_driving/behavior_planning/path autoware_planning_msgs/msg/Path A sequence of approximate vehicle positions for driving, along with information on the maximum speed and the drivable areas. Modules receiving this message are expected to make changes to the path within the constraints of the drivable areas and the maximum speed, generating the desired final trajectory."},{"location":"design/autoware-interfaces/components/planning/#from-scenario-planning-to-validation","title":"From Scenario Planning to Validation","text":"Name Topic Type Description Trajectory /planning/scenario_planning/trajectory autoware_planning_msgs/msg/Trajectory A sequence of precise vehicle positions, speeds, and accelerations required for driving. It is expected that the vehicle will follow this trajectory."},{"location":"design/autoware-interfaces/components/sensing/","title":"Sensing","text":""},{"location":"design/autoware-interfaces/components/sensing/#sensing","title":"Sensing","text":"
    graph TD\n    cmp_drv(\"Drivers\"):::cls_drv\n    cmp_loc(\"Localization\"):::cls_loc\n    cmp_per(\"Perception\"):::cls_per\n    cmp_sen(\"Preprocessors\"):::cls_sen\n    msg_ult(\"<font size=2><b>Ultrasonics</b></font size>\n    <font size=1>sensor_msgs/Range</font size>\"):::cls_drv\n    msg_img(\"<font size=2><b>Camera Image</b></font size>\n    <font size=1>sensor_msgs/Image</font size>\"):::cls_drv\n    msg_ldr(\"<font size=2><b>Lidar Point Cloud</b></font size>\n    <font size=1>sensor_msgs/PointCloud2</font size>\"):::cls_drv\n    msg_rdr_t(\"<font size=2><b>Radar Tracks</b></font size>\n    <font size=1>radar_msgs/RadarTracks</font size>\"):::cls_drv\n    msg_rdr_s(\"<font size=2><b>Radar Scan</b></font size>\n    <font size=1>radar_msgs/RadarScan</font size>\"):::cls_drv\n    msg_gnss(\"<font size=2><b>GNSS-INS Position</b></font size>\n    <font size=1>sensor_msgs/NavSatFix</font size>\"):::cls_drv\n    msg_gnssori(\"<font size=2><b>GNSS-INS Orientation</b></font size>\n    <font size=1>autoware_sensing_msgs/GnssInsOrientationStamped</font size>\"):::cls_drv\n    msg_gnssvel(\"<font size=2><b>GNSS Velocity</b></font size>\n    <font size=1>geometry_msgs/TwistWithCovarianceStamped</font size>\"):::cls_drv\n    msg_gnssacc(\"<font size=2><b>GNSS Acceleration</b></font size>\n    <font size=1>geometry_msgs/AccelWithCovarianceStamped</font size>\"):::cls_drv\n    msg_ult_sen(\"<font size=2><b>Ultrasonics</b></font size>\n    <font size=1>sensor_msgs/Range</font size>\"):::cls_sen\n    msg_img_sen(\"<font size=2><b>Camera Image</b></font size>\n    <font size=1>sensor_msgs/Image</font size>\"):::cls_sen\n    msg_pc_combined_rdr(\"<font size=2><b>Combined Radar Tracks</b></font size>\n    <font size=1>radar_msgs/RadarTracks</font size>\"):::cls_sen\n    msg_pc_rdr(\"<font size=2><b>Radar Pointcloud</b></font size>\n    <font size=1>radar_msgs/RadarScan</font size>\"):::cls_sen\n    msg_pc_ldr(\"<font size=2><b>Lidar Point Cloud</b></font size>\n    <font size=1>sensor_msgs/PointCloud2</font size>\"):::cls_sen\n    msg_pose_gnss(\"<font size=2><b>GNSS-INS Pose</b></font size>\n    <font size=1>geometry_msgs/PoseWithCovarianceStamped</font size>\"):::cls_sen\n    msg_gnssori_sen(\"<font size=2><b>GNSS-INS Orientation</b></font size>\n    <font size=1>sensor_msgs/Imu</font size>\"):::cls_sen\n    msg_gnssvel_sen(\"<font size=2><b>GNSS Velocity</b></font size>\n    <font size=1>geometry_msgs/TwistWithCovarianceStamped</font size>\"):::cls_sen\n    msg_gnssacc_sen(\"<font size=2><b>GNSS-INS Acceleration</b></font size>\n    <font size=1>geometry_msgs/AccelWithCovarianceStamped</font size>\"):::cls_sen\n\n    cmp_drv --> msg_ult --> cmp_sen\n    cmp_drv --> msg_img --> cmp_sen\n    cmp_drv --> msg_rdr_t --> cmp_sen\n    cmp_drv --> msg_rdr_s --> cmp_sen\n    cmp_drv --> msg_ldr --> cmp_sen\n    cmp_drv --> msg_gnss --> cmp_sen\n    cmp_drv --> msg_gnssori --> cmp_sen\n    cmp_drv --> msg_gnssvel --> cmp_sen\n    cmp_drv --> msg_gnssacc --> cmp_sen\n\n    cmp_sen --> msg_ult_sen\n    cmp_sen --> msg_img_sen\n    cmp_sen --> msg_gnssori_sen\n    cmp_sen --> msg_gnssvel_sen\n    cmp_sen --> msg_pc_combined_rdr\n    cmp_sen --> msg_pc_rdr\n    cmp_sen --> msg_pc_ldr\n    cmp_sen --> msg_pose_gnss\n    cmp_sen --> msg_gnssacc_sen\n    msg_ult_sen --> cmp_per\n    msg_img_sen --> cmp_per\n    msg_pc_combined_rdr --> cmp_per\n    msg_pc_rdr --> cmp_per\n    msg_pc_ldr --> cmp_per\n    msg_pc_ldr --> cmp_loc\n    msg_pose_gnss --> cmp_loc\n    msg_gnssori_sen --> cmp_loc\n    msg_gnssvel_sen 
--> cmp_loc\n    msg_gnssacc_sen --> cmp_loc\nclassDef cls_drv fill:#F8CECC,stroke:#999,stroke-width:1px;\nclassDef cls_loc fill:#D5E8D4,stroke:#999,stroke-width:1px;\nclassDef cls_per fill:#FFF2CC,stroke:#999,stroke-width:1px;\nclassDef cls_sen fill:#FFE6CC,stroke:#999,stroke-width:1px;
    "},{"location":"design/autoware-interfaces/components/sensing/#inputs","title":"Inputs","text":"Name Topic Type Description Ultrasonics T.B.D sensor_msgs/Range Distance data from ultrasonic radar driver. Camera Image T.B.D sensor_msgs/Image Image data from camera driver. Radar Tracks T.B.D radar_msgs/RadarTracks Tracks from radar driver. Radar Scan T.B.D radar_msgs/RadarScan Scan from radar driver. Lidar Point Cloud /sensing/lidar/<lidar name>/pointcloud_raw sensor_msgs/PointCloud2 Pointcloud from lidar driver. GNSS-INS Position T.B.D sensor_msgs/NavSatFix Initial pose from GNSS driver. GNSS-INS Orientation T.B.D autoware_sensing_msgs/GnssInsOrientationStamped Initial orientation from GNSS driver. GNSS Velocity T.B.D geometry_msgs/TwistWithCovarianceStamped Initial velocity from GNSS driver. GNSS Acceleration T.B.D geometry_msgs/AccelWithCovarianceStamped Initial acceleration from GNSS driver."},{"location":"design/autoware-interfaces/components/sensing/#output","title":"Output","text":"Name Topic Type Description Ultrasonics T.B.D sensor_msgs/Range Distance data from ultrasonic radar. Used by the Perception. Camera Image T.B.D sensor_msgs/Image Image data from camera. Used by the Perception. Combined Radar Tracks T.B.D radar_msgs/RadarTracks Radar tracks from radar. Used by the Perception. Radar Point Cloud T.B.D radar_msgs/RadarScan Pointcloud from radar. Used by the Perception. Lidar Point Cloud /sensing/lidar/<lidar group>/pointcloud sensor_msgs/PointCloud2 Lidar pointcloud after preprocessing. Used by the Perception and Localization. <lidar group> is a unique name for identifying each LiDAR or the group name when multiple LiDARs are combined. Specifically, the concatenated point cloud of all LiDARs is assigned the <lidar group> name concatenated. GNSS-INS Pose T.B.D geometry_msgs/PoseWithCovarianceStamped Initial pose of the ego vehicle from GNSS. Used by the Localization. GNSS-INS Orientation T.B.D sensor_msgs/Imu Orientation info from GNSS. Used by the Localization. GNSS Velocity T.B.D geometry_msgs/TwistWithCovarianceStamped Velocity of the ego vehicle from GNSS. Used by the Localization. GNSS Acceleration T.B.D geometry_msgs/AccelWithCovarianceStamped Acceleration of the ego vehicle from GNSS. Used by the Localization."},{"location":"design/autoware-interfaces/components/vehicle-dimensions/","title":"Vehicle dimensions","text":""},{"location":"design/autoware-interfaces/components/vehicle-dimensions/#vehicle-dimensions","title":"Vehicle dimensions","text":""},{"location":"design/autoware-interfaces/components/vehicle-dimensions/#vehicle-axes-and-base_link","title":"Vehicle axes and base_link","text":"

    The base_link frame is used very frequently throughout the Autoware stack, and is a projection of the rear-axle center onto the ground surface.

    • Localization module outputs the map to base_link transformation (see the lookup sketch after this list).
    • Planning module plans the poses for where the base_link frame should be in the future.
    • Control module tries to fit base_link to incoming poses.
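    The sketch below, a minimal illustration rather than part of any Autoware package, looks up the map to base_link transform published by the Localization component using tf2:

    import rclpy
    from rclpy.node import Node
    from tf2_ros import Buffer, TransformListener

    class BaseLinkPoseReader(Node):
        def __init__(self):
            super().__init__("base_link_pose_reader")
            self.tf_buffer = Buffer()
            self.tf_listener = TransformListener(self.tf_buffer, self)
            self.create_timer(1.0, self.on_timer)

        def on_timer(self):
            try:
                # Latest available map -> base_link transform, i.e. the current pose of the rear-axle projection.
                tf = self.tf_buffer.lookup_transform("map", "base_link", rclpy.time.Time())
                t = tf.transform.translation
                self.get_logger().info(f"base_link in map frame: x={t.x:.2f}, y={t.y:.2f}, z={t.z:.2f}")
            except Exception as error:  # the transform may not be available yet
                self.get_logger().warn(str(error))

    def main():
        rclpy.init()
        rclpy.spin(BaseLinkPoseReader())

    if __name__ == "__main__":
        main()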
    "},{"location":"design/autoware-interfaces/components/vehicle-dimensions/#vehicle-dimensions_1","title":"Vehicle dimensions","text":""},{"location":"design/autoware-interfaces/components/vehicle-dimensions/#wheelbase","title":"wheelbase","text":"

    The distance between front and rear axles.

    "},{"location":"design/autoware-interfaces/components/vehicle-dimensions/#track_width","title":"track_width","text":"

    The distance between left and right wheels.

    "},{"location":"design/autoware-interfaces/components/vehicle-dimensions/#overhangs","title":"Overhangs","text":"

    Overhangs are part of the minimum safety box calculation.

    When measuring overhangs, side mirrors, protruding sensors and wheels should be taken into consideration.

    "},{"location":"design/autoware-interfaces/components/vehicle-dimensions/#left_overhang","title":"left_overhang","text":"

    The distance between the axis centers of the left wheels and the left-most point of the vehicle.

    "},{"location":"design/autoware-interfaces/components/vehicle-dimensions/#right_overhang","title":"right_overhang","text":"

    The distance between the axis centers of the right wheels and the right-most point of the vehicle.

    "},{"location":"design/autoware-interfaces/components/vehicle-dimensions/#front_overhang","title":"front_overhang","text":"

    The distance between the front axle and the foremost point of the vehicle.

    "},{"location":"design/autoware-interfaces/components/vehicle-dimensions/#rear_overhang","title":"rear_overhang","text":"

    The distance between the rear axle and the rear-most point of the vehicle.

    "},{"location":"design/autoware-interfaces/components/vehicle-dimensions/#vehicle_length","title":"vehicle_length","text":"

    Total length of the vehicle, calculated as front_overhang + wheelbase + rear_overhang.

    "},{"location":"design/autoware-interfaces/components/vehicle-dimensions/#vehicle_width","title":"vehicle_width","text":"

    Total width of the vehicle, calculated as left_overhang + track_width + right_overhang.
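    A small worked sketch of these two formulas, using illustrative values that do not correspond to any particular vehicle:

    # Illustrative dimension values only; replace them with your vehicle's measured parameters.
    wheelbase = 2.75        # [m] distance between front and rear axles
    front_overhang = 0.80   # [m]
    rear_overhang = 0.70    # [m]
    track_width = 1.60      # [m] distance between left and right wheels
    left_overhang = 0.10    # [m]
    right_overhang = 0.10   # [m]

    vehicle_length = front_overhang + wheelbase + rear_overhang   # 4.25 m
    vehicle_width = left_overhang + track_width + right_overhang  # 1.80 m
    print(f"vehicle_length = {vehicle_length:.2f} m, vehicle_width = {vehicle_width:.2f} m")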

    "},{"location":"design/autoware-interfaces/components/vehicle-dimensions/#wheel-parameters","title":"Wheel parameters","text":""},{"location":"design/autoware-interfaces/components/vehicle-dimensions/#wheel_width","title":"wheel_width","text":"

    The lateral width of a wheel tire, primarily used for dead reckoning.

    "},{"location":"design/autoware-interfaces/components/vehicle-dimensions/#wheel_radius","title":"wheel_radius","text":"

    The radius of the wheel, primarily used for dead reckoning.

    "},{"location":"design/autoware-interfaces/components/vehicle-dimensions/#polygon_footprint","title":"polygon_footprint","text":"

    The polygon defines the minimum collision area for the vehicle.

    The points should be ordered clockwise, with the origin on the base_link.

    "},{"location":"design/autoware-interfaces/components/vehicle-dimensions/#wheel-orientations","title":"Wheel orientations","text":"

    If the vehicle is going forward, a positive wheel angle will result in the vehicle turning left.

    Autoware assumes that the rear wheels do not turn about the z-axis.

    "},{"location":"design/autoware-interfaces/components/vehicle-dimensions/#notice","title":"Notice","text":"

    The vehicle used in the illustrations was created by xvlblo22 and is from https://www.turbosquid.com/3d-models/modular-sedan-3d-model-1590886.

    "},{"location":"design/autoware-interfaces/components/vehicle-interface/","title":"Vehicle Interface","text":""},{"location":"design/autoware-interfaces/components/vehicle-interface/#vehicle-interface","title":"Vehicle Interface","text":"

    This page describes the Vehicle Interface Component. Please refer to the Vehicle Interface design document for high-level concepts and data flow.

    The Vehicle Interface Component receives vehicle commands and publishes vehicle statuses. It communicates with the vehicle using a vehicle-specific protocol.

    The optional vehicle command adapter converts the generalized control command (target steering, steering rate, velocity, acceleration) into vehicle-specific control values (steering torque, wheel torque, voltage, pressure, accelerator pedal position, etc.).

    "},{"location":"design/autoware-interfaces/components/vehicle-interface/#communication-with-the-vehicle","title":"Communication with the vehicle","text":"

    The interface used to communicate with the vehicle varies between brands and models. For example, a vehicle-specific message protocol such as CAN (Controller Area Network) may be used together with a ROS 2 interface (e.g., pacmod). In addition, an Autoware-specific interface is often necessary (e.g., pacmod_interface).

    "},{"location":"design/autoware-interfaces/components/vehicle-interface/#vehicle-adapter","title":"Vehicle adapter","text":"

    Autoware's basic control command expresses the target motion of the vehicle in terms of speed, acceleration, steering angle, and steering rate. This may not be suitable for all vehicles, so we distinguish between two types of vehicles.

    • Type 1: vehicle that is directly controlled by a subset of speed, acceleration, steering angle, and steering rate.
    • Type 2: vehicle that uses custom commands (motor torque, voltage, pedal pressure, etc).

    For vehicles of type 2, a vehicle adapter is necessary to convert the Autoware control command into the vehicle-specific commands. For an example, see the raw_vehicle_cmd_converter, which converts the target speed and steering angle into mechanical accelerator, steering, and brake inputs.
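    As a toy sketch only (not the raw_vehicle_cmd_converter implementation), a type 2 adapter might map a target acceleration to an accelerator pedal ratio through a vehicle-specific calibration table; the breakpoints below are made-up values:

    import bisect

    # Hypothetical calibration table; a real adapter uses maps measured on the actual vehicle.
    ACCEL_BREAKPOINTS = [0.0, 0.5, 1.0, 1.5, 2.0]       # target acceleration [m/s^2]
    PEDAL_RATIOS      = [0.0, 0.15, 0.30, 0.50, 0.80]   # corresponding pedal position [0..1]

    def accel_to_pedal(target_accel: float) -> float:
        """Linearly interpolate the pedal ratio for a target acceleration."""
        if target_accel <= ACCEL_BREAKPOINTS[0]:
            return PEDAL_RATIOS[0]
        if target_accel >= ACCEL_BREAKPOINTS[-1]:
            return PEDAL_RATIOS[-1]
        i = bisect.bisect_right(ACCEL_BREAKPOINTS, target_accel) - 1
        x0, x1 = ACCEL_BREAKPOINTS[i], ACCEL_BREAKPOINTS[i + 1]
        y0, y1 = PEDAL_RATIOS[i], PEDAL_RATIOS[i + 1]
        return y0 + (y1 - y0) * (target_accel - x0) / (x1 - x0)

    print(accel_to_pedal(0.75))  # -> 0.225 with the table above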

    "},{"location":"design/autoware-interfaces/components/vehicle-interface/#inputs-from-autoware","title":"Inputs from Autoware","text":"Name Topic Type Description Control command /control/command/control_cmd autoware_control_msgs/msg/Control Target controls of the vehicle (steering angle, velocity, ...) Control mode command /control/control_mode_request autoware_vehicle_msgs/srv/ControlModeCommand Request to switch between manual and autonomous driving Gear command /control/command/gear_cmd autoware_vehicle_msgs/msg/GearCommand Target gear of the vehicle Hazard lights command /control/command/hazard_lights_cmd autoware_vehicle_msgs/msg/HazardLightsCommand Target values of the hazard lights Turn indicator command /control/command/turn_indicators_cmd autoware_vehicle_msgs/msg/TurnIndicatorsCommand Target values of the turn signals"},{"location":"design/autoware-interfaces/components/vehicle-interface/#outputs-to-autoware","title":"Outputs to Autoware","text":"Name Topic Type Optional ? Description Actuation status /vehicle/status/actuation_status tier4_vehicle_msgs/msg/ActuationStatusStamped Yes (vehicle with mechanical inputs) Current acceleration, brake, and steer values reported by the vehicle Control mode /vehicle/status/control_mode autoware_vehicle_msgs/msg/ControlModeReport Current control mode (manual, autonomous, ...) Door status /vehicle/status/door_status tier4_api_msgs/msg/DoorStatus Yes Current door status Gear report /vehicle/status/gear_status autoware_vehicle_msgs/msg/GearReport Current gear of the vehicle Hazard light status /vehicle/status/hazard_lights_status autoware_vehicle_msgs/msg/HazardLightsReport Current hazard lights status Steering status /vehicle/status/steering_status autoware_vehicle_msgs/msg/SteeringReport Current steering angle of the steering tire Turn indicators status /vehicle/status/turn_indicators_status autoware_vehicle_msgs/msg/TurnIndicatorsReport Current state of the left and right turn indicators Velocity status /vehicle/status/velocity_status autoware_vehicle_msgs/msg/VelocityReport Current velocities of the vehicle (longitudinal, lateral, heading rate)"},{"location":"design/configuration-management/","title":"Configuration management","text":""},{"location":"design/configuration-management/#configuration-management","title":"Configuration management","text":"

    Warning

    Under Construction

    "},{"location":"design/configuration-management/development-process/","title":"Development process","text":""},{"location":"design/configuration-management/development-process/#development-process","title":"Development process","text":"

    Warning

    Under Construction

    "},{"location":"design/configuration-management/release-process/","title":"Release process","text":""},{"location":"design/configuration-management/release-process/#release-process","title":"Release process","text":"

    Warning

    Under Construction

    "},{"location":"design/configuration-management/repository-structure/","title":"Repository structure","text":""},{"location":"design/configuration-management/repository-structure/#repository-structure","title":"Repository structure","text":"

    Warning

    Under Construction

    "},{"location":"how-to-guides/","title":"How-to guides","text":""},{"location":"how-to-guides/#how-to-guides","title":"How-to guides","text":""},{"location":"how-to-guides/#integrating-autoware","title":"Integrating Autoware","text":"
    • Overview
    "},{"location":"how-to-guides/#training-machine-learning-models","title":"Training Machine Learning Models","text":"
    • Training and Deploying Models
    "},{"location":"how-to-guides/#others","title":"Others","text":"
    • Debug Autoware
    • Running Autoware without CUDA
    • Fixing dependent package versions
    • Add a custom ROS message
    • Determining component dependencies
    • Advanced usage of colcon
    • Applying Clang-Tidy to ROS packages
    • Defining temporal performance metrics on components
    • An example procedure for adding and evaluating a new node
    • Lane Detection Methods

    TODO: Write the following contents.

    • Create an Autoware package
    • etc.
    "},{"location":"how-to-guides/integrating-autoware/overview/","title":"Overview","text":""},{"location":"how-to-guides/integrating-autoware/overview/#overview","title":"Overview","text":""},{"location":"how-to-guides/integrating-autoware/overview/#requirement-prepare-your-real-vehicle-hardware","title":"Requirement: prepare your real vehicle hardware","text":"

    Prerequisites for the vehicle:

    • An onboard computer that satisfies the Autoware installation prerequisites
    • The following devices attached
      • Drive-by-wire interface
      • LiDAR
      • Optional: Inertial measurement unit
      • Optional: Camera
      • Optional: GNSS
    "},{"location":"how-to-guides/integrating-autoware/overview/#1-creating-your-autoware-meta-repository","title":"1. Creating your Autoware meta-repository","text":"

    Create your Autoware meta-repository. One easy way is to fork autowarefoundation/autoware and clone it. For how to fork a repository, refer to GitHub Docs.

    git clone https://github.com/YOUR_NAME/autoware.git\n

    If you set up multiple types of vehicles, adding a suffix like \"autoware.vehicle_A\" or \"autoware.vehicle_B\" is recommended.

    "},{"location":"how-to-guides/integrating-autoware/overview/#2-creating-the-your-vehicle-and-sensor-description","title":"2. Creating your vehicle and sensor description","text":"

    Next, you need to create description packages that define the vehicle and sensor configuration of your vehicle.

    Create the following two packages:

    • YOUR_VEHICLE_launch (see here for example)
    • YOUR_SENSOR_KIT_launch (see here for example)

    Once created, you need to update the autoware.repos file of your cloned Autoware repository to refer to these two description packages.

    -  # sensor_kit\n-  sensor_kit/sample_sensor_kit_launch:\n-    type: git\n-    url: https://github.com/autowarefoundation/sample_sensor_kit_launch.git\n-    version: main\n-  # vehicle\n-  vehicle/sample_vehicle_launch:\n-    type: git\n-    url: https://github.com/autowarefoundation/sample_vehicle_launch.git\n-    version: main\n+  # sensor_kit\n+  sensor_kit/YOUR_SENSOR_KIT_launch:\n+    type: git\n+    url: https://github.com/YOUR_NAME/YOUR_SENSOR_KIT_launch.git\n+    version: main\n+  # vehicle\n+  vehicle/YOUR_VEHICLE_launch:\n+    type: git\n+    url: https://github.com/YOUR_NAME/YOUR_VEHICLE_launch.git\n+    version: main\n
    "},{"location":"how-to-guides/integrating-autoware/overview/#adapt-your_vehicle_launch-for-autoware-launching-system","title":"Adapt YOUR_VEHICLE_launch for autoware launching system","text":""},{"location":"how-to-guides/integrating-autoware/overview/#at-your_vehicle_description","title":"At YOUR_VEHICLE_description","text":"

    Define URDF and parameters in the vehicle description package (refer to the sample vehicle description package for an example).

    "},{"location":"how-to-guides/integrating-autoware/overview/#at-your_vehicle_launch","title":"At YOUR_VEHICLE_launch","text":"

    Create a launch file (refer to the sample vehicle launch package for example). If you have multiple vehicles with the same hardware setup, you can specify vehicle_id to distinguish them.

    "},{"location":"how-to-guides/integrating-autoware/overview/#adapt-your_sensor_kit_launch-for-autoware-launching-system","title":"Adapt YOUR_SENSOR_KIT_launch for autoware launching system","text":""},{"location":"how-to-guides/integrating-autoware/overview/#at-your_sensor_kit_description","title":"At YOUR_SENSOR_KIT_description","text":"

    Define URDF and extrinsic parameters for all the sensors here (refer to the sample sensor kit description package for example). Note that you need to calibrate extrinsic parameters for all the sensors beforehand.

    "},{"location":"how-to-guides/integrating-autoware/overview/#at-your_sensor_kit_launch","title":"At YOUR_SENSOR_KIT_launch","text":"

    Create launch/sensing.launch.xml that launches the interfaces of all the sensors on the vehicle. (refer to the sample sensor kit launch package for example).

    Note

    At this point, you are now able to run Autoware's Planning Simulator to do a basic test of your vehicle and sensing packages. To do so, you need to build and install Autoware using your cloned repository. Follow the steps for either Docker or source installation (starting from the dependency installation step) and then run the following command:

    ros2 launch autoware_launch planning_simulator.launch.xml vehicle_model:=YOUR_VEHICLE sensor_kit:=YOUR_SENSOR_KIT map_path:=/PATH/TO/YOUR/MAP\n
    "},{"location":"how-to-guides/integrating-autoware/overview/#3-create-a-vehicle_interface-package","title":"3. Create a vehicle_interface package","text":"

    You need to create an interface package for your vehicle. The package is expected to provide the following two functions.

    1. Receive command messages from vehicle_cmd_gate and drive the vehicle accordingly
    2. Send vehicle status information to Autoware

    You can find detailed information about the requirements of the vehicle_interface package in the Vehicle Interface design documentation. You can also refer to TIER IV's pacmod_interface repository as an example of a vehicle interface package.
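    A skeleton sketch of these two functions is shown below. It uses the command and status topics listed on the Vehicle Interface design page, leaves the vehicle-specific protocol out entirely, and assumes VelocityReport field names (header, longitudinal_velocity) that may differ in your message version:

    import rclpy
    from rclpy.node import Node
    from autoware_control_msgs.msg import Control          # command received from Autoware
    from autoware_vehicle_msgs.msg import VelocityReport   # status reported back to Autoware

    class MinimalVehicleInterface(Node):
        def __init__(self):
            super().__init__("minimal_vehicle_interface")
            self.create_subscription(Control, "/control/command/control_cmd", self.on_control_cmd, 1)
            self.status_pub = self.create_publisher(VelocityReport, "/vehicle/status/velocity_status", 1)

        def on_control_cmd(self, msg: Control):
            # 1. Translate the command into your vehicle-specific protocol (CAN frames, serial, ...) here.
            pass

        def publish_velocity_status(self, longitudinal_velocity: float):
            # 2. Report the measured vehicle state back to Autoware.
            report = VelocityReport()
            report.header.stamp = self.get_clock().now().to_msg()
            report.longitudinal_velocity = longitudinal_velocity
            self.status_pub.publish(report)

    def main():
        rclpy.init()
        rclpy.spin(MinimalVehicleInterface())

    if __name__ == "__main__":
        main()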

    "},{"location":"how-to-guides/integrating-autoware/overview/#4-create-maps","title":"4. Create maps","text":"

    You need both a pointcloud map and a vector map in order to use Autoware. For more information on map design, please click here.

    "},{"location":"how-to-guides/integrating-autoware/overview/#create-a-pointcloud-map","title":"Create a pointcloud map","text":"

    Use third-party tools such as a LiDAR-based SLAM (Simultaneous Localization And Mapping) package to create a pointcloud map in the .pcd format. For more information, please click here.

    "},{"location":"how-to-guides/integrating-autoware/overview/#create-vector-map","title":"Create vector map","text":"

    Use third-party tools such as TIER IV's Vector Map Builder to create a Lanelet2 format .osm file.

    "},{"location":"how-to-guides/integrating-autoware/overview/#5-launch-autoware","title":"5. Launch Autoware","text":"

    This section briefly explains how to run your vehicle with Autoware.

    "},{"location":"how-to-guides/integrating-autoware/overview/#install-autoware","title":"Install Autoware","text":"

    Follow the installation steps of Autoware.

    "},{"location":"how-to-guides/integrating-autoware/overview/#launch-autoware","title":"Launch Autoware","text":"

    Launch Autoware with the following command:

    ros2 launch autoware_launch autoware.launch.xml vehicle_model:=YOUR_VEHICLE sensor_kit:=YOUR_SENSOR_KIT map_path:=/PATH/TO/YOUR/MAP\n
    "},{"location":"how-to-guides/integrating-autoware/overview/#set-initial-pose","title":"Set initial pose","text":"

    If GNSS is available, Autoware automatically initializes the vehicle's pose.

    If not, you need to set the initial pose using the RViz GUI.

    1. Click the 2D Pose estimate button in the toolbar, or hit the P key
    2. In the 3D View pane, click and hold the left mouse button, and then drag to set the direction for the initial pose.
    "},{"location":"how-to-guides/integrating-autoware/overview/#set-goal-pose","title":"Set goal pose","text":"

    Set a goal pose for the ego vehicle.

    1. Click the 2D Nav Goal button in the toolbar, or hit the G key
    2. In the 3D View pane, click and hold the left mouse button, and then drag to set the direction for the goal pose. If successful, you will see the calculated planning path on RViz.
    "},{"location":"how-to-guides/integrating-autoware/overview/#engage","title":"Engage","text":"

    In your terminal, execute the following command.

    source ~/autoware.YOURS/install/setup.bash\nros2 topic pub /autoware.YOURS/engage autoware_auto_vehicle_msgs/msg/Engage \"engage: true\" -1\n

    You can also engage via RViz with \"AutowareStatePanel\". The panel can be found in Panels > Add New Panel > tier4_state_rviz_plugin > AutowareStatePanel.

    Now the vehicle should drive along the calculated path!

    "},{"location":"how-to-guides/integrating-autoware/overview/#6-tune-parameters-for-your-vehicle-environment","title":"6. Tune parameters for your vehicle & environment","text":"

    You may need to tune your parameters depending on the domain in which you will operate your vehicle.

    The maximum velocity is defined here; it is 15 km/h by default.

    "},{"location":"how-to-guides/integrating-autoware/awsim-integration/","title":"How to integrate your vehicle in AWSIM environment","text":""},{"location":"how-to-guides/integrating-autoware/awsim-integration/#how-to-integrate-your-vehicle-in-awsim-environment","title":"How to integrate your vehicle in AWSIM environment","text":""},{"location":"how-to-guides/integrating-autoware/awsim-integration/#overview","title":"Overview","text":"

    AWSIM is an open-source simulator designed by TIER IV for training and evaluating autonomous driving systems. It provides a realistic virtual environment for simulating various real-world scenarios, enabling users to test and refine their autonomous systems before deployment on actual vehicles.

    "},{"location":"how-to-guides/integrating-autoware/awsim-integration/#setup-unity-project","title":"Setup Unity Project","text":"

    To add your environment and vehicle to the AWSIM simulation, you need to set up the Unity environment on your computer. Please follow the steps on the Setup Unity Project documentation page to do so.

    AWSIM Unity Setup"},{"location":"how-to-guides/integrating-autoware/awsim-integration/#new-vehicle-integration","title":"New Vehicle Integration","text":"

    To incorporate your vehicle into the AWSIM environment, you'll need a 3D model file (.dae, .fbx) of your vehicle. Please refer to the steps on the Add New Vehicle documentation page to add your own vehicle to the AWSIM project environment. During these steps, you'll configure your sensor URDF design on your vehicle. Our tutorial vehicle is shown in the AWSIM environment in the following image.

    Tutorial vehicle in AWSIM Unity Environment"},{"location":"how-to-guides/integrating-autoware/awsim-integration/#environment-integration","title":"Environment Integration","text":"

    Creating custom 3D environments for AWSIM is feasible, but it's recommended to adhere to the .fbx file format. Materials and textures should be stored in separate directories for seamless integration with Unity. This format facilitates material importation and replacement during import. Please refer to the steps on the Add Environment documentation page to add your custom environment to the AWSIM project environment.

    Tutorial vehicle AWSIM Unity Environment"},{"location":"how-to-guides/integrating-autoware/awsim-integration/#others","title":"Others","text":"

    Additionally, you can incorporate traffic and NPCs, generate point cloud maps using lanelet2 maps, and perform other tasks by following the relevant documentation steps provided in the AWSIM documentation.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/","title":"Creating maps","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/#creating-maps","title":"Creating maps","text":"

    Autoware requires a pointcloud map and a vector map for the vehicle's operating environment. (Check the map design documentation page for the detailed specification).

    This page explains how users can create maps that can be used for Autoware.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/#creating-a-point-cloud-map","title":"Creating a point cloud map","text":"

    Traditionally, a Mobile Mapping System (MMS) is used to create highly accurate large-scale point cloud maps. However, since an MMS requires high-end sensors for precise positioning, its operational cost can be very high and may not be suitable for a relatively small driving environment. Alternatively, a Simultaneous Localization And Mapping (SLAM) algorithm can be used to create a point cloud map from recorded LiDAR scans. Some useful open-source SLAM implementations are listed on this page.

    If you prefer proprietary software that is easy to use, you can try a fully automatic mapping tool from MAP IV, Inc., MapIV Engine. They currently provide a trial license for Autoware users free of charge.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/#creating-a-vector-map","title":"Creating a vector map","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/#quick-way-to-create-simple-maps","title":"Quick way to create simple maps","text":"

    bag2lanelet is a tool to create virtual lanes from self-location data. Whether in a real environment or a simulation, you can use this tool to generate simple lanelets from a rosbag with self-location information, allowing you to quickly test Autoware's performance. A key use case for this tool is to emulate autonomous driving on routes that were initially driven manually.

    However, it is important to note that this tool has very limited functionalities and can only generate single-lane maps. To enable more comprehensive mapping and navigation capabilities, we recommend using the 'Vector Map Builder' described in the next section.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/#tools-for-a-vector-map-creation","title":"Tools for a vector map creation","text":"

    The recommended way to create an Autoware-compatible vector map is to use Vector Map Builder, a free web-based tool provided by TIER IV, Inc. Vector Map Builder allows you to create lanes and add additional regulatory elements such as stop signs or traffic lights using a point cloud map as a reference.

    For open-source software options, MapToolbox is a plugin for Unity specifically designed to create Lanelet2 maps for Autoware. Although JOSM is another open-source tool that can be used to create Lanelet2 maps, be aware that a number of modifications must be done manually to make the map compatible with Autoware. This process can be tedious and time-consuming, so the use of JOSM is not recommended.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/#autoware-compatible-map-providers","title":"Autoware-compatible map providers","text":"

    If it is not possible to create HD maps yourself, you can use a mapping service from the following Autoware-compatible map providers instead:

    • MAP IV, Inc.
    • AISAN TECHNOLOGY CO., LTD.
    • TomTom

    The table below shows each company's mapping technology and the types of HD maps they support.

    Company Mapping technology Available maps MAP IV, Inc. SLAM Point cloud and vector maps AISAN TECHNOLOGY CO., LTD. MMS Point cloud and vector maps TomTom MMS Vector map*

    Note

    Maps provided by TomTom use their proprietary AutoStream format, not Lanelet2. The open-source AutoStreamForAutoware tool can be used to convert an AutoStream map to a Lanelet2 map. However, the converter is still in its early stages and has some known limitations.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/converting-utm-to-mgrs-map/","title":"Converting UTM maps to MGRS map format","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/converting-utm-to-mgrs-map/#converting-utm-maps-to-mgrs-map-format","title":"Converting UTM maps to MGRS map format","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/converting-utm-to-mgrs-map/#overview","title":"Overview","text":"

    If you want to use MGRS (Military Grid Reference System) format in Autoware, you need to convert UTM (Universal Transverse Mercator) map to MGRS format. In order to do that, we will use UTM to MGRS pointcloud converter ROS 2 package provided by Leo Drive.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/converting-utm-to-mgrs-map/#installation","title":"Installation","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/converting-utm-to-mgrs-map/#dependencies","title":"Dependencies","text":"
    • ROS 2
    • PCL-conversions
    • GeographicLib

    To install dependencies:

    sudo apt install ros-humble-pcl-conversions \\\ngeographiclib-tools\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/converting-utm-to-mgrs-map/#building","title":"Building","text":"
        cd <PATH-TO-YOUR-ROS-2-WORKSPACE>/src\n    git clone https://github.com/leo-drive/pc_utm_to_mgrs_converter.git\n    cd ..\n    colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/converting-utm-to-mgrs-map/#usage","title":"Usage","text":"

    After the installation of converter tool, we need to define northing, easting and ellipsoid height of local UTM map origin in pc_utm_to_mgrs_converter.param.yaml. For example, you can use latitude, longitude and altitude values in the navsatfix message from your GNSS/INS sensor.

    Sample ROS 2 topic echo from navsatfix message
    header:\nstamp:\nsec: 1694612439\nnanosec: 400000000\nframe_id: GNSS_INS/gnss_ins_link\nstatus:\nstatus: 0\nservice: 1\nlatitude: 41.0216110801253\nlongitude: 28.887096461148346\naltitude: 74.28264078891529\nposition_covariance:\n- 0.0014575386885553598\n- 0.0\n- 0.0\n- 0.0\n- 0.004014162812381983\n- 0.0\n- 0.0\n- 0.0\n- 0.0039727711118757725\nposition_covariance_type: 2\n

    After that, you need to convert the latitude and longitude values to northing and easting values. You can use any online converter for converting latitude/longitude values to UTM (e.g., UTMconverter).
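    For example, a minimal sketch using the pyproj library (one option among many; it is not part of the converter package) converts the sample coordinates above to UTM zone 35, which should roughly match the Northing and Easting values used in the parameter file below:

    # pip install pyproj
    from pyproj import Proj

    # Latitude/longitude taken from the sample navsatfix message above; UTM zone 35 covers this longitude.
    latitude, longitude = 41.0216110801253, 28.887096461148346

    utm = Proj(proj="utm", zone=35, ellps="WGS84")
    easting, northing = utm(longitude, latitude)
    print(f"Northing: {northing:.2f}, Easting: {easting:.2f}")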

    Now, we are ready to update pc_utm_to_mgrs_converter.param.yaml, example for our navsatfix message:

    /**:\n  ros__parameters:\n      # Northing of local origin\n-     Northing: 4520550.0\n+     Northing: 4542871.33\n\n     # Easting of local origin\n-     Easting: 698891.0\n+     Easting: 658659.84\n\n     # Elipsoid Height of local origin\n-     ElipsoidHeight: 47.62\n+     ElipsoidHeight: 74.28\n

    Lastly, we will update the input and output pointcloud map paths in pc_utm_to_mgrs_converter.launch.xml:

    ...\n- <arg name=\"input_file_path\" default=\"/home/melike/projects/autoware_data/gebze_pospac_map/pointcloud_map.pcd\"/>\n+ <arg name=\"input_file_path\" default=\"<PATH-TO-YOUR-INPUT-PCD-MAP>\"/>\n- <arg name=\"output_file_path\" default=\"/home/melike/projects/autoware_data/gebze_pospac_map/pointcloud_map_mgrs_orto.pcd\"/>\n+ <arg name=\"output_file_path\" default=\"<PATH-TO-YOUR-OUTPUT-PCD-MAP>\"/>\n...\n

    After configuring the package, we will launch pc_utm_to_mgrs_converter:

    ros2 launch pc_utm_to_mgrs_converter pc_utm_to_mgrs_converter.launch.xml\n

    The conversion process will start, and you should see a Saved <YOUR-MAP-POINTS-SIZE> data points saved to <YOUR-OUTPUT-MAP-PATH> message in your terminal. The MGRS-format pointcloud map will be saved in your output map directory.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/","title":"Creating a vector map","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/#creating-a-vector-map","title":"Creating a vector map","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/#overview","title":"Overview","text":"

    In this section, we will explain how to create Lanelet2 maps with TIER IV's Vector Map Builder tool.

    There are alternative tools such as Unity-based app MapToolbox and Java-based app JOSM that you may use for creating a Lanelet2 map. We will be using TIER IV's Vector Map Builder in the tutorial since it works on a browser without installation of extra dependency applications.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/#vector-map-builder","title":"Vector Map Builder","text":"

    You need a TIER IV account to use the Vector Map Builder tool; if this is your first time using it, create a TIER IV account first. For more information about this tool, please check the official guide.

    You can follow these pages for creating a Lanelet2 map and understanding its regulatory elements.

    • Lanelet2
    • Crosswalk
    • Stop Line
    • Traffic Light
    • Speed Bump
    • Detection Area
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/crosswalk/","title":"Crosswalk attribute","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/crosswalk/#crosswalk-attribute","title":"Crosswalk attribute","text":"

    The behavior velocity planner's crosswalk module plans velocity to stop or decelerate for pedestrians approaching or walking on a crosswalk. To enable this, we will add a crosswalk attribute to our lanelet2 map.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/crosswalk/#creating-a-crosswalk-attribute","title":"Creating a crosswalk attribute","text":"

    In order to create a crosswalk on your map, please follow these steps:

    1. Click the Abstraction button on the top panel.
    2. Select Crosswalk from the panel.
    3. Click and draw the crosswalk on your pointcloud map.

    You can see these steps in the crosswalk creation demonstration video:

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/crosswalk/#testing-created-crosswalk-with-planning-simulator","title":"Testing created crosswalk with planning simulator","text":"

    After completing the map, we need to save it. To do that, please click File --> Export Lanelet2Maps and then download it.

    After the download is finished, we need to put the lanelet2 map and the pointcloud map in the same location. The directory structure should look like this:

    + <YOUR-MAP-DIRECTORY>/\n+  \u251c\u2500 pointcloud_map.pcd\n+  \u2514\u2500 lanelet2_map.osm\n

    If your .osm or .pcd map file's name is different from these names, you need to update autoware.launch.xml:

      <!-- Map -->\n-  <arg name=\"lanelet2_map_file\" default=\"lanelet2_map.osm\" description=\"lanelet2 map file name\"/>\n+  <arg name=\"lanelet2_map_file\" default=\"<YOUR-LANELET-MAP-NAME>.osm\" description=\"lanelet2 map file name\"/>\n-  <arg name=\"pointcloud_map_file\" default=\"pointcloud_map.pcd\" description=\"pointcloud map file name\"/>\n+  <arg name=\"pointcloud_map_file\" default=\"<YOUR-POINTCLOUD-MAP-NAME>.pcd\" description=\"pointcloud map file name\"/>\n

    Now we are ready to launch the planning simulator:

    ros2 launch autoware_launch planning_simulator.launch.xml map_path:=<YOUR-MAP-FOLDER-DIR> vehicle_model:=<YOUR-VEHICLE-MODEL> sensor_model:=<YOUR-SENSOR-KIT>\n

    Example for tutorial_vehicle:

    ros2 launch autoware_launch planning_simulator.launch.xml map_path:=$HOME/Files/autoware_map/tutorial_map/ vehicle_model:=tutorial_vehicle sensor_model:=tutorial_vehicle_sensor_kit vehicle_id:=tutorial_vehicle\n
    1. Click the 2D Pose Estimate button in RViz or press P, then set a pose for initialization.
    2. Click the 2D Goal Pose button in RViz or press G, then set a pose for the goal point.
    3. We need to add pedestrians to the crosswalk, so activate interactive pedestrians from the Tool Properties panel in RViz.
    4. After that, hold Shift and right-click to insert a pedestrian.
    5. You can move an inserted pedestrian by dragging it with the right mouse button.

    Crosswalk markers on rviz:

    Crosswalk test on the created map.

    You can check your crosswalk elements in the planning simulator as shown in this demonstration video:

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/detection-area/","title":"Detection area element","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/detection-area/#detection-area-element","title":"Detection area element","text":"

    The behavior velocity planner's detection area module plans velocity so that, if a pointcloud is detected inside a detection area defined on the map, the vehicle stops at a predetermined point. To enable this, we will add a detection area element to our lanelet2 map.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/detection-area/#creating-a-detection-area-element","title":"Creating a detection area element","text":"

    In order to create a detection area on your map, please follow these steps:

    1. Click Lanelet2Maps button on top panel.
    2. Select Detection Area from the panel.
    3. Please select the lanelet to which the stop line will be added.
    4. Click and insert Detection Area on your pointcloud map.
    5. You can change the dimensions of the detection area by clicking the points on its corners. For more information, you can check the demonstration video.

    You can see these steps in the detection area creation demonstration video:

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/detection-area/#testing-created-detection-area-with-planning-simulator","title":"Testing created detection area with planning simulator","text":"

    After completing the map, we need to save it. To do that, please click File --> Export Lanelet2Maps and then download it.

    After the download is finished, we need to put the lanelet2 map and the pointcloud map in the same location. The directory structure should look like this:

    + <YOUR-MAP-DIRECTORY>/\n+  \u251c\u2500 pointcloud_map.pcd\n+  \u2514\u2500 lanelet2_map.osm\n

    If your .osm or .pcd map file's name is different from these names, you need to update autoware.launch.xml:

      <!-- Map -->\n-  <arg name=\"lanelet2_map_file\" default=\"lanelet2_map.osm\" description=\"lanelet2 map file name\"/>\n+  <arg name=\"lanelet2_map_file\" default=\"<YOUR-LANELET-MAP-NAME>.osm\" description=\"lanelet2 map file name\"/>\n-  <arg name=\"pointcloud_map_file\" default=\"pointcloud_map.pcd\" description=\"pointcloud map file name\"/>\n+  <arg name=\"pointcloud_map_file\" default=\"<YOUR-POINTCLOUD-MAP-NAME>.pcd\" description=\"pointcloud map file name\"/>\n

    Now we are ready to launch the planning simulator:

    ros2 launch autoware_launch planning_simulator.launch.xml map_path:=<YOUR-MAP-FOLDER-DIR> vehicle_model:=<YOUR-VEHICLE-MODEL> sensor_model:=<YOUR-SENSOR-KIT>\n

    Example for tutorial_vehicle:

    ros2 launch autoware_launch planning_simulator.launch.xml map_path:=$HOME/Files/autoware_map/tutorial_map/ vehicle_model:=tutorial_vehicle sensor_model:=tutorial_vehicle_sensor_kit vehicle_id:=tutorial_vehicle\n
    1. Click 2D Pose Estimate button on rviz or press P and give a pose for initialization.
    2. Click 2D Goal Pose button on rviz or press G and give a pose for the goal point.
    3. We need to add pedestrians to the detection area, so activate interactive pedestrians from the Tool Properties panel on rviz.
    4. After that, hold Shift and right-click to insert pedestrians.
    5. You can move an inserted pedestrian by right-click dragging, so place a pedestrian inside the detection area for testing.

    Detection area markers on rviz:

    Detection area test on the created map.

    You can check your detection area elements in the planning simulator, as shown in this demonstration video:

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/lanelet2/","title":"Creating a Lanelet","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/lanelet2/#creating-a-lanelet","title":"Creating a Lanelet","text":"

    On this page, we will explain how to create a simple lanelet on your point cloud map. If you don't have a point cloud map yet, please check and follow the steps on the LIO-SAM mapping page for how to create a point cloud map for Autoware.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/lanelet2/#creating-a-lanelet2","title":"Creating a Lanelet2","text":"

    Firstly, we need to import our pointcloud map into the Vector Map Builder tool:

    1. Please click File.
    2. Then, click Import PCD.
    3. Click Browse and select your .pcd file.

    The point cloud will be displayed in the Vector Map Builder tool after the upload is complete:

    Uploaded pointcloud map file on Vector Map Builder

    Now, we are ready to create a lanelet2 map on our pointcloud map:

    1. Please click Create.
    2. Then, click Create Lanelet2Maps.
    3. Please fill in your map name.
    4. Please fill in your MGRS zone. (For tutorial_vehicle, MGRS grid zone: 35T - MGRS 100,000-meter square: PF)
    5. Click Create.
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/lanelet2/#creating-a-simple-lanelet","title":"Creating a simple lanelet","text":"

    In order to create a simple lanelet on your map, please follow these steps:

    1. Click Lanelet2Maps on the bar.
    2. Enable Lanelet mode by selecting Lanelet.
    3. Then, you can click on the pointcloud map to create a lanelet.
    4. When your lanelet is finished, you can disable Lanelet.
    5. If you want to change your lanelet width, click the lanelet --> Change Lanelet Width, then you can enter the lanelet width.

    Video Demonstration:

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/lanelet2/#join-two-lanelets","title":"Join two lanelets","text":"

    In order to join two lanelets, please follow these steps:

    1. Please create two distinct lanelets.
    2. Select a lanelet, then press Shift and select the other lanelet.
    3. Now you can see the Join Lanelets button; just press it.
    4. These lanelets will be joined.

    Video Demonstration:

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/lanelet2/#join-multiple-lanelets","title":"Join Multiple lanelets","text":"

    In order to add (join) two or more lanelets to another lanelet, please follow these steps:

    1. Create multiple lanelets.
    2. You can join the first two lanelets like in the previous steps.
    3. Please check the end point IDs of the first lanelet.
    4. Then replace these IDs with the start point IDs of the third lanelet. (Change them by selecting the linestring of the lanelet.)
    5. You will see that the first lanelet now has two next lanes.

    Video Demonstration:

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/lanelet2/#change-speed-limit-of-lanelet","title":"Change Speed Limit Of Lanelet","text":"

    In order to change the speed limit of a lanelet, please follow these steps:

    1. Select the lanelet whose speed limit will be changed.
    2. Set the speed limit on the right panel.
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/lanelet2/#test-lanelets-with-planning-simulator","title":"Test lanelets with planning simulator","text":"

    After completing the lanelets, we need to save the map. To do that, please click File --> Export Lanelet2Maps, then download.

    After the download is finished, we need to put the lanelet2 map and the pointcloud map in the same location. The directory structure should be like this:

    <YOUR-MAP-DIRECTORY>/\n \u251c\u2500 pointcloud_map.pcd\n \u2514\u2500 lanelet2_map.osm\n
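
    For example, assuming you exported the files to your Downloads folder and want to use the tutorial_map directory from the launch commands below, you could arrange the files with commands like these (a minimal sketch; the paths below are placeholders, so adjust them to your own setup):

    # hypothetical paths; adjust to your own download location and map directory\nmkdir -p ~/Files/autoware_map/tutorial_map\ncp ~/Downloads/lanelet2_map.osm ~/Files/autoware_map/tutorial_map/\ncp ~/Downloads/pointcloud_map.pcd ~/Files/autoware_map/tutorial_map/\n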

    If your .osm or .pcd map file names differ from these, you need to update autoware.launch.xml:

      <!-- Map -->\n-  <arg name=\"lanelet2_map_file\" default=\"lanelet2_map.osm\" description=\"lanelet2 map file name\"/>\n+  <arg name=\"lanelet2_map_file\" default=\"<YOUR-LANELET-MAP-NAME>.osm\" description=\"lanelet2 map file name\"/>\n-  <arg name=\"pointcloud_map_file\" default=\"pointcloud_map.pcd\" description=\"pointcloud map file name\"/>\n+  <arg name=\"pointcloud_map_file\" default=\"<YOUR-POINTCLOUD-MAP-NAME>.pcd\" description=\"pointcloud map file name\"/>\n

    Now we are ready to launch the planning simulator:

    ros2 launch autoware_launch planning_simulator.launch.xml map_path:=<YOUR-MAP-FOLDER-DIR> vehicle_model:=<YOUR-VEHICLE-MODEL> sensor_model:=<YOUR-SENSOR-KIT>\n

    Example for tutorial_vehicle:

    ros2 launch autoware_launch planning_simulator.launch.xml map_path:=$HOME/Files/autoware_map/tutorial_map/ vehicle_model:=tutorial_vehicle sensor_model:=tutorial_vehicle_sensor_kit vehicle_id:=tutorial_vehicle\n
    1. Click 2D Pose Estimate button on rviz or press P and give a pose for initialization.
    2. Click 2D Goal Pose button on rviz or press G and give a pose for the goal point.

    Testing our created vector map with planning simulator"},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/speed-bump/","title":"Speed bump","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/speed-bump/#speed-bump","title":"Speed bump","text":"

    The behavior velocity planner's speed bump module plans velocity to slow down before a speed bump for comfortable and safe driving. In order to use this module, we will add speed bumps to our lanelet2 map.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/speed-bump/#creating-a-speed-bump-element","title":"Creating a speed bump element","text":"

    In order to create a speed bump on your pointcloud map, please follow these steps:

    1. Select Linestring from the Lanelet2Maps section.
    2. Click and draw a polygon for the speed bump.
    3. Then please disable Linestring from the Lanelet2Maps section.
    4. Click Change to Polygon from the Action panel.
    5. Please select this Polygon and enter speed_bump as the type.
    6. Then, please click the lanelet to which the speed bump will be added.
    7. Select Create General Regulatory Element.
    8. Go to this element, and please enter speed_bump as the subtype.
    9. Click Add refers and enter the ID of the speed bump polygon you created.

    You can see these steps in the speed bump creation demonstration video:

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/speed-bump/#testing-created-the-speed-bump-element-with-planning-simulator","title":"Testing created the speed bump element with planning simulator","text":"

    After completing the map, we need to save it. To do that, please click File --> Export Lanelet2Maps, then download.

    After the download is finished, we need to put the lanelet2 map and the pointcloud map in the same location. The directory structure should be like this:

    + <YOUR-MAP-DIRECTORY>/\n+  \u251c\u2500 pointcloud_map.pcd\n+  \u2514\u2500 lanelet2_map.osm\n

    If your .osm or .pcd map file names differ from these, you need to update autoware.launch.xml:

      <!-- Map -->\n-  <arg name=\"lanelet2_map_file\" default=\"lanelet2_map.osm\" description=\"lanelet2 map file name\"/>\n+  <arg name=\"lanelet2_map_file\" default=\"<YOUR-LANELET-MAP-NAME>.osm\" description=\"lanelet2 map file name\"/>\n-  <arg name=\"pointcloud_map_file\" default=\"pointcloud_map.pcd\" description=\"pointcloud map file name\"/>\n+  <arg name=\"pointcloud_map_file\" default=\"<YOUR-POINTCLOUD-MAP-NAME>.pcd\" description=\"pointcloud map file name\"/>\n

    Note

    The speed bump module is not enabled by default. To enable it, please uncomment it in your behavior_velocity_planner.param.yaml.
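
    If you are unsure where this parameter file lives in your checkout, one way to locate it is shown below (a minimal sketch; ~/autoware/src is assumed to be your Autoware source workspace, and the exact path of the file may differ between Autoware versions):

    # locate the behavior_velocity_planner parameter file in your Autoware workspace (assumed at ~/autoware)\nfind ~/autoware/src -name behavior_velocity_planner.param.yaml\n# open the reported file with your editor and uncomment the speed bump module entry there\n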

    Now we are ready to launch the planning simulator:

    ros2 launch autoware_launch planning_simulator.launch.xml map_path:=<YOUR-MAP-FOLDER-DIR> vehicle_model:=<YOUR-VEHICLE-MODEL> sensor_model:=<YOUR-SENSOR-KIT>\n

    Example for tutorial_vehicle:

    ros2 launch autoware_launch planning_simulator.launch.xml map_path:=$HOME/Files/autoware_map/tutorial_map/ vehicle_model:=tutorial_vehicle sensor_model:=tutorial_vehicle_sensor_kit vehicle_id:=tutorial_vehicle\n
    1. Click 2D Pose Estimate button on rviz or press P and give a pose for initialization.
    2. Click 2D Goal Pose button on rviz or press G and give a pose for the goal point.
    3. You can see the speed bump marker on the rviz screen.

    Speed bump markers on rviz:

    Speed bump test on the created map.

    You can check your speed bump elements in the planning simulator, as shown in this demonstration video:

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/stop-line/","title":"Stop Line","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/stop-line/#stop-line","title":"Stop Line","text":"

    The behavior velocity planner's stop line module plans velocity to stop right before stop lines and to resume driving after stopping. In order to use this module, we will add a stop line attribute to our lanelet2 map.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/stop-line/#creating-a-stop-line-regulatory-element","title":"Creating a stop line regulatory element","text":"

    In order to create a stop line on your pointcloud map, please follow these steps:

    1. Please select the lanelet to which the stop line will be added.
    2. Click Abstraction button on top panel.
    3. Select Stop Line from the panel.
    4. Click on the desired area for inserting the stop line.

    You can see these steps in the stop line creation demonstration video:

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/stop-line/#testing-created-the-stop-line-element-with-planning-simulator","title":"Testing created the stop line element with planning simulator","text":"

    After completing the map, we need to save it. To do that, please click File --> Export Lanelet2Maps, then download.

    After the download is finished, we need to put the lanelet2 map and the pointcloud map in the same location. The directory structure should be like this:

    + <YOUR-MAP-DIRECTORY>/\n+  \u251c\u2500 pointcloud_map.pcd\n+  \u2514\u2500 lanelet2_map.osm\n

    If your .osm or .pcd map file names differ from these, you need to update autoware.launch.xml:

      <!-- Map -->\n-  <arg name=\"lanelet2_map_file\" default=\"lanelet2_map.osm\" description=\"lanelet2 map file name\"/>\n+  <arg name=\"lanelet2_map_file\" default=\"<YOUR-LANELET-MAP-NAME>.osm\" description=\"lanelet2 map file name\"/>\n-  <arg name=\"pointcloud_map_file\" default=\"pointcloud_map.pcd\" description=\"pointcloud map file name\"/>\n+  <arg name=\"pointcloud_map_file\" default=\"<YOUR-POINTCLOUD-MAP-NAME>.pcd\" description=\"pointcloud map file name\"/>\n

    Now we are ready to launch the planning simulator:

    ros2 launch autoware_launch planning_simulator.launch.xml map_path:=<YOUR-MAP-FOLDER-DIR> vehicle_model:=<YOUR-VEHICLE-MODEL> sensor_model:=<YOUR-SENSOR-KIT>\n

    Example for tutorial_vehicle:

    ros2 launch autoware_launch planning_simulator.launch.xml map_path:=$HOME/Files/autoware_map/tutorial_map/ vehicle_model:=tutorial_vehicle sensor_model:=tutorial_vehicle_sensor_kit vehicle_id:=tutorial_vehicle\n
    1. Click 2D Pose Estimate button on rviz or press P and give a pose for initialization.
    2. Click 2D Goal Pose button on rviz or press G and give a pose for the goal point.
    3. You can see the stop line marker on the rviz screen.

    Stop line markers on rviz:

    Stop line test on the created map.

    You can check your stop line elements in the planning simulator, as shown in this demonstration video:

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/traffic-light/","title":"Traffic light","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/traffic-light/#traffic-light","title":"Traffic light","text":"

    The behavior velocity planner's traffic light module plans velocity according to the traffic light status. In order to use this module, we will add a traffic light attribute to our lanelet2 map.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/traffic-light/#creating-a-traffic-light-regulatory-element","title":"Creating a traffic light regulatory element","text":"

    In order to create a traffic light on your pointcloud map, please follow these steps:

    1. Please select the lanelet to which the traffic light will be added.
    2. Click Abstraction button on top panel.
    3. Select Traffic Light from the panel.
    4. Click on the desired area for inserting the traffic light.

    You can see these steps in the traffic light creation demonstration video:

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/traffic-light/#testing-created-the-traffic-light-element-with-planning-simulator","title":"Testing created the traffic light element with planning simulator","text":"

    After completing the map, we need to save it. To do that, please click File --> Export Lanelet2Maps, then download.

    After the download is finished, we need to put the lanelet2 map and the pointcloud map in the same location. The directory structure should be like this:

    + <YOUR-MAP-DIRECTORY>/\n+  \u251c\u2500 pointcloud_map.pcd\n+  \u2514\u2500 lanelet2_map.osm\n

    If your .osm or .pcd map file names differ from these, you need to update autoware.launch.xml:

      <!-- Map -->\n-  <arg name=\"lanelet2_map_file\" default=\"lanelet2_map.osm\" description=\"lanelet2 map file name\"/>\n+  <arg name=\"lanelet2_map_file\" default=\"<YOUR-LANELET-MAP-NAME>.osm\" description=\"lanelet2 map file name\"/>\n-  <arg name=\"pointcloud_map_file\" default=\"pointcloud_map.pcd\" description=\"pointcloud map file name\"/>\n+  <arg name=\"pointcloud_map_file\" default=\"<YOUR-POINTCLOUD-MAP-NAME>.pcd\" description=\"pointcloud map file name\"/>\n

    Now we are ready to launch the planning simulator:

    ros2 launch autoware_launch planning_simulator.launch.xml map_path:=<YOUR-MAP-FOLDER-DIR> vehicle_model:=<YOUR-VEHICLE-MODEL> sensor_model:=<YOUR-SENSOR-KIT>\n

    Example for tutorial_vehicle:

    ros2 launch autoware_launch planning_simulator.launch.xml map_path:=$HOME/Files/autoware_map/tutorial_map/ vehicle_model:=tutorial_vehicle sensor_model:=tutorial_vehicle_sensor_kit vehicle_id:=tutorial_vehicle\n
    1. Click 2D Pose Estimate button on rviz or press P and give a pose for initialization.
    2. Click Panels -> Add new panel, select TrafficLightPublishPanel, and then press OK.
    3. In TrafficLightPublishPanel, set the ID and color of the traffic light.
    4. Then, click the SET and PUBLISH buttons.
    5. Click 2D Goal Pose button on rviz or press G and give a pose for the goal point.
    6. You can see the traffic light marker on the rviz screen if you set the traffic light color to RED.

    Traffic Light markers on rviz:

    Traffic light test on the created map.

    You can check your traffic light elements in the planning simulator, as shown in this demonstration video:

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/","title":"Available Open Source SLAM","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/#available-open-source-slam","title":"Available Open Source SLAM","text":"

    This page provides a list of available open source Simultaneous Localization And Mapping (SLAM) implementations that can be used to generate a point cloud (.pcd) map file.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/#selecting-which-implementation-to-use","title":"Selecting which implementation to use","text":"

    Lidar odometry drifts accumulatively as time goes by, and there are solutions to mitigate this problem, such as graph optimization, loop closure, and using a GPS sensor to decrease the accumulated drift error. Because of that, a SLAM algorithm should provide loop closure and graph optimization, and should be able to use a GPS sensor. Additionally, some of the algorithms use an IMU sensor to add another factor to the graph to decrease drift error. While some algorithms strictly require a 9-axis IMU sensor, others require only a 6-axis IMU sensor or do not use an IMU at all. Before choosing an algorithm to create maps for Autoware, please consider these factors depending on your sensor setup and the expected quality of the generated map.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/#tips","title":"Tips","text":"

    Commonly used open-source SLAM implementations are lidarslam-ros2 (LiDAR, IMU*) and LIO-SAM (LiDAR, IMU, GNSS). The required sensor data for each algorithm is specified in the parentheses, where an asterisk (*) indicates that such sensor data is optional. For supported LiDAR models, please check the GitHub repository of each algorithm. While these ROS 2-based SLAM implementations can be easily installed and used directly on the same machine that runs Autoware, it is important to note that they may not be as well-tested or as mature as ROS 1-based alternatives.

    The notable open-source SLAM implementations that are based on ROS 1 include hdl-graph-slam (LiDAR, IMU*, GNSS*), LeGO-LOAM (LiDAR, IMU*), LeGO-LOAM-BOR (LiDAR), and LIO-SAM (LiDAR, IMU, GNSS).

    Most of these algorithms already have a built-in loop-closure and pose graph optimization. However, if the built-in, automatic loop-closure fails or does not work correctly, you can use Interactive SLAM to adjust and optimize a pose graph manually.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/#list-of-third-party-slam-implementations","title":"List of Third Party SLAM Implementations","text":"Package Name Explanation Repository Link Loop Closure Sensors ROS Version Dependencies FAST-LIO-LC A computationally efficient and robust LiDAR-inertial odometry package with loop closure module and graph optimization https://github.com/yanliang-wang/FAST_LIO_LC &check; LidarIMUGPS [Optional] ROS 1 ROS MelodicPCL >= 1.8Eigen >= 3.3.4GTSAM >= 4.0.0 FAST_LIO_SLAM FAST_LIO_SLAM is the integration of FAST_LIO and SC-PGO which is scan context based loop detection and GTSAM based pose-graph optimization https://github.com/gisbi-kim/FAST_LIO_SLAM &check; LidarIMUGPS [Optional] ROS 1 PCL >= 1.8Eigen >= 3.3.4 FD-SLAM FD_SLAM is Feature&Distribution-based 3D LiDAR SLAM method based on Surface Representation Refinement. In this algorithm novel feature-based Lidar odometry used for fast scan-matching, and used a proposed UGICP method for keyframe matching https://github.com/SLAMWang/FD-SLAM &check; LidarIMU [Optional]GPS ROS 1 PCLg2oSuitesparse hdl_graph_slam An open source ROS package for real-time 6DOF SLAM using a 3D LIDAR. It is based on 3D Graph SLAM with NDT scan matching-based odometry estimation and loop detection. It also supports several graph constraints, such as GPS, IMU acceleration (gravity vector), IMU orientation (magnetic sensor), and floor plane (detected in a point cloud) https://github.com/koide3/hdl_graph_slam &check; LidarIMU [Optional]GPS [Optional] ROS 1 PCLg2oOpenMP IA-LIO-SAM IA_LIO_SLAM is created for data acquisition in unstructured environment and it is a framework for Intensity and Ambient Enhanced Lidar Inertial Odometry via Smoothing and Mapping that achieves highly accurate robot trajectories and mapping https://github.com/minwoo0611/IA_LIO_SAM &check; LidarIMUGPS ROS 1 GTSAM ISCLOAM ISCLOAM presents a robust loop closure detection approach by integrating both geometry and intensity information https://github.com/wh200720041/iscloam &check; Lidar ROS 1 Ubuntu 18.04ROS MelodicCeresPCLGTSAMOpenCV LeGO-LOAM-BOR LeGO-LOAM-BOR is improved version of the LeGO-LOAM by improving quality of the code, making it more readable and consistent. Also, performance is improved by converting processes to multi-threaded approach https://github.com/facontidavide/LeGO-LOAM-BOR &check; LidarIMU ROS 1 ROS MelodicPCLGTSAM LIO_SAM A framework that achieves highly accurate, real-time mobile robot trajectory estimation and map-building. It formulates lidar-inertial odometry atop a factor graph, allowing a multitude of relative and absolute measurements, including loop closures, to be incorporated from different sources as factors into the system https://github.com/TixiaoShan/LIO-SAM &check; LidarIMUGPS [Optional] ROS 1ROS 2 PCLGTSAM Optimized-SC-F-LOAM An improved version of F-LOAM and uses an adaptive threshold to further judge the loop closure detection results and reducing false loop closure detections. Also it uses feature point-based matching to calculate the constraints between a pair of loop closure frame point clouds and decreases time consumption of constructing loop frame constraints https://github.com/SlamCabbage/Optimized-SC-F-LOAM &check; Lidar ROS 1 PCLGTSAMCeres SC-A-LOAM A real-time LiDAR SLAM package that integrates A-LOAM and ScanContext. 
https://github.com/gisbi-kim/SC-A-LOAM &check; Lidar ROS 1 GTSAM >= 4.0 SC-LeGO-LOAM SC-LeGO-LOAM integrated LeGO-LOAM for lidar odometry and 2 different loop closure methods: ScanContext and Radius search based loop closure. While ScanContext is correcting large drifts, radius search based method is good for fine-stitching https://github.com/irapkaist/SC-LeGO-LOAM &check; LidarIMU ROS 1 PCLGTSAM"},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-lc/","title":"FAST_LIO_LC","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-lc/#fast_lio_lc","title":"FAST_LIO_LC","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-lc/#what-is-fast_lio_lc","title":"What is FAST_LIO_LC?","text":"
    • A computationally efficient and robust LiDAR-inertial odometry package with loop closure module and graph optimization.
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-lc/#repository-information","title":"Repository Information","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-lc/#original-repository-link","title":"Original Repository link","text":"

    https://github.com/yanliang-wang/FAST_LIO_LC

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-lc/#required-sensors","title":"Required Sensors","text":"
    • LIDAR [Velodyne, Ouster, Livox]
    • IMU [6-AXIS, 9-AXIS]
    • GPS [Optional]
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-lc/#ros-compatibility","title":"ROS Compatibility","text":"
    • ROS 1
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-lc/#dependencies","title":"Dependencies","text":"
    • Ubuntu 18.04
    • ROS Melodic
    • PCL >= 1.8, Follow PCL Installation.
    • Eigen >= 3.3.4, Follow Eigen Installation.
    • GTSAM >= 4.0.0, Follow GTSAM Installation.
      wget -O ~/Downloads/gtsam.zip https://github.com/borglab/gtsam/archive/4.0.0-alpha2.zip\n  cd ~/Downloads/ && unzip gtsam.zip -d ~/Downloads/\n  cd ~/Downloads/gtsam-4.0.0-alpha2/\n  mkdir build && cd build\n  cmake ..\n  sudo make install\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-lc/#build-run","title":"Build & Run","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-lc/#1-build","title":"1) Build","text":"
        mkdir -p ~/ws_fastlio_lc/src\n    cd ~/ws_fastlio_lc/src\n    git clone https://github.com/gisbi-kim/FAST_LIO_SLAM.git\n    git clone https://github.com/Livox-SDK/livox_ros_driver\n    cd ..\n    catkin_make\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-lc/#2-set-parameters","title":"2) Set parameters","text":"
    • After downloading the repository, change the topic and sensor settings in the config file (workspace/src/FAST_LIO_LC/FAST_LIO/config/ouster64_mulran.yaml) to match the lidar topic name in your bag file.
    • For imu-lidar compatibility, extrinsic matrices from calibration must be changed.
    • To enable auto-save, pcd_save_enable must be set to 1 in the launch file (workspace/src/FAST_LIO_LC/FAST_LIO/launch/mapping_ouster64_mulran.launch).
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-lc/#3-run","title":"3) Run","text":"
    • For Ouster OS1-64
      # open new terminal: run FAST-LIO\nroslaunch fast_lio mapping_ouster64.launch\n\n# open the other terminal tab: run SC-PGO\nroslaunch aloam_velodyne fastlio_ouster64.launch\n\n# play bag file in the other terminal\nrosbag play RECORDED_BAG.bag --clock\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-lc/#example-result","title":"Example Result","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-lc/#other-examples","title":"Other Examples","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-lc/#example-dataset","title":"Example dataset","text":"

    Check original repository link for example dataset.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-lc/#contact","title":"Contact","text":"
    • Maintainer: Yanliang Wang (wyl410922@qq.com)
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-lc/#acknowledgements","title":"Acknowledgements","text":"
    • Thanks to the FAST_LIO authors.
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-slam/","title":"FAST_LIO_SLAM","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-slam/#fast_lio_slam","title":"FAST_LIO_SLAM","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-slam/#what-is-fast_lio_slam","title":"What is FAST_LIO_SLAM?","text":"
    • FAST_LIO_SLAM is the integration of FAST_LIO and SC-PGO, which provides scan context-based loop detection and GTSAM-based pose-graph optimization.
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-slam/#repository-information","title":"Repository Information","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-slam/#original-repository-link","title":"Original Repository link","text":"

    https://github.com/gisbi-kim/FAST_LIO_SLAM

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-slam/#required-sensors","title":"Required Sensors","text":"
    • LIDAR [Livox, Velodyne, Ouster]
    • IMU [6-AXIS, 9-AXIS]
    • GPS [OPTIONAL]
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-slam/#ros-compatibility","title":"ROS Compatibility","text":"
    • ROS 1
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-slam/#dependencies","title":"Dependencies","text":"
    • ROS
    • PCL
    • GTSAM
    wget -O ~/Downloads/gtsam.zip https://github.com/borglab/gtsam/archive/4.0.0-alpha2.zip\ncd ~/Downloads/ && unzip gtsam.zip -d ~/Downloads/\ncd ~/Downloads/gtsam-4.0.0-alpha2/\nmkdir build && cd build\ncmake ..\nsudo make install\n
    • PCL >= 1.8, Follow PCL Installation.
    • Eigen >= 3.3.4, Follow Eigen Installation.
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-slam/#build-run","title":"Build & Run","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-slam/#1-build","title":"1) Build","text":"
        mkdir -p ~/catkin_fastlio_slam/src\n    cd ~/catkin_fastlio_slam/src\n    git clone https://github.com/gisbi-kim/FAST_LIO_SLAM.git\n    git clone https://github.com/Livox-SDK/livox_ros_driver\n    cd ..\n    catkin_make\n    source devel/setup.bash\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-slam/#2-set-parameters","title":"2) Set parameters","text":"
    • Set imu and lidar topic on Fast_LIO/config/ouster64.yaml
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-slam/#3-run","title":"3) Run","text":"
        # terminal 1: run FAST-LIO2\nroslaunch fast_lio mapping_ouster64.launch\n\n    # open the other terminal tab: run SC-PGO\ncd ~/catkin_fastlio_slam\n    source devel/setup.bash\n    roslaunch aloam_velodyne fastlio_ouster64.launch\n\n    # play bag file in the other terminal\nrosbag play xxx.bag --clock --pause\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-slam/#example-result","title":"Example Result","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-slam/#other-examples","title":"Other Examples","text":"
    • Tutorial video 1 (using KAIST 03 sequence of MulRan dataset)

      • Example result captures

      • download the KAIST 03 pcd map made by FAST-LIO-SLAM, 500MB
    • Example Video 2 (Riverside 02 sequence of MulRan dataset)
      • Example result captures

      • download the Riverside 02 pcd map made by FAST-LIO-SLAM, 400MB
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-slam/#acknowledgements","title":"Acknowledgements","text":"
    • Thanks to the FAST_LIO authors.
    • You may have an interest in this version of FAST-LIO + Loop closure, implemented by yanliang-wang
    • Maintainer: Giseop Kim (paulgkim@kaist.ac.kr)
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fd-slam/","title":"FD-SLAM","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fd-slam/#fd-slam","title":"FD-SLAM","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fd-slam/#what-is-fd-slam","title":"What is FD-SLAM?","text":"
    • FD_SLAM is a Feature & Distribution-based 3D LiDAR SLAM method based on Surface Representation Refinement. In this algorithm, a novel feature-based Lidar odometry is used for fast scan-matching, and a proposed UGICP method is used for keyframe matching.
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fd-slam/#repository-information","title":"Repository Information","text":"

    This is an open source ROS package for real-time 6DOF SLAM using a 3D LIDAR.

    It is based on hdl_graph_slam, and the steps to run the system are the same as for hdl-graph-slam.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fd-slam/#original-repository-link","title":"Original Repository link","text":"

    https://github.com/SLAMWang/FD-SLAM

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fd-slam/#required-sensors","title":"Required Sensors","text":"
    • LIDAR[VLP-16, HDL-32, HDL-64, OS1-64]
    • GPS
    • IMU [Optional]
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fd-slam/#ros-compatibility","title":"ROS Compatibility","text":"
    • ROS 1
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fd-slam/#dependencies","title":"Dependencies","text":"
    • ROS
    • PCL
    • g2o
    • Suitesparse

    The following ROS packages are required:

    • geodesy
    • nmea_msgs
    • pcl_ros
    • ndt_omp
    • U_gicp - this is modified from fast_gicp by the FD-SLAM authors, who use UGICP for keyframe matching.
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fd-slam/#build-run","title":"Build & Run","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fd-slam/#1-build","title":"1) Build","text":"
    cd ~/catkin_ws/src\ngit clone https://github.com/SLAMWang/FD-SLAM.git\ncd ..\ncatkin_make\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fd-slam/#2-services","title":"2) Services","text":"
    /hdl_graph_slam/dump  (hdl_graph_slam/DumpGraph)\n- save all the internal data (point clouds, floor coeffs, odoms, and pose graph) to a directory.\n\n/hdl_graph_slam/save_map (hdl_graph_slam/SaveMap)\n- save the generated map as a PCD file.\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fd-slam/#3-set-parameters","title":"3) Set parameters","text":"
    • All the configurable parameters are listed in launch/****.launch as ros params.
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fd-slam/#4-run","title":"4) Run","text":"
    source devel/setup.bash\nroslaunch hdl_graph_slam hdl_graph_slam_400_ours.launch\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/hdl-graph-slam/","title":"hdl_graph_slam","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/hdl-graph-slam/#hdl_graph_slam","title":"hdl_graph_slam","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/hdl-graph-slam/#what-is-hdl_graph_slam","title":"What is hdl_graph_slam?","text":"
    • An open source ROS package for real-time 6DOF SLAM using a 3D LIDAR. It is based on 3D Graph SLAM with NDT scan matching-based odometry estimation and loop detection. It also supports several graph constraints, such as GPS, IMU acceleration (gravity vector), IMU orientation (magnetic sensor), and floor plane (detected in a point cloud).
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/hdl-graph-slam/#repository-information","title":"Repository Information","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/hdl-graph-slam/#original-repository-link","title":"Original Repository link","text":"

    https://github.com/koide3/hdl_graph_slam

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/hdl-graph-slam/#required-sensors","title":"Required Sensors","text":"
    • LIDAR [Velodyne, Ouster, RoboSense]
    • IMU [6-AXIS, 9-AXIS] [OPTIONAL]
    • GPS [OPTIONAL]
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/hdl-graph-slam/#ros-compatibility","title":"ROS Compatibility","text":"
    • ROS 1
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/hdl-graph-slam/#dependencies","title":"Dependencies","text":"
    • ROS
    • PCL
    • g2o
    • OpenMP

    The following ROS packages are required:

    • geodesy
    • nmea_msgs
    • pcl_ros
    • ndt_omp
    • fast_gicp
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/hdl-graph-slam/#build-run","title":"Build & Run","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/hdl-graph-slam/#1-build","title":"1) Build","text":"
    # for melodic\nsudo apt-get install ros-melodic-geodesy ros-melodic-pcl-ros ros-melodic-nmea-msgs ros-melodic-libg2o\ncd catkin_ws/src\ngit clone https://github.com/koide3/ndt_omp.git -b melodic\ngit clone https://github.com/SMRT-AIST/fast_gicp.git --recursive\ngit clone https://github.com/koide3/hdl_graph_slam\n\ncd .. && catkin_make -DCMAKE_BUILD_TYPE=Release\n\n# for noetic\nsudo apt-get install ros-noetic-geodesy ros-noetic-pcl-ros ros-noetic-nmea-msgs ros-noetic-libg2o\n\ncd catkin_ws/src\ngit clone https://github.com/koide3/ndt_omp.git\ngit clone https://github.com/SMRT-AIST/fast_gicp.git --recursive\ngit clone https://github.com/koide3/hdl_graph_slam\n\ncd .. && catkin_make -DCMAKE_BUILD_TYPE=Release\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/hdl-graph-slam/#2-set-parameter","title":"2) Set parameter","text":"
    • Set lidar topic on launch/hdl_graph_slam_400.launch
    • Set registration settings on launch/hdl_graph_slam_400.launch
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/hdl-graph-slam/#3-run","title":"3) Run","text":"
    rosparam set use_sim_time true\nroslaunch hdl_graph_slam hdl_graph_slam_400.launch\n
    roscd hdl_graph_slam/rviz\nrviz -d hdl_graph_slam.rviz\n
    rosbag play --clock hdl_400.bag\n

    Save the generated map by:

    rosservice call /hdl_graph_slam/save_map \"resolution: 0.05\ndestination: '/full_path_directory/map.pcd'\"\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/hdl-graph-slam/#example-result","title":"Example Result","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/hdl-graph-slam/#example2-outdoor","title":"Example2 (Outdoor)","text":"

    Bag file (recorded in an outdoor environment):

    • hdl_400.bag.tar.gz (raw data, about 900MB)
    rosparam set use_sim_time true\nroslaunch hdl_graph_slam hdl_graph_slam_400.launch\n
    roscd hdl_graph_slam/rviz\nrviz -d hdl_graph_slam.rviz\n
    rosbag play --clock dataset.bag\n

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/hdl-graph-slam/#papers","title":"Papers","text":"

    Kenji Koide, Jun Miura, and Emanuele Menegatti, A Portable 3D LIDAR-based System for Long-term and Wide-area People Behavior Measurement, Advanced Robotic Systems, 2019 [link].

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/hdl-graph-slam/#contact","title":"Contact","text":"

    Kenji Koide, k.koide@aist.go.jp, https://staff.aist.go.jp/k.koide

    [Active Intelligent Systems Laboratory, Toyohashi University of Technology, Japan] [Mobile Robotics Research Team, National Institute of Advanced Industrial Science and Technology (AIST), Japan]

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/ia-lio-slam/","title":"IA-LIO-SAM","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/ia-lio-slam/#ia-lio-sam","title":"IA-LIO-SAM","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/ia-lio-slam/#what-is-ia-lio-sam","title":"What is IA-LIO-SAM?","text":"
    • IA_LIO_SLAM is created for data acquisition in unstructured environments and is a framework for Intensity and Ambient Enhanced Lidar Inertial Odometry via Smoothing and Mapping that achieves highly accurate robot trajectories and mapping.
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/ia-lio-slam/#repository-information","title":"Repository Information","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/ia-lio-slam/#original-repository-link","title":"Original Repository link","text":"

    https://github.com/minwoo0611/IA_LIO_SAM

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/ia-lio-slam/#required-sensors","title":"Required Sensors","text":"
    • LIDAR [Velodyne, Ouster]
    • IMU [9-AXIS]
    • GNSS
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/ia-lio-slam/#ros-compatibility","title":"ROS Compatibility","text":"
    • ROS 1
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/ia-lio-slam/#dependencies","title":"Dependencies","text":"
    • ROS (tested with Kinetic and Melodic)

      • for ROS melodic:

        sudo apt-get install -y ros-melodic-navigation\nsudo apt-get install -y ros-melodic-robot-localization\nsudo apt-get install -y ros-melodic-robot-state-publisher\n
      • for ROS kinetic:

        sudo apt-get install -y ros-kinetic-navigation\nsudo apt-get install -y ros-kinetic-robot-localization\nsudo apt-get install -y ros-kinetic-robot-state-publisher\n
    • GTSAM (Georgia Tech Smoothing and Mapping library)

      wget -O ~/Downloads/gtsam.zip https://github.com/borglab/gtsam/archive/4.0.2.zip\ncd ~/Downloads/ && unzip gtsam.zip -d ~/Downloads/\ncd ~/Downloads/gtsam-4.0.2/\nmkdir build && cd build\ncmake -DGTSAM_BUILD_WITH_MARCH_NATIVE=OFF ..\nsudo make install -j8\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/ia-lio-slam/#build-run","title":"Build & Run","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/ia-lio-slam/#1-build","title":"1) Build","text":"
        mkdir -p ~/catkin_ia_lio/src\n    cd ~/catkin_ia_lio/src\n    git clone https://github.com/minwoo0611/IA_LIO_SAM\n    cd ..\n    catkin_make\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/ia-lio-slam/#2-set-parameters","title":"2) Set parameters","text":"
    • After downloading the repository, change topic and sensor settings on the config file (workspace/src/IA_LIO_SAM/config/params.yaml)
    • For imu-lidar compatibility, extrinsic matrices from calibration must be changed.
    • To enable autosave, savePCD must be set to true in the params.yaml file (workspace/src/IA_LIO_SAM/config/params.yaml).
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/ia-lio-slam/#3-run","title":"3) Run","text":"
      # open new terminal: run IA_LIO\n  source devel/setup.bash\n  roslaunch lio_sam mapping_ouster64.launch\n\n  # play bag file in the other terminal\n  rosbag play RECORDED_BAG.bag --clock\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/ia-lio-slam/#sample-dataset-images","title":"Sample dataset images","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/ia-lio-slam/#example-dataset","title":"Example dataset","text":"

    Check original repo link for example dataset.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/ia-lio-slam/#contact","title":"Contact","text":"
    • Maintainer: Kevin Jung (GitHub: minwoo0611)
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/ia-lio-slam/#paper","title":"Paper","text":"

    Thank you for citing IA-LIO-SAM(./config/doc/KRS-2021-17.pdf) if you use any of this code.

    Part of the code is adapted from LIO-SAM (IROS-2020).

    @inproceedings{legoloam2018shan,\n  title={LeGO-LOAM: Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain},\n  author={Shan, Tixiao and Englot, Brendan},\n  booktitle={IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},\n  pages={4758-4765},\n  year={2018},\n  organization={IEEE}\n}\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/ia-lio-slam/#acknowledgements","title":"Acknowledgements","text":"
    • IA-LIO-SAM is based on LIO-SAM (T. Shan, B. Englot, D. Meyers, W. Wang, C. Ratti, and D. Rus. LIO-SAM: Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping).
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/iscloam/","title":"ISCLOAM","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/iscloam/#iscloam","title":"ISCLOAM","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/iscloam/#what-is-iscloam","title":"What is ISCLOAM?","text":"
    • ISCLOAM presents a robust loop closure detection approach by integrating both geometry and intensity information.
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/iscloam/#repository-information","title":"Repository Information","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/iscloam/#original-repository-link","title":"Original Repository link","text":"

    https://github.com/wh200720041/iscloam

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/iscloam/#required-sensors","title":"Required Sensors","text":"
    • LIDAR [Velodyne]
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/iscloam/#ros-compatibility","title":"ROS Compatibility","text":"
    • ROS 1
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/iscloam/#dependencies","title":"Dependencies","text":"
    • Ubuntu 64-bit 18.04
    • ROS Melodic ROS Installation
    • Ceres Solver Ceres Installation
    • PCL PCL Installation
    • Gtsam GTSAM Installation
    • OpenCV OPENCV Installation
    • Trajectory visualization

    For visualization purposes, this package uses the hector trajectory server; you may install the package by

    sudo apt-get install ros-melodic-hector-trajectory-server\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/iscloam/#build-and-run","title":"Build and Run","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/iscloam/#1-clone-repository","title":"1. Clone repository","text":"
    cd ~/catkin_ws/src\ngit clone https://github.com/wh200720041/iscloam.git\ncd ..\ncatkin_make -j1\nsource ~/catkin_ws/devel/setup.bash\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/iscloam/#2-set-parameter","title":"2. Set Parameter","text":"

    Change the bag location and sensor parameters on launch files.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/iscloam/#3-launch","title":"3. Launch","text":"
    roslaunch iscloam iscloam.launch\n

    if you would like to generate the map of environment at the same time, you can run

    roslaunch iscloam iscloam_mapping.launch\n

    Note that the global map can be very large, so it may take a while to perform global optimization; some lag is expected between the trajectory and the map since they run in separate threads. CPU usage increases when a loop closure is identified.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/iscloam/#example-result","title":"Example Result","text":"

    Watch demo video at Video Link

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/iscloam/#ground-truth-comparison","title":"Ground Truth Comparison","text":"

    Green: ISCLOAM Red: Ground Truth

                      KITTI sequence 00                                  KITTI sequence 05\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/iscloam/#citation","title":"Citation","text":"

    If you use this work for your research, you may want to cite the paper below; your citation will be appreciated:

    @inproceedings{wang2020intensity,\n  author={H. {Wang} and C. {Wang} and L. {Xie}},\n  booktitle={2020 IEEE International Conference on Robotics and Automation (ICRA)},\n  title={Intensity Scan Context: Coding Intensity and Geometry Relations for Loop Closure Detection},\n  year={2020},\n  volume={},\n  number={},\n  pages={2095-2101},\n  doi={10.1109/ICRA40945.2020.9196764}\n}\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/iscloam/#acknowledgements","title":"Acknowledgements","text":"

    Thanks for A-LOAM and LOAM(J. Zhang and S. Singh. LOAM: Lidar Odometry and Mapping in Real-time) and LOAM_NOTED.

    Author: Wang Han, Nanyang Technological University, Singapore

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lego-loam-bor/","title":"LeGO-LOAM-BOR","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lego-loam-bor/#lego-loam-bor","title":"LeGO-LOAM-BOR","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lego-loam-bor/#what-is-lego-loam-bor","title":"What is LeGO-LOAM-BOR?","text":"
    • LeGO-LOAM-BOR is an improved version of LeGO-LOAM that improves the quality of the code, making it more readable and consistent. Also, performance is improved by converting processes to a multi-threaded approach.
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lego-loam-bor/#repository-information","title":"Repository Information","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lego-loam-bor/#original-repository-link","title":"Original Repository link","text":"

    https://github.com/facontidavide/LeGO-LOAM-BOR

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lego-loam-bor/#required-sensors","title":"Required Sensors","text":"
    • LIDAR [VLP-16]
    • IMU [9-AXIS]
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lego-loam-bor/#ros-compatibility","title":"ROS Compatibility","text":"
    • ROS 1
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lego-loam-bor/#dependencies","title":"Dependencies","text":"
    • ROS Melodic ROS Installation
    • PCL PCL Installation
    • Gtsam GTSAM Installation
    wget -O ~/Downloads/gtsam.zip https://github.com/borglab/gtsam/archive/4.0.0-alpha2.zip\ncd ~/Downloads/ && unzip gtsam.zip -d ~/Downloads/\ncd ~/Downloads/gtsam-4.0.0-alpha2/\nmkdir build && cd build\ncmake ..\nsudo make install\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lego-loam-bor/#build-run","title":"Build & Run","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lego-loam-bor/#1-build","title":"1) Build","text":"
    cd ~/catkin_ws/src\ngit clone https://github.com/facontidavide/LeGO-LOAM-BOR.git\ncd ..\ncatkin_make\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lego-loam-bor/#2-set-parameters","title":"2) Set parameters","text":"
    • Set parameters on LeGo-LOAM/loam_config.yaml
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lego-loam-bor/#3-run","title":"3) Run","text":"
    source devel/setup.bash\nroslaunch lego_loam_bor run.launch rosbag:=/path/to/your/rosbag lidar_topic:=/velodyne_points\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lego-loam-bor/#example-result","title":"Example Result","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lego-loam-bor/#cite-lego-loam","title":"Cite LeGO-LOAM","text":"

    Thank you for citing our LeGO-LOAM paper if you use any of this code:

    @inproceedings{legoloam2018,\n  title={LeGO-LOAM: Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain},\n  author={Tixiao Shan and Brendan Englot},\n  booktitle={IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},\n  pages={4758-4765},\n  year={2018},\n  organization={IEEE}\n}\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lio-sam/","title":"LIO-SAM","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lio-sam/#lio-sam","title":"LIO-SAM","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lio-sam/#what-is-lio-sam","title":"What is LIO-SAM?","text":"
    • A framework that achieves highly accurate, real-time mobile robot trajectory estimation and map-building. It formulates lidar-inertial odometry atop a factor graph, allowing a multitude of relative and absolute measurements, including loop closures, to be incorporated from different sources as factors into the system.
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lio-sam/#repository-information","title":"Repository Information","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lio-sam/#original-repository-link","title":"Original Repository link","text":"

    https://github.com/TixiaoShan/LIO-SAM

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lio-sam/#required-sensors","title":"Required Sensors","text":"
    • LIDAR [Livox, Velodyne, Ouster, Robosense*]
    • IMU [9-AXIS]
    • GPS [OPTIONAL]

    *Robosense lidars aren't supported officially, but their Helios series can be used as Velodyne lidars.

    The system architecture of the LIO-SAM method is described in the following diagram; please look at the official repository for more information.

    System Architecture of LIO-SAM

    We are using Robosense Helios 5515 and CLAP B7 sensor on tutorial_vehicle, so we will use these sensors for running LIO-SAM.

    Additionally, LIO-SAM was tested with an Applanix POS LVX and Hesai Pandar XT32 sensor setup. Some additional information about these sensors will be provided on this page.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lio-sam/#ros-compatibility","title":"ROS Compatibility","text":"

    Since Autoware currently uses ROS 2 Humble, we will continue with the ROS 2 version of LIO-SAM.

    • ROS
    • ROS 2 (Also, it is compatible with Humble distro)
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lio-sam/#dependencies","title":"Dependencies","text":"

    ROS 2 dependencies:

    • perception-pcl
    • pcl-msgs
    • vision-opencv
    • xacro

    To install these dependencies, you can use this bash command in your terminal:

    sudo apt install ros-humble-perception-pcl \\\nros-humble-pcl-msgs \\\nros-humble-vision-opencv \\\nros-humble-xacro\n

    Other dependencies:

    • gtsam (Georgia Tech Smoothing and Mapping library)

    To install the gtsam, you can use this bash command in your terminal:

      # Add GTSAM-PPA\nsudo add-apt-repository ppa:borglab/gtsam-release-4.1\n  sudo apt install libgtsam-dev libgtsam-unstable-dev\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lio-sam/#build-run","title":"Build & Run","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lio-sam/#1-installation","title":"1) Installation","text":"

    In order to build and use LIO-SAM, we will create a workspace for LIO-SAM:

        mkdir -p ~/lio-sam-ws/src\n    cd ~/lio-sam-ws/src\n    git clone -b ros2 https://github.com/TixiaoShan/LIO-SAM.git\n    cd ..\n    colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release\n
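
    After the build finishes, remember to source the workspace overlay in every terminal where you will run LIO-SAM (standard colcon usage; the path matches the workspace created above):

    source ~/lio-sam-ws/install/setup.bash\n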
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lio-sam/#2-settings","title":"2) Settings","text":"

    After building LIO-SAM, we need to record a ROS 2 bag file including the necessary topics for LIO-SAM. The necessary topics are described in the LIO-SAM config file.

    ROS 2 Bag example for LIO-SAM with Robosense Helios and CLAP B7
    Files:             map_bag_13_09_0.db3\nBag size:          38.4 GiB\nStorage id:        sqlite3\nDuration:          3295.326s\nStart:             Sep 13 2023 16:40:23.165 (1694612423.165)\nEnd:               Sep 13 2023 17:35:18.492 (1694615718.492)\nMessages:          1627025\nTopic information: Topic: /sensing/gnss/clap/ros/imu | Type: sensor_msgs/msg/Imu | Count: 329535 | Serialization Format: cdr\nTopic: /sensing/gnss/clap/ros/odometry | Type: nav_msgs/msg/Odometry | Count: 329533 | Serialization Format: cdr\nTopic: /sensing/lidar/top/pointcloud_raw | Type: sensor_msgs/msg/PointCloud2 | Count: 32953 | Serialization Format: cdr\n

    Note: We set use_odometry to true in clap_b7_driver to publish a GPS odometry topic from NavSatFix.
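
    If you still need to record such a bag, a command along these lines can be used while driving the mapping route (a sketch only; the topic names below are the tutorial_vehicle topics shown above and should be replaced with your own):

    # record the LiDAR, IMU, and GNSS odometry topics used in this tutorial setup\nros2 bag record -o map_bag \\\n  /sensing/lidar/top/pointcloud_raw \\\n  /sensing/gnss/clap/ros/imu \\\n  /sensing/gnss/clap/ros/odometry\n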

Please set the topic names and sensor settings in lio_sam/config/params.yaml. Here are some example modifications for our tutorial_vehicle.

    • Topic names:
    -   pointCloudTopic: \"/points\"\n+   pointCloudTopic: \"/sensing/lidar/top/pointcloud_raw\"\n-   imuTopic: \"/imu/data\"\n+   imuTopic: \"/sensing/gnss/clap/ros/imu\"\n   odomTopic: \"odometry/imu\"\n-   gpsTopic: \"odometry/gpsz\"\n+   gpsTopic: \"/sensing/gnss/clap/ros/odometry\"\n

Since we will use GPS information with Autoware, we need to enable the useImuHeadingInitialization parameter.

    • GPS settings:
    -   useImuHeadingInitialization: false\n+   useImuHeadingInitialization: true\n-   useGpsElevation: false\n+   useGpsElevation: true\n

We will also update the sensor settings. Since Robosense lidars aren't officially supported, we will configure our 32-channel Robosense Helios 5515 lidar as a Velodyne:

    • Sensor settings:
    -   sensor: ouster\n+   sensor: velodyne\n-   N_SCAN: 64\n+   N_SCAN: 32\n-   Horizon_SCAN: 512\n+   Horizon_SCAN: 1800\n

After that, we will update the extrinsic transformations between the Robosense lidar and the CLAP B7 GNSS/INS (IMU) system.

    • Extrinsic transformation:
    -   extrinsicTrans:  [ 0.0,  0.0,  0.0 ]\n+   extrinsicTrans:  [-0.91, 0.0, -1.71]\n-   extrinsicRot:    [-1.0,  0.0,  0.0,\n-                      0.0,  1.0,  0.0,\n-                      0.0,  0.0, -1.0 ]\n+   extrinsicRot:    [1.0,  0.0,  0.0,\n+                     0.0,  1.0,  0.0,\n+                     0.0,  0.0, 1.0 ]\n-   extrinsicRPY: [ 0.0,  1.0,  0.0,\n-                  -1.0,  0.0,  0.0,\n-                   0.0,  0.0,  1.0 ]\n+   extrinsicRPY: [ 1.0,  0.0,  0.0,\n+                   0.0,  1.0,  0.0,\n+                   0.0,  0.0,  1.0 ]\n

    Warning

The mapping direction follows the direction of travel in the real world. If the LiDAR sensor is mounted facing backwards relative to the direction you are moving, you need to change extrinsicRot as well. Otherwise, the IMU will be interpreted as moving in the wrong direction, which may cause problems.

For example, in our Applanix POS LVX and Hesai Pandar XT32 setup, the IMU faced the direction of travel while the LiDAR had a 180-degree difference around the Z-axis relative to the IMU direction. In other words, they were facing away from each other, so the tool needs an additional transformation for the IMU in this case.
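As a sanity check, a 180-degree yaw offset between the two frames is simply a rotation of π about the Z-axis, which is exactly the matrix used for extrinsicRot and extrinsicRPY in the snippet below:

$$ R_z(\pi) = \begin{bmatrix} \cos\pi & -\sin\pi & 0 \\ \sin\pi & \cos\pi & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{bmatrix} $$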

• In that situation, the calibration parameters were changed as follows:
    -   extrinsicRot:    [-1.0,  0.0,  0.0,\n-                      0.0,  1.0,  0.0,\n-                      0.0,  0.0, -1.0 ]\n+   extrinsicRot:    [-1.0,  0.0,  0.0,\n+                     0.0,  -1.0,  0.0,\n+                     0.0,   0.0,  1.0 ]\n-   extrinsicRPY: [ 0.0,  1.0,  0.0,\n-                  -1.0,  0.0,  0.0,\n-                   0.0,  0.0,  1.0 ]\n+   extrinsicRPY: [ -1.0,  0.0,  0.0,\n+                    0.0, -1.0,  0.0,\n+                    0.0,  0.0,  1.0 ]\n
    • In the end, we got this transform visualization in RViz:

    Transform Visualization of Applanix POS LVX and Hesai Pandar XT32 in RViz

    Now, we are ready to create a map for Autoware.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lio-sam/#3-usage","title":"3) Usage","text":"

Once you have set the configuration and created a bag file for LIO-SAM, you can launch LIO-SAM with:

    ros2 launch lio_sam run.launch.py\n

An rviz2 window will open; then you can play your bag file:

    ros2 bag play <YOUR-BAG-FILE>\n

When the mapping process is finished, you can save the map by calling this service:

    ros2 service call /lio_sam/save_map lio_sam/srv/SaveMap \"{resolution: 0.2, destination: <YOUR-MAP-DIRECTORY>}\"\n

Here is a video demonstrating LIO-SAM mapping in our campus environment:

The output map format is local UTM; we will convert the local UTM map to MGRS format for tutorial_vehicle. If you want to convert a UTM map to MGRS for Autoware, please follow the convert-utm-to-mgrs-map page.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lio-sam/#example-result","title":"Example Result","text":"Sample Map Output for our Campus Environment"},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lio-sam/#paper","title":"Paper","text":"

    Thank you for citing LIO-SAM (IROS-2020) if you use any of this code.

    @inproceedings{liosam2020shan,\n  title={LIO-SAM: Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping},\n  author={Shan, Tixiao and Englot, Brendan and Meyers, Drew and Wang, Wei and Ratti, Carlo and Rus Daniela},\n  booktitle={IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},\n  pages={5135-5142},\n  year={2020},\n  organization={IEEE}\n}\n

    Part of the code is adapted from LeGO-LOAM.

    @inproceedings{legoloam2018shan,\n  title={LeGO-LOAM: Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain},\n  author={Shan, Tixiao and Englot, Brendan},\n  booktitle={IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},\n  pages={4758-4765},\n  year={2018},\n  organization={IEEE}\n}\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lio-sam/#acknowledgements","title":"Acknowledgements","text":"
    • LIO-SAM is based on LOAM (J. Zhang and S. Singh. LOAM: Lidar Odometry and Mapping in Real-time).
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/optimized-sc-f-loam/","title":"Optimized-SC-F-LOAM","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/optimized-sc-f-loam/#optimized-sc-f-loam","title":"Optimized-SC-F-LOAM","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/optimized-sc-f-loam/#what-is-optimized-sc-f-loam","title":"What is Optimized-SC-F-LOAM?","text":"
• An improved version of F-LOAM that uses an adaptive threshold to further judge loop closure detection results, reducing false loop closure detections. It also uses feature point-based matching to calculate the constraints between a pair of loop closure frame point clouds, decreasing the time spent constructing loop frame constraints.
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/optimized-sc-f-loam/#repository-information","title":"Repository Information","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/optimized-sc-f-loam/#original-repository-link","title":"Original Repository link","text":"

    https://github.com/SlamCabbage/Optimized-SC-F-LOAM

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/optimized-sc-f-loam/#required-sensors","title":"Required Sensors","text":"
    • LIDAR [VLP-16, HDL-32, HDL-64]
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/optimized-sc-f-loam/#ros-compatibility","title":"ROS Compatibility","text":"
    • ROS 1
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/optimized-sc-f-loam/#dependencies","title":"Dependencies","text":"
    • ROS
    • PCL
    • GTSAM
    • Ceres Solver
• For visualization purposes, this package uses the hector trajectory server; you may install it with:
    sudo apt-get install ros-noetic-hector-trajectory-server\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/optimized-sc-f-loam/#build-run","title":"Build & Run","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/optimized-sc-f-loam/#1-build","title":"1) Build","text":"
    cd ~/catkin_ws/src\ngit clone https://github.com/SlamCabbage/Optimized-SC-F-LOAM.git\ncd ..\ncatkin_make\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/optimized-sc-f-loam/#2-create-message-file","title":"2) Create message file","text":"

Ground truth information, optimized pose information, F-LOAM pose information, and timing information are stored in this folder:

mkdir -p ~/message/Scans\n

Change line 383 in laserLoopOptimizationNode.cpp to your own \"message\" folder path.

    (Do not forget to rebuild your package)

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/optimized-sc-f-loam/#3-set-parameters","title":"3) Set parameters","text":"
• Set the LIDAR topic and LIDAR properties in 'sc_f_loam_mapping.launch'
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/optimized-sc-f-loam/#4-run","title":"4) Run","text":"
    source devel/setup.bash\nroslaunch optimized_sc_f_loam optimized_sc_f_loam_mapping.launch\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/optimized-sc-f-loam/#example-result","title":"Example Result","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/optimized-sc-f-loam/#results-on-kitti-sequence-00-and-sequence-05","title":"Results on KITTI Sequence 00 and Sequence 05","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/optimized-sc-f-loam/#comparison-of-trajectories-on-kitti-dataset","title":"Comparison of trajectories on KITTI dataset","text":"

Test on KITTI sequences: You can download the sequence 00 and 05 datasets from the official KITTI website and convert them into bag files using the open-source kitti2bag method.

    00: 2011_10_03_drive_0027 000000 004540

    05: 2011_09_30_drive_0018 000000 002760

    See the link: https://github.com/ethz-asl/kitti_to_rosbag
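As a hedged example (the exact flags may differ between kitti2bag versions, so please check the tool's README), converting the raw drive for sequence 00 could look like this after downloading and extracting the synced raw data and calibration files:

# install the converter and convert the extracted raw drive into a ROS 1 bag\npip install kitti2bag\nkitti2bag -t 2011_10_03 -r 0027 raw_synced .\n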

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/optimized-sc-f-loam/#acknowledgements","title":"Acknowledgements","text":"

Thanks to SC-A-LOAM (Scan Context: Egocentric Spatial Descriptor for Place Recognition within 3D Point Cloud Map) and F-LOAM (F-LOAM: Fast LiDAR Odometry and Mapping).

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/optimized-sc-f-loam/#citation","title":"Citation","text":"
    @misc{https://doi.org/10.48550/arxiv.2204.04932,\n  doi = {10.48550/ARXIV.2204.04932},\n\n  url = {https://arxiv.org/abs/2204.04932},\n\n  author = {Liao, Lizhou and Fu, Chunyun and Feng, Binbin and Su, Tian},\n\n  keywords = {Robotics (cs.RO), FOS: Computer and information sciences, FOS: Computer and information sciences},\n\n  title = {Optimized SC-F-LOAM: Optimized Fast LiDAR Odometry and Mapping Using Scan Context},\n\n  publisher = {arXiv},\n\n  year = {2022},\n\n  copyright = {arXiv.org perpetual, non-exclusive license}\n}\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-a-loam/","title":"SC-A-LOAM","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-a-loam/#sc-a-loam","title":"SC-A-LOAM","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-a-loam/#what-is-sc-a-loam","title":"What is SC-A-LOAM?","text":"
    • A real-time LiDAR SLAM package that integrates A-LOAM and ScanContext.
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-a-loam/#repository-information","title":"Repository Information","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-a-loam/#original-repository-link","title":"Original Repository link","text":"

    https://github.com/gisbi-kim/SC-A-LOAM

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-a-loam/#required-sensors","title":"Required Sensors","text":"
    • LIDAR [VLP-16, HDL-32, HDL-64, Ouster OS1-64]
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-a-loam/#prerequisites-dependencies","title":"Prerequisites (dependencies)","text":"
    • ROS
    • GTSAM version 4.x.
    • If GTSAM is not installed, follow the steps below.

        wget -O ~/Downloads/gtsam.zip https://github.com/borglab/gtsam/archive/4.0.2.zip\n  cd ~/Downloads/ && unzip gtsam.zip -d ~/Downloads/\n  cd ~/Downloads/gtsam-4.0.2/\n  mkdir build && cd build\n  cmake -DGTSAM_BUILD_WITH_MARCH_NATIVE=OFF ..\n  sudo make install -j8\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-a-loam/#ros-compatibility","title":"ROS Compatibility","text":"
    • ROS 1
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-a-loam/#build-run","title":"Build & Run","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-a-loam/#1-build","title":"1) Build","text":"
• First, install the above-mentioned dependencies, then follow the lines below.

       mkdir -p ~/catkin_scaloam_ws/src\n cd ~/catkin_scaloam_ws/src\n git clone https://github.com/gisbi-kim/SC-A-LOAM.git\n cd ../\n catkin_make\n source ~/catkin_scaloam_ws/devel/setup.bash\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-a-loam/#2-set-parameters","title":"2) Set parameters","text":"
• After downloading the repository, change the topic and sensor settings in the launch files.
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-a-loam/#scan-context-parameters","title":"Scan Context parameters","text":"
• If you encounter ghosting errors or a loop is not closed, change the Scan Context parameters.
    • Adjust the scan context settings with the parameters in the marked area.
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-a-loam/#3-run","title":"3) Run","text":"
    roslaunch aloam_velodyne aloam_mulran.launch\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-a-loam/#4-saving-as-pcd-file","title":"4) Saving as PCD file","text":"
      rosrun pcl_ros pointcloud_to_pcd input:=/aft_pgo_map\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-a-loam/#example-results","title":"Example Results","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-a-loam/#riverside-01-mulran-dataset","title":"Riverside 01, MulRan dataset","text":"
• The MulRan dataset provides lidar scans (Ouster OS1-64, horizontally mounted, 10 Hz) and consumer-level GPS (u-blox EVK-7P, 4 Hz) data.
• For how to use (publish) the data, see: https://github.com/irapkaist/file_player_mulran
• Example videos on the Riverside 01 sequence:

      1. with consumer level GPS-based altitude stabilization: https://youtu.be/FwAVX5TVm04\n2. without the z stabilization: https://youtu.be/okML_zNadhY\n
    • example result:

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-a-loam/#kitti-05","title":"KITTI 05","text":"
    • For KITTI (HDL-64 sensor), run using the command

      roslaunch aloam_velodyne aloam_velodyne_HDL_64.launch # for KITTI dataset setting\n
    • To publish KITTI scans, you can use mini-kitti publisher, a simple python script: https://github.com/gisbi-kim/mini-kitti-publisher
    • example video (no GPS used here): https://youtu.be/hk3Xx8SKkv4
    • example result:

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-a-loam/#contact","title":"Contact","text":"
    • Maintainer: paulgkim@kaist.ac.kr
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-lego-loam/","title":"SC-LeGO-LOAM","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-lego-loam/#sc-lego-loam","title":"SC-LeGO-LOAM","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-lego-loam/#what-is-sc-lego-loam","title":"What is SC-LeGO-LOAM?","text":"
• SC-LeGO-LOAM integrates LeGO-LOAM for lidar odometry with two different loop closure methods: ScanContext and radius-search-based loop closure. While ScanContext corrects large drifts, the radius-search-based method is good for fine stitching.
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-lego-loam/#repository-information","title":"Repository Information","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-lego-loam/#original-repository-link","title":"Original Repository link","text":"

    https://github.com/irapkaist/SC-LeGO-LOAM

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-lego-loam/#required-sensors","title":"Required Sensors","text":"
    • LIDAR [VLP-16, HDL-32E, VLS-128, Ouster OS1-16, Ouster OS1-64]
    • IMU [9-AXIS]
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-lego-loam/#ros-compatibility","title":"ROS Compatibility","text":"
    • ROS 1
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-lego-loam/#dependencies","title":"Dependencies","text":"
    • ROS
    • PCL
    • GTSAM
    wget -O ~/Downloads/gtsam.zip https://github.com/borglab/gtsam/archive/4.0.0-alpha2.zip\ncd ~/Downloads/ && unzip gtsam.zip -d ~/Downloads/\ncd ~/Downloads/gtsam-4.0.0-alpha2/\nmkdir build && cd build\ncmake ..\nsudo make install\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-lego-loam/#build-run","title":"Build & Run","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-lego-loam/#1-build","title":"1) Build","text":"
    cd ~/catkin_ws/src\ngit clone https://github.com/irapkaist/SC-LeGO-LOAM.git\ncd ..\ncatkin_make\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-lego-loam/#2-set-parameters","title":"2) Set parameters","text":"
    • Set imu and lidar topic on include/utility.h
    • Set lidar properties on include/utility.h
    • Set scan context settings on include/Scancontext.h

    (Do not forget to rebuild after setting parameters.)

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-lego-loam/#3-run","title":"3) Run","text":"
    source devel/setup.bash\nroslaunch lego_loam run.launch\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-lego-loam/#example-result","title":"Example Result","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-lego-loam/#other-examples","title":"Other Examples","text":"
    • Video 1: DCC (MulRan dataset)
    • Video 2: Riverside (MulRan dataset)
    • Video 3: KAIST (MulRan dataset)

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-lego-loam/#mulran-dataset","title":"MulRan dataset","text":"
• If you want to reproduce the results shown in the above videos, you can download the MulRan dataset and use the ROS topic publishing tool.
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-lego-loam/#cite-sc-lego-loam","title":"Cite SC-LeGO-LOAM","text":"
    @INPROCEEDINGS { gkim-2018-iros,\n  author = {Kim, Giseop and Kim, Ayoung},\n  title = { Scan Context: Egocentric Spatial Descriptor for Place Recognition within {3D} Point Cloud Map },\n  booktitle = { Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems },\n  year = { 2018 },\n  month = { Oct. },\n  address = { Madrid }\n}\n

    and

    @inproceedings{legoloam2018,\n  title={LeGO-LOAM: Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain},\n  author={Shan, Tixiao and Englot, Brendan},\n  booktitle={IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},\n  pages={4758-4765},\n  year={2018},\n  organization={IEEE}\n}\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-lego-loam/#contact","title":"Contact","text":"
    • Maintainer: Giseop Kim (paulgkim@kaist.ac.kr)
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/pointcloud-map-downsampling/","title":"Pointcloud map downsampling","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/pointcloud-map-downsampling/#pointcloud-map-downsampling","title":"Pointcloud map downsampling","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/pointcloud-map-downsampling/#overview","title":"Overview","text":"

When your created point cloud map is either too dense or too large (i.e., exceeding 300 MB), you may want to downsample it for improved computational and memory efficiency. You can also consider using dynamic map loading with partial loading; please check the autoware_map_loader package for more information.

In the tutorial_vehicle implementation we will use the whole map, so we will downsample it using CloudCompare.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/pointcloud-map-downsampling/#installing-cloudcompare","title":"Installing CloudCompare","text":"

You can install it via snap:

    sudo snap install cloudcompare\n

Please check the official page for installation options.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/pointcloud-map-downsampling/#downsampling-a-pointcloud-map","title":"Downsampling a pointcloud map","text":"

There are three subsampling methods in CloudCompare. We use the Space method for subsampling, but you can use the other methods if you prefer.

1. Open CloudCompare and drag your pointcloud into it, then select your pointcloud map by clicking on it in the DB Tree panel.
2. Click the subsample button on the top panel.

    CloudCompare
1. Select your subsampling method; we will use Space for tutorial_vehicle.
2. Then select the options. For example, we need to determine the minimum space between points. (Please be careful in this section; subsampling depends on your map size, computer performance, etc.) We will set this value to 0.2 for tutorial_vehicle's map.

    Pointcloud subsampling
• After the subsampling process is finished, select the downsampled pointcloud in the DB Tree panel as well.

    Select your downsampled pointcloud

Now you can save your downsampled pointcloud with Ctrl + S, or by clicking the save button in the File menu. This pointcloud can then be used by Autoware.
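If you prefer a command-line alternative to the CloudCompare GUI, a similar voxel-based downsampling can be sketched with PCL's pcl_voxel_grid tool from the pcl-tools package. This is not part of the original CloudCompare workflow; the file names are placeholders, and the 0.2 m leaf size mirrors the point spacing used above:

# assumption: input_map.pcd / downsampled_map.pcd are placeholder file names\nsudo apt install pcl-tools\npcl_voxel_grid input_map.pcd downsampled_map.pcd -leaf 0.2,0.2,0.2\n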

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/","title":"Creating vehicle and sensor models","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/#creating-vehicle-and-sensor-models","title":"Creating vehicle and sensor models","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/#overview","title":"Overview","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/#sensor-model","title":"Sensor Model","text":"
    • Purpose: The sensor model includes the calibration (transformation) and launch files of the sensors used in the autonomous vehicle. This includes various sensors like LiDARs, cameras, radars, IMUs (Inertial Measurement Units), GPS units, etc.
    • Importance: Accurate sensor modeling is essential for perception tasks. Precise calibration values help understand the environment by processing sensor data, such as detecting objects, estimating distances, and creating a 3D representation of the surroundings.
    • Usage: The sensor model is utilized in Autoware for launching sensors, configuring their pipeline, and describing calibration values.
    • The sensor model (sensor kit) consists of the following three packages:
      • common_sensor_launch
      • <YOUR_VEHICLE_NAME>_sensor_kit_description
      • <YOUR_VEHICLE_NAME>_sensor_kit_launch

    Please refer to the creating sensor model page for creating your individual sensor model.

    For reference, here is the folder structure for the sample_sensor_kit_launch package in Autoware:

    sample_sensor_kit_launch/\n\u251c\u2500 common_sensor_launch/\n\u251c\u2500 sample_sensor_kit_description/\n\u2514\u2500 sample_sensor_kit_launch/\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/#vehicle-model","title":"Vehicle Model","text":"
    • Purpose: The vehicle model includes individual vehicle specifications with dimensions, a 3D model of the vehicle (in .fbx or .dae format), etc.
    • Importance: An accurate vehicle model is crucial for motion planning and control.
• Usage: The vehicle model is employed in Autoware to provide vehicle information, including the 3D model of the vehicle.
    • The vehicle model comprises the following two packages:
      • <YOUR_VEHICLE_NAME>_vehicle_description
      • <YOUR_VEHICLE_NAME>_vehicle_launch

    Please consult the creating vehicle model page for creating your individual vehicle model.

    As a reference, here is the folder structure for the sample_vehicle_launch package in Autoware:

    sample_vehicle_launch/\n\u251c\u2500 sample_vehicle_description/\n\u2514\u2500 sample_vehicle_launch/\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/","title":"Calibrating your sensors","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/#calibrating-your-sensors","title":"Calibrating your sensors","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/#overview","title":"Overview","text":"

Autoware expects to have multiple sensors attached to the vehicle as input to the perception, localization, and planning stacks. Autoware uses fusion techniques to combine information from multiple sensors. For this to work effectively, all sensors must be calibrated properly to align their coordinate systems, and their positions must be defined using either urdf files (as in sample_sensor_kit) or tf launch files. In this documentation, we will explain TIER IV's CalibrationTools repository for the calibration process. Please look at the Starting with TIER IV's CalibrationTools page for installation and usage of this tool.

    If you want to look at other calibration packages and methods, you can check out the following packages.

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/#other-packages-you-can-check-out","title":"Other packages you can check out","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/#camera-calibration","title":"Camera calibration","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/#intrinsic-calibration","title":"Intrinsic Calibration","text":"
    • Navigation2 provides a good tutorial for camera internal calibration.
    • AutoCore provides a light-weight tool.
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/#lidar-lidar-calibration","title":"Lidar-lidar calibration","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/#lidar-lidar-calibration-tool-from-autocore","title":"Lidar-Lidar Calibration tool from Autocore","text":"

LL-Calib on GitHub, provided by AutoCore, is a lightweight toolkit for online/offline 3D LiDAR-to-LiDAR calibration. It's based on local mapping and the \"GICP\" method to derive the relation between the main and sub lidar. Information on how to use the tool, troubleshooting tips, and example rosbags can be found at the above link.

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/#lidar-camera-calibration","title":"Lidar-camera calibration","text":"

Developed by MathWorks, the Lidar Camera Calibrator app enables you to interactively estimate the rigid transformation between a lidar sensor and a camera.

    https://ww2.mathworks.cn/help/lidar/ug/get-started-lidar-camera-calibrator.html

SensorsCalibration toolbox v0.1: Another open-source method for lidar-camera calibration. This is a project for LiDAR-to-camera calibration, including automatic calibration and manual calibration.

    https://github.com/PJLab-ADG/SensorsCalibration/blob/master/lidar2camera/README.md

Developed by AutoCore, an easy-to-use lightweight toolkit for lidar-camera calibration is provided. In only three steps, a fully automatic calibration can be done.

    https://github.com/autocore-ai/calibration_tools/tree/main/lidar-cam-calib-related

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/#lidar-imu-calibration","title":"Lidar-IMU calibration","text":"

    Developed by APRIL Lab at Zhejiang University in China, the LI-Calib calibration tool is a toolkit for calibrating the 6DoF rigid transformation and the time offset between a 3D LiDAR and an IMU, based on continuous-time batch optimization. IMU-based cost and LiDAR point-to-surfel (surfel = surface element) distance are minimized jointly, which renders the calibration problem well-constrained in general scenarios.

    AutoCore has forked the original LI-Calib tool and overwritten the Lidar input for more general usage. Information on how to use the tool, troubleshooting tips and example rosbags can be found at the LI-Calib fork on GitHub.

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/calibration-tools/","title":"Starting with TIER IV's CalibrationTools","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/calibration-tools/#starting-with-tier-ivs-calibrationtools","title":"Starting with TIER IV's CalibrationTools","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/calibration-tools/#overview","title":"Overview","text":"

Autoware expects to have multiple sensors attached to the vehicle as input to the perception, localization, and planning stacks. These sensors must be calibrated correctly, and their positions must be defined in the sensor_kit_description and individual_params packages. In this tutorial, we will use TIER IV's CalibrationTools repository for the calibration.

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/calibration-tools/#setting-of-sensor_kit_base_link-position-with-respect-to-the-base_link","title":"Setting of sensor_kit_base_link position with respect to the base_link","text":"

In the previous section (creating the vehicle and sensor model), we mentioned sensors_calibration.yaml. This file stores the sensor_kit_base_link (child frame) position and orientation with respect to the base_link (parent frame). We need to update this relative position (all values were initially set to zero when the file was created) using the CAD data of our vehicle.

    Our tutorial_vehicle base_link to sensor_kit_base_link transformation.

    So, our sensors_calibration.yaml file for our tutorial_vehicle should be like this:

    base_link:\nsensor_kit_base_link:\nx: 1.600000 # meter\ny: 0.0\nz: 1.421595 # 1.151595m + 0.270m\nroll: 0.0\npitch: 0.0\nyaw: 0.0\n

You need to update this transformation value with respect to the sensor_kit_base_link frame. You can also use CAD values for the GNSS/INS and IMU positions in the sensor_kit_calibration.yaml file. (Please don't forget to update the sensor_kit_calibration.yaml file in both the sensor_kit_launch and individual_params packages.)

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/calibration-tools/#installing-tier-ivs-calibrationtools-repositories-on-autoware","title":"Installing TIER IV's CalibrationTools repositories on autoware","text":"

After completing the previous steps (creating your own Autoware, creating a vehicle and sensor model, etc.), we are ready to calibrate the sensors whose pipelines were prepared in the creating the sensor model section.

First, we will clone the CalibrationTools repositories into our own Autoware workspace.

    cd <YOUR-OWN-AUTOWARE-DIRECTORY> # for example: cd autoware.tutorial_vehicle\nwget https://raw.githubusercontent.com/tier4/CalibrationTools/tier4/universe/calibration_tools.repos\nvcs import src < calibration_tools.repos\nrosdep install -y --from-paths src --ignore-src --rosdistro $ROS_DISTRO\n

Then build all packages after all the necessary changes have been made to the sensor model and vehicle model.

    colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/calibration-tools/#usage-of-calibrationtools","title":"Usage of CalibrationTools","text":"

The CalibrationTools repository has several packages for calibrating different sensor pairs such as lidar-lidar, camera-lidar, ground-lidar, etc. In order to calibrate our sensors, we will modify extrinsic_calibration_package for our sensor kit.

For tutorial_vehicle, the completed launch files created by following these tutorial sections can be found here.

    • Manual Calibration
    • Lidar-Lidar Calibration
      • Ground Plane-Lidar Calibration
    • Intrinsic Camera Calibration
    • Lidar-Camera Calibration
    • Lidar-Imu Calibration
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/extrinsic-manual-calibration/","title":"Manual calibration for all sensors","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/extrinsic-manual-calibration/#manual-calibration-for-all-sensors","title":"Manual calibration for all sensors","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/extrinsic-manual-calibration/#overview","title":"Overview","text":"

In this section, we will use the Extrinsic Manual Calibration tool for the extrinsic calibration of our sensors. This process does not give calibration results accurate enough for final use, but it provides an initial calibration for the other tools. For example, in the lidar-lidar or camera-lidar calibration phase, we will need an initial calibration to get accurate and successful calibration results.

    We need a sample bag file for the calibration process which includes raw lidar topics and camera topics. The following shows an example of a bag file used for calibration:

    ROS 2 Bag example of our calibration process
    Files:             rosbag2_2023_09_06-13_43_54_0.db3\nBag size:          18.3 GiB\nStorage id:        sqlite3\nDuration:          169.12s\nStart:             Sep  6 2023 13:43:54.902 (1693997034.902)\nEnd:               Sep  6 2023 13:46:43.914 (1693997203.914)\nMessages:          8504\nTopic information: Topic: /sensing/lidar/top/pointcloud_raw | Type: sensor_msgs/msg/PointCloud2 | Count: 1691 | Serialization Format: cdr\n                   Topic: /sensing/lidar/front/pointcloud_raw | Type: sensor_msgs/msg/PointCloud2 | Count: 1691 | Serialization Format: cdr\n                   Topic: /sensing/camera/camera0/image_rect | Type: sensor_msgs/msg/Image | Count: 2561 | Serialization Format: cdr\n                   Topic: /sensing/camera/camera0/camera_info | Type: sensor_msgs/msg/CameraInfo | Count: 2561 | Serialization Format: cdr\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/extrinsic-manual-calibration/#extrinsic-manual-based-calibration","title":"Extrinsic Manual-Based Calibration","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/extrinsic-manual-calibration/#creating-launch-files","title":"Creating launch files","text":"

First of all, we will create launch files for the extrinsic_calibration_manager package:

cd <YOUR-OWN-AUTOWARE-DIRECTORY>/src/autoware/calibration_tools/sensor\ncd extrinsic_calibration_manager/launch\nmkdir <YOUR-OWN-SENSOR-KIT-NAME> # i.e. for our guide, it will be mkdir tutorial_vehicle_sensor_kit\ncd <YOUR-OWN-SENSOR-KIT-NAME> # i.e. for our guide, it will be cd tutorial_vehicle_sensor_kit\ntouch manual.launch.xml manual_sensor_kit.launch.xml manual_sensors.launch.xml\n

We will modify manual.launch.xml, manual_sensors.launch.xml, and manual_sensor_kit.launch.xml using TIER IV's sample sensor kit aip_x1 as a template. So, you should copy the contents of these three files from aip_x1 into your newly created files; one way to do this is shown below.
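A minimal sketch of that copy step, assuming you are still inside your newly created <YOUR-OWN-SENSOR-KIT-NAME> directory and that the aip_x1 folder sits next to it under extrinsic_calibration_manager/launch:

# copy the aip_x1 template launch files into the current sensor-kit directory\ncp ../aip_x1/manual.launch.xml ../aip_x1/manual_sensor_kit.launch.xml ../aip_x1/manual_sensors.launch.xml .\n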

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/extrinsic-manual-calibration/#modifying-launch-files-according-to-your-sensor-kit","title":"Modifying launch files according to your sensor kit","text":"

Now we can start modifying manual.launch.xml; please open this file in the text editor of your choice (VS Code, gedit, etc.).

(Optional) Let's start by adding the vehicle_id and sensor model names (the values are not important here, since these parameters will be overridden by launch arguments):

      <arg name=\"vehicle_id\" default=\"default\"/>\n\n  <let name=\"sensor_model\" value=\"aip_x1\"/>\n+ <?xml version=\"1.0\" encoding=\"UTF-8\"?>\n+ <launch>\n-   <arg name=\"vehicle_id\" default=\"default\"/>\n+   <arg name=\"vehicle_id\" default=\"<YOUR_VEHICLE_ID>\"/>\n+\n-   <arg name=\"sensor_model\" default=\"aip_x1\"/>\n+   <let name=\"sensor_model\" value=\"<YOUR_SENSOR_KIT_NAME>\"/>\n

    The final version of the file (manual.launch.xml) for tutorial_vehicle should be like this:

    Sample manual.launch.xml file for tutorial vehicle
    <?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<launch>\n<arg name=\"vehicle_id\" default=\"tutorial_vehicle\"/>\n\n<let name=\"sensor_model\" value=\"tutorial_vehicle_sensor_kit\"/>\n\n<group>\n<push-ros-namespace namespace=\"sensor_kit\"/>\n<include file=\"$(find-pkg-share extrinsic_calibration_manager)/launch/$(var sensor_model)/manual_sensor_kit.launch.xml\">\n<arg name=\"vehicle_id\" value=\"$(var vehicle_id)\"/>\n</include>\n</group>\n\n<group>\n<push-ros-namespace namespace=\"sensors\"/>\n<include file=\"$(find-pkg-share extrinsic_calibration_manager)/launch/$(var sensor_model)/manual_sensors.launch.xml\">\n<arg name=\"vehicle_id\" value=\"$(var vehicle_id)\"/>\n</include>\n</group>\n\n</launch>\n

After completing the manual.launch.xml file, we are ready to implement manual_sensor_kit.launch.xml for our own sensor model's sensor_kit_calibration.yaml:

Optionally, you can modify sensor_model and vehicle_id in this xml snippet as well:

    ...\n  <arg name=\"vehicle_id\" default=\"default\"/>\n\n  <let name=\"sensor_model\" value=\"aip_x1\"/>\n+ <?xml version=\"1.0\" encoding=\"UTF-8\"?>\n+ <launch>\n-   <arg name=\"vehicle_id\" default=\"default\"/>\n+   <arg name=\"vehicle_id\" default=\"<YOUR_VEHICLE_ID>\"/>\n+\n-   <arg name=\"sensor_model\" default=\"aip_x1\"/>\n+   <let name=\"sensor_model\" value=\"<YOUR_SENSOR_KIT_NAME>\"/>\n...\n

Then, we will add all our sensor frames to extrinsic_calibration_manager as child frames:

<!-- extrinsic_calibration_manager -->\n-  <node pkg=\"extrinsic_calibration_manager\" exec=\"extrinsic_calibration_manager\" name=\"extrinsic_calibration_manager\" output=\"screen\">\n-    <param name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n-    <param name=\"child_frames\" value=\"\n-    [velodyne_top_base_link,\n-    livox_front_left_base_link,\n-    livox_front_center_base_link,\n-    livox_front_right_base_link]\"/>\n-  </node>\n+   <node pkg=\"extrinsic_calibration_manager\" exec=\"extrinsic_calibration_manager\" name=\"extrinsic_calibration_manager\" output=\"screen\">\n+     <param name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n+     <!-- add your sensor frames here -->\n+     <param name=\"child_frames\" value=\"\n+     [<YOUR_SENSOR_BASE_LINK>,\n+     YOUR_SENSOR_BASE_LINK,\n+     YOUR_SENSOR_BASE_LINK,\n+     YOUR_SENSOR_BASE_LINK\n+     ...]\"/>\n+   </node>\n

For tutorial_vehicle there are four sensors (two lidars, one camera, one GNSS/INS), so it will look like this:

    i.e extrinsic_calibration_manager child_frames for tutorial_vehicle
    +   <!-- extrinsic_calibration_manager -->\n+   <node pkg=\"extrinsic_calibration_manager\" exec=\"extrinsic_calibration_manager\" name=\"extrinsic_calibration_manager\" output=\"screen\">\n+     <param name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n+     <!-- add your sensor frames here -->\n+     <param name=\"child_frames\" value=\"\n+     [rs_helios_top_base_link,\n+     rs_bpearl_front_base_link,\n+     camera0/camera_link,\n+     gnss_link]\"/>\n+   </node>\n

Lastly, we will launch a manual calibrator for each sensor frame; please update the namespace (ns) and child_frame arguments of the calibrator.launch.xml include:

-  <include file=\"$(find-pkg-share extrinsic_manual_calibrator)/launch/calibrator.launch.xml\">\n-    <arg name=\"ns\" value=\"$(var parent_frame)/velodyne_top_base_link\"/>\n-    <arg name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n-    <arg name=\"child_frame\" value=\"velodyne_top_base_link\"/>\n-  </include>\n+  <!-- extrinsic_manual_calibrator -->\n+  <include file=\"$(find-pkg-share extrinsic_manual_calibrator)/launch/calibrator.launch.xml\">\n+    <arg name=\"ns\" value=\"$(var parent_frame)/<YOUR_SENSOR_BASE_LINK>\"/>\n+    <arg name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n+    <arg name=\"child_frame\" value=\"<YOUR_SENSOR_BASE_LINK>\"/>\n+  </include>\n+\n+  ...\n+  ...\n+  ...\n+  ...\n+  ...\n+\n
    i.e., calibrator.launch.xml for each tutorial_vehicle's sensor kit
    +  <!-- extrinsic_manual_calibrator -->\n+  <include file=\"$(find-pkg-share extrinsic_manual_calibrator)/launch/calibrator.launch.xml\">\n+    <arg name=\"ns\" value=\"$(var parent_frame)/rs_helios_top_base_link\"/>\n+    <arg name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n+    <arg name=\"child_frame\" value=\"rs_helios_top_base_link\"/>\n+  </include>\n+\n+  <include file=\"$(find-pkg-share extrinsic_manual_calibrator)/launch/calibrator.launch.xml\">\n+    <arg name=\"ns\" value=\"$(var parent_frame)/rs_bpearl_front_base_link\"/>\n+    <arg name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n+    <arg name=\"child_frame\" value=\"rs_bpearl_front_base_link\"/>\n+  </include>\n+\n+  <include file=\"$(find-pkg-share extrinsic_manual_calibrator)/launch/calibrator.launch.xml\">\n+    <arg name=\"ns\" value=\"$(var parent_frame)/camera0/camera_link\"/>\n+    <arg name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n+    <arg name=\"child_frame\" value=\"camera0/camera_link\"/>\n+  </include>\n+\n+  <include file=\"$(find-pkg-share extrinsic_manual_calibrator)/launch/calibrator.launch.xml\">\n+    <arg name=\"ns\" value=\"$(var parent_frame)/gnss_link\"/>\n+    <arg name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n+    <arg name=\"child_frame\" value=\"gnss_link\"/>\n+  </include>\n+ </launch>\n

    The final version of the manual_sensor_kit.launch.xml for tutorial_vehicle should be like this:

    Sample manual_sensor_kit.launch.xml for tutorial_vehicle
    <?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<launch>\n<arg name=\"vehicle_id\" default=\"tutorial_vehicle\"/> <!-- You can update with your own vehicle_id -->\n\n<let name=\"sensor_model\" value=\"tutorial_vehicle_sensor_kit\"/> <!-- You can update with your own sensor model -->\n<let name=\"parent_frame\" value=\"sensor_kit_base_link\"/>\n\n<!-- extrinsic_calibration_client -->\n<arg name=\"src_yaml\" default=\"$(find-pkg-share individual_params)/config/$(var vehicle_id)/$(var sensor_model)/sensor_kit_calibration.yaml\"/>\n<arg name=\"dst_yaml\" default=\"$(env HOME)/sensor_kit_calibration.yaml\"/>\n\n<node pkg=\"extrinsic_calibration_client\" exec=\"extrinsic_calibration_client\" name=\"extrinsic_calibration_client\" output=\"screen\">\n<param name=\"src_path\" value=\"$(var src_yaml)\"/>\n<param name=\"dst_path\" value=\"$(var dst_yaml)\"/>\n</node>\n\n<!-- extrinsic_calibration_manager -->\n<node pkg=\"extrinsic_calibration_manager\" exec=\"extrinsic_calibration_manager\" name=\"extrinsic_calibration_manager\" output=\"screen\">\n<param name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n<!-- Please Update with your own sensor frames -->\n<param name=\"child_frames\" value=\"\n    [rs_helios_top_base_link,\n    rs_bpearl_front_base_link,\n    camera0/camera_link,\n    gnss_link]\"/>\n</node>\n\n<!-- extrinsic_manual_calibrator -->\n<!-- Please create a launch for all sensors that you used. -->\n<include file=\"$(find-pkg-share extrinsic_manual_calibrator)/launch/calibrator.launch.xml\">\n<arg name=\"ns\" value=\"$(var parent_frame)/rs_helios_top_base_link\"/>\n<arg name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n<arg name=\"child_frame\" value=\"rs_helios_top_base_link\"/>\n</include>\n\n<include file=\"$(find-pkg-share extrinsic_manual_calibrator)/launch/calibrator.launch.xml\">\n<arg name=\"ns\" value=\"$(var parent_frame)/rs_bpearl_front_base_link\"/>\n<arg name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n<arg name=\"child_frame\" value=\"rs_bpearl_front_base_link\"/>\n</include>\n\n<include file=\"$(find-pkg-share extrinsic_manual_calibrator)/launch/calibrator.launch.xml\">\n<arg name=\"ns\" value=\"$(var parent_frame)/camera0/camera_link\"/>\n<arg name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n<arg name=\"child_frame\" value=\"camera0/camera_link\"/>\n</include>\n\n<include file=\"$(find-pkg-share extrinsic_manual_calibrator)/launch/calibrator.launch.xml\">\n<arg name=\"ns\" value=\"$(var parent_frame)/gnss_link\"/>\n<arg name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n<arg name=\"child_frame\" value=\"gnss_link\"/>\n</include>\n</launch>\n

You can update the manual_sensors.launch.xml file according to your modified sensors_calibration.yaml file. Since we will not calibrate any sensor directly with respect to the base_link on tutorial_vehicle, we will not change this file.

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/extrinsic-manual-calibration/#calibrating-sensors-with-extrinsic-manual-calibrator","title":"Calibrating sensors with extrinsic manual calibrator","text":"

After completing the manual.launch.xml and manual_sensor_kit.launch.xml files for the extrinsic_calibration_manager package, we need to build the package:

    colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release --packages-select extrinsic_calibration_manager\n

Now we are ready to launch and use the manual calibrator:

    ros2 launch extrinsic_calibration_manager calibration.launch.xml mode:=manual sensor_model:=<OWN-SENSOR-KIT> vehicle_model:=<OWN-VEHICLE-MODEL> vehicle_id:=<VEHICLE-ID>\n

    For tutorial vehicle:

    ros2 launch extrinsic_calibration_manager calibration.launch.xml mode:=manual sensor_model:=tutorial_vehicle_sensor_kit vehicle_model:=tutorial_vehicle vehicle_id:=tutorial_vehicle\n

    Then play ROS 2 bag file:

    ros2 bag play <rosbag_path> --clock -l -r 0.2 \\\n--remap /tf:=/null/tf /tf_static:=/null/tf_static # if tf is recorded\n

A manual rqt_reconfigure window will appear; we will update the calibrations by hand according to the rviz2 visualization of the sensors.

• Press the Refresh button, then press the Expand All button. The frames on tutorial_vehicle should look like this:

• Please write the target frame name in the Filter area (e.g., front, helios, etc.) and select tunable_static_tf_broadcaster_node; then you can adjust the tf_x, tf_y, tf_z, tf_roll, tf_pitch, and tf_yaw values via the RQT panel.
• When manual adjustment is finished, you can save your calibration results via this command:
    ros2 topic pub /done std_msgs/Bool \"data: true\"\n
    • Then you can check the output file in $HOME/*.yaml.
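For example, assuming the default dst_yaml destination used in the launch file above:

# print the saved calibration result\ncat $HOME/sensor_kit_calibration.yaml\n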

    Warning

The initial calibration process is important before using the other calibration tools. We will look into lidar-lidar calibration and camera-lidar calibration next. At this point, it is hard to calibrate two sensors into exactly the same frame, so you should find approximate (it does not have to be perfect) calibration pairs between the sensors.

Here is a video demonstrating the manual calibration process on tutorial_vehicle:

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/ground-lidar-calibration/","title":"Ground-Lidar calibration","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/ground-lidar-calibration/#ground-lidar-calibration","title":"Ground-Lidar calibration","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/ground-lidar-calibration/#overview","title":"Overview","text":"

The Ground-Lidar Calibration method operates under the assumption that the area surrounding the vehicle can be represented as a flat surface, so you must find as wide and flat a surface as possible for the ROS 2 bag recording. The method then modifies the calibration transformation so that the points corresponding to the ground within the point cloud align with the XY plane of the base_link. This means that only the z, roll, and pitch values of the tf undergo calibration, while the remaining x, y, and yaw values must be calibrated using other methods, such as manual adjustment or mapping-based lidar-lidar calibration.

You need to apply this calibration method to each lidar separately, so the bag should contain all lidars to be calibrated.

    We need a sample bag file for the ground-lidar calibration process which includes raw lidar topics.

    ROS 2 Bag example of our ground-based calibration process for tutorial_vehicle
    Files:             rosbag2_2023_09_05-11_23_50_0.db3\nBag size:          3.8 GiB\nStorage id:        sqlite3\nDuration:          112.702s\nStart:             Sep  5 2023 11:23:51.105 (1693902231.105)\nEnd:               Sep  5 2023 11:25:43.808 (1693902343.808)\nMessages:          2256\nTopic information: Topic: /sensing/lidar/front/pointcloud_raw | Type: sensor_msgs/msg/PointCloud2 | Count: 1128 | Serialization Format: cdr\n                   Topic: /sensing/lidar/top/pointcloud_raw | Type: sensor_msgs/msg/PointCloud2 | Count: 1128 | Serialization Format: cdr\n
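A minimal sketch of recording such a bag for our two tutorial_vehicle lidars (adjust the topic names and output name to your own setup):

# record the raw pointclouds of every lidar to be ground-calibrated\nros2 bag record /sensing/lidar/top/pointcloud_raw /sensing/lidar/front/pointcloud_raw -o ground_calibration_bag\n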
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/ground-lidar-calibration/#ground-lidar-calibration_1","title":"Ground-lidar calibration","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/ground-lidar-calibration/#creating-launch-files","title":"Creating launch files","text":"

We will start by creating launch files for our own vehicle, following the same process as in the previous sections:

cd <YOUR-OWN-AUTOWARE-DIRECTORY>/src/autoware/calibration_tools/sensor\ncd extrinsic_calibration_manager/launch\ncd <YOUR-OWN-SENSOR-KIT-NAME> # i.e. for our guide, it will be cd tutorial_vehicle_sensor_kit which was created in the manual calibration section\ntouch ground_plane.launch.xml ground_plane_sensor_kit.launch.xml\n

We will modify ground_plane.launch.xml and ground_plane_sensor_kit.launch.xml using TIER IV's sample sensor kit aip_x1 as a template. So, you should copy the contents of these two files from aip_x1 into your newly created files, for example as shown below.
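A minimal sketch of that copy step, assuming you are inside your own sensor-kit directory and that the aip_x1 folder sits next to it under extrinsic_calibration_manager/launch:

# copy the aip_x1 ground-plane template launch files into the current sensor-kit directory\ncp ../aip_x1/ground_plane.launch.xml ../aip_x1/ground_plane_sensor_kit.launch.xml .\n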

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/ground-lidar-calibration/#modifying-launch-files-according-to-your-sensor-kit","title":"Modifying launch files according to your sensor kit","text":"

(Optional) Let's start by adding the vehicle_id and sensor model names (the values are not important here, since these parameters will be overridden by launch arguments):

      <arg name=\"vehicle_id\" default=\"default\"/>\n\n  <let name=\"sensor_model\" value=\"aip_x1\"/>\n+ <?xml version=\"1.0\" encoding=\"UTF-8\"?>\n+ <launch>\n-   <arg name=\"vehicle_id\" default=\"default\"/>\n+   <arg name=\"vehicle_id\" default=\"<YOUR_VEHICLE_ID>\"/>\n+\n-   <arg name=\"sensor_model\" default=\"aip_x1\"/>\n+   <let name=\"sensor_model\" value=\"<YOUR_SENSOR_KIT_NAME>\"/>\n

    The final version of the file (ground_plane.launch.xml) for tutorial_vehicle should be like this:

    Sample ground_plane.launch.xml file for tutorial vehicle
    <launch>\n<arg name=\"vehicle_id\" default=\"tutorial_vehicle\"/>\n<let name=\"sensor_model\" value=\"tutorial_vehicle_sensor_kit\"/>\n\n<group>\n<push-ros-namespace namespace=\"sensor_kit\"/>\n<include file=\"$(find-pkg-share extrinsic_calibration_manager)/launch/$(var sensor_model)/ground_plane_sensor_kit.launch.xml\">\n<arg name=\"vehicle_id\" value=\"$(var vehicle_id)\"/>\n</include>\n</group>\n</launch>\n

After completing the ground_plane.launch.xml file, we are ready to implement ground_plane_sensor_kit.launch.xml for our own sensor model.

Optionally (don't forget, these parameters will be overridden by launch arguments), you can modify sensor_model and vehicle_id in this xml snippet, as in ground_plane.launch.xml. (You can change the rviz_profile path after saving your rviz config, as shown in the video included at the end of the page.)

    + <?xml version=\"1.0\" encoding=\"UTF-8\"?>\n+ <launch>\n-   <arg name=\"vehicle_id\" default=\"default\"/>\n+   <arg name=\"vehicle_id\" default=\"<YOUR_VEHICLE_ID>\"/>\n-   <arg name=\"sensor_model\" default=\"aip_x1\"/>\n+   <let name=\"sensor_model\" value=\"<YOUR_SENSOR_KIT_NAME>\"/>\n   <let name=\"base_frame\" value=\"base_link\"/>\n    <let name=\"parent_frame\" value=\"sensor_kit_base_link\"/>\n

If you previously saved an rviz config file for the ground-lidar calibration process:

    - <let name=\"rviz_profile\" value=\"$(find-pkg-share extrinsic_ground_plane_calibrator)/rviz/velodyne_top.rviz\"/>\n+ <let name=\"rviz_profile\" value=\"$(find-pkg-share extrinsic_ground_plane_calibrator)/rviz/<YOUR-RVIZ-CONFIG>.rviz\"/>\n

Then, we will add all our sensor frames to extrinsic_calibration_manager as child frames:

<!-- extrinsic_calibration_manager -->\n-   <node pkg=\"extrinsic_calibration_manager\" exec=\"extrinsic_calibration_manager\" name=\"extrinsic_calibration_manager\" output=\"screen\">\n-     <param name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n-     <param name=\"child_frames\" value=\"\n-     [velodyne_top_base_link,\n-     livox_front_left_base_link,\n-     livox_front_center_base_link,\n-     livox_front_right_base_link]\"/>\n-   </node>\n+   <node pkg=\"extrinsic_calibration_manager\" exec=\"extrinsic_calibration_manager\" name=\"extrinsic_calibration_manager\" output=\"screen\">\n+     <param name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n+     <!-- add your sensor frames here -->\n+     <param name=\"child_frames\" value=\"\n+     [<YOUR_SENSOR_BASE_LINK>,\n+     YOUR_SENSOR_BASE_LINK,\n+     YOUR_SENSOR_BASE_LINK,\n+     YOUR_SENSOR_BASE_LINK\n+     ...]\"/>\n+   </node>\n

    For tutorial_vehicle there are two lidar sensors (rs_helios_top and rs_bpearl_front), so it will be like this:

    i.e extrinsic_calibration_manager child_frames for tutorial_vehicle
    +   <!-- extrinsic_calibration_manager -->\n+   <node pkg=\"extrinsic_calibration_manager\" exec=\"extrinsic_calibration_manager\" name=\"extrinsic_calibration_manager\" output=\"screen\">\n+     <param name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n+     <!-- add your sensor frames here -->\n+     <param name=\"child_frames\" value=\"\n+     [rs_helios_top_base_link,\n+     rs_bpearl_front_base_link]\"/>\n+   </node>\n

After that, we will add our lidar sensor configurations to the ground-based calibrator; to do so, we will add these lines to our ground_plane_sensor_kit.launch.xml file:

    -  <group>\n-    <include file=\"$(find-pkg-share extrinsic_ground_plane_calibrator)/launch/calibrator.launch.xml\">\n-      <arg name=\"ns\" value=\"$(var parent_frame)/velodyne_top_base_link\"/>\n-      <arg name=\"base_frame\" value=\"$(var base_frame)\"/>\n-      <arg name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n-      <arg name=\"child_frame\" value=\"velodyne_top_base_link\"/>\n-      <arg name=\"pointcloud_topic\" value=\"/sensing/lidar/top/pointcloud_raw\"/>\n-    </include>\n-  </group>\n+  <group>\n+   <include file=\"$(find-pkg-share extrinsic_ground_plane_calibrator)/launch/calibrator.launch.xml\">\n+     <arg name=\"ns\" value=\"$(var parent_frame)/YOUR_SENSOR_BASE_LINK\"/>\n+     <arg name=\"base_frame\" value=\"$(var base_frame)\"/>\n+     <arg name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n+     <arg name=\"child_frame\" value=\"YOUR_SENSOR_BASE_LINK\"/>\n+     <arg name=\"pointcloud_topic\" value=\"<YOUR_SENSOR_TOPIC_NAME>\"/>\n+   </include>\n+ </group>\n+  ...\n+  ...\n+  ...\n+  ...\n+  ...\n+\n
    i.e., launch calibrator.launch.xml for each of tutorial_vehicle's lidars
      <!-- rs_helios_top_base_link: extrinsic_ground_plane_calibrator -->\n<group>\n<include file=\"$(find-pkg-share extrinsic_ground_plane_calibrator)/launch/calibrator.launch.xml\">\n<arg name=\"ns\" value=\"$(var parent_frame)/rs_helios_top_base_link\"/>\n<arg name=\"base_frame\" value=\"$(var base_frame)\"/>\n<arg name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n<arg name=\"child_frame\" value=\"rs_helios_top_base_link\"/>\n<arg name=\"pointcloud_topic\" value=\"/sensing/lidar/top/pointcloud_raw\"/>\n</include>\n</group>\n\n<!-- rs_bpearl_front_base_link: extrinsic_ground_plane_calibrator -->\n<group>\n<include file=\"$(find-pkg-share extrinsic_ground_plane_calibrator)/launch/calibrator.launch.xml\">\n<arg name=\"ns\" value=\"$(var parent_frame)/rs_bpearl_front_base_link\"/>\n<arg name=\"base_frame\" value=\"$(var base_frame)\"/>\n<arg name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n<arg name=\"child_frame\" value=\"rs_bpearl_front_base_link\"/>\n<arg name=\"pointcloud_topic\" value=\"/sensing/lidar/front/pointcloud_raw\"/>\n</include>\n</group>\n\n<node pkg=\"rviz2\" exec=\"rviz2\" name=\"rviz2\" output=\"screen\" args=\"-d $(var rviz_profile)\" if=\"$(var calibration_rviz)\"/>\n</launch>\n

    The ground_plane_sensor_kit.launch.xml launch file for tutorial_vehicle should look like this:

    Sample ground_plane_sensor_kit.launch.xml for tutorial_vehicle
    <?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<launch>\n<arg name=\"vehicle_id\" default=\"tutorial_vehicle\"/>\n<let name=\"sensor_model\" value=\"tutorial_vehicle_sensor_kit\"/>\n<let name=\"base_frame\" value=\"base_link\"/>\n<let name=\"parent_frame\" value=\"sensor_kit_base_link\"/>\n<let name=\"rviz_profile\" value=\"$(find-pkg-share extrinsic_ground_plane_calibrator)/rviz/velodyne_top.rviz\"/>\n<arg name=\"calibration_rviz\" default=\"true\"/>\n\n<!-- extrinsic_calibration_client -->\n<arg name=\"src_yaml\" default=\"$(find-pkg-share individual_params)/config/$(var vehicle_id)/$(var sensor_model)/sensor_kit_calibration.yaml\"/>\n<arg name=\"dst_yaml\" default=\"$(env HOME)/sensor_kit_calibration.yaml\"/>\n\n<node pkg=\"extrinsic_calibration_client\" exec=\"extrinsic_calibration_client\" name=\"extrinsic_calibration_client\" output=\"screen\">\n<param name=\"src_path\" value=\"$(var src_yaml)\"/>\n<param name=\"dst_path\" value=\"$(var dst_yaml)\"/>\n</node>\n\n<!-- extrinsic_calibration_manager -->\n<node pkg=\"extrinsic_calibration_manager\" exec=\"extrinsic_calibration_manager\" name=\"extrinsic_calibration_manager\" output=\"screen\">\n<param name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n<param name=\"child_frames\" value=\"\n    [rs_helios_top_base_link,\n    rs_bpearl_front_base_link]\"/>\n</node>\n\n<!-- rs_helios_top_base_link: extrinsic_ground_plane_calibrator -->\n<group>\n<include file=\"$(find-pkg-share extrinsic_ground_plane_calibrator)/launch/calibrator.launch.xml\">\n<arg name=\"ns\" value=\"$(var parent_frame)/rs_helios_top_base_link\"/>\n<arg name=\"base_frame\" value=\"$(var base_frame)\"/>\n<arg name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n<arg name=\"child_frame\" value=\"rs_helios_top_base_link\"/>\n<arg name=\"pointcloud_topic\" value=\"/sensing/lidar/top/pointcloud_raw\"/>\n</include>\n</group>\n\n<!-- rs_bpearl_front_base_link: extrinsic_ground_plane_calibrator -->\n<group>\n<include file=\"$(find-pkg-share extrinsic_ground_plane_calibrator)/launch/calibrator.launch.xml\">\n<arg name=\"ns\" value=\"$(var parent_frame)/rs_bpearl_front_base_link\"/>\n<arg name=\"base_frame\" value=\"$(var base_frame)\"/>\n<arg name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n<arg name=\"child_frame\" value=\"rs_bpearl_front_base_link\"/>\n<arg name=\"pointcloud_topic\" value=\"/sensing/lidar/front/pointcloud_raw\"/>\n</include>\n</group>\n\n<node pkg=\"rviz2\" exec=\"rviz2\" name=\"rviz2\" output=\"screen\" args=\"-d $(var rviz_profile)\" if=\"$(var calibration_rviz)\"/>\n</launch>\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/ground-lidar-calibration/#ground-plane-lidar-calibration-process-with-extrinsic-ground-plane-calibrator","title":"Ground plane-lidar calibration process with extrinsic ground-plane calibrator","text":"

    After completing the ground_plane.launch.xml and ground_plane_sensor_kit.launch.xml launch files for our own sensor kit, we are ready to calibrate our lidars. First of all, we need to build the extrinsic_calibration_manager package:

    colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release --packages-select extrinsic_calibration_manager\n

    Now, we are ready to launch and use the ground plane-lidar calibrator:

    ros2 launch extrinsic_calibration_manager calibration.launch.xml mode:=ground_plane sensor_model:=<OWN-SENSOR-KIT> vehicle_model:=<OWN-VEHICLE-MODEL> vehicle_id:=<VEHICLE-ID>\n

    For tutorial vehicle:

    ros2 launch extrinsic_calibration_manager calibration.launch.xml mode:=ground_plane sensor_model:=tutorial_vehicle_sensor_kit vehicle_model:=tutorial_vehicle vehicle_id:=tutorial_vehicle\n

    The rviz2 screen will open with several preset configurations. You need to update it with your sensor information topics, sensor_frames and pointcloud_inlier_topics as shown in the video included at the end of this document. Also, you can save the rviz2 config in the rviz directory so that you can use it later by modifying ground_plane_sensor_kit.launch.xml.

    extrinsic_ground_plane_calibrator/\n   \u2514\u2500 rviz/\n+        \u2514\u2500 tutorial_vehicle_sensor_kit.rviz\n

    Then play the ROS 2 bag file, and the calibration process will start:

    ros2 bag play <rosbag_path> --clock -l -r 0.2 \\\n--remap /tf:=/null/tf /tf_static:=/null/tf_static # if tf is recorded\n

    Since the calibration process runs automatically, you can find sensor_kit_calibration.yaml in your $HOME directory after the calibration is complete.

    Before Ground Plane - Lidar Calibration After Ground Plane - Lidar Calibration

    Here is the video for demonstrating the ground plane - lidar calibration process on tutorial_vehicle:

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/intrinsic-camera-calibration/","title":"Intrinsic camera calibration","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/intrinsic-camera-calibration/#intrinsic-camera-calibration","title":"Intrinsic camera calibration","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/intrinsic-camera-calibration/#overview","title":"Overview","text":"

    Intrinsic camera calibration is the process of determining the internal parameters of a camera, which will be used when projecting 3D information into images. These parameters include focal length, optical center, and lens distortion coefficients. In order to perform camera intrinsic calibration, we will use TIER IV's Intrinsic Camera Calibrator tool. First of all, we need a calibration board, which can be a dot, chessboard, or AprilTag grid board. In this tutorial, we will use this 7x7 chess board consisting of 7 cm squares:

    Our 7x7 calibration chess board for this tutorial section.

    Here are some calibration board samples from Intrinsic Camera Calibrator page:

    • Chess boards (6x8 example)
    • Circle dot boards (6x8 example)
    • Apriltag grid board (3x4 example)

    If you want to use a bag file for the calibration process, the bag file must include the image_raw topic of your camera sensor. However, you can also perform the calibration in real time (recommended).

    ROS 2 Bag example for intrinsic camera calibration process
    Files:             rosbag2_2023_09_18-16_19_08_0.db3\nBag size:          12.0 GiB\nStorage id:        sqlite3\nDuration:          135.968s\nStart:             Sep 18 2023 16:19:08.966 (1695043148.966)\nEnd:               Sep 18 2023 16:21:24.934 (1695043284.934)\nMessages:          4122\nTopic information: Topic: /sensing/camera/camera0/image_raw | Type: sensor_msgs/msg/Image | Count: 2061 | Serialization Format: cdr\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/intrinsic-camera-calibration/#intrinsic-camera-calibration_1","title":"Intrinsic camera calibration","text":"

    Unlike the other calibration packages in our tutorials, this package does not require creating an initialization file. So we can start by launching the intrinsic calibrator package.

    cd <YOUR-OWN-AUTOWARE-DIRECTORY>\nsource install/setup.bash\n

    After that, we will launch intrinsic calibrator:

    ros2 launch intrinsic_camera_calibrator calibrator.launch.xml\n

    Then, the initial configuration and camera intrinsic calibration panels will show up. We set the initial configurations for our calibration here.

    Initial configuration panel

    We set our image source (it can be \"ROS topic\", \"ROS bag\" or \"Image files\") from the source options section. We will calibrate our camera with the \"ROS topic\" source. After selecting an image source from this panel, we need to configure \"Board options\" as well. The calibration board can be a chess board, dot board or Apriltag. We also need to set the board parameters; to do that, click the \"Board parameters\" button and set the row count, column count, and cell size.

    After setting the image source and board parameters, we are ready for the calibration process. Please click the start button; you will see the Topic configuration panel. Please select the appropriate camera raw topic for the calibration process.

    Topic configuration panel

    Then you are ready to calibrate. Please collect data at different X-Y positions, sizes and skews. You can see the statistics of your collected data by clicking view data collection statistics. For more information, please refer to the Intrinsic Camera Calibrator page.

    Intrinsic calibrator interface

    After the data collection is completed, you can click the \"Calibrate\" button to perform the calibration process. After the calibration is completed, you will see a data visualization of the calibration result statistics. You can inspect your calibration results by changing the \"Image view type\" from \"Source unrectified\" to \"Source rectified\". If your calibration is successful (there should be no distortion in the rectified image), you can save your calibration results with the \"Save\" button. The output will be named <YOUR-CAMERA-NAME>_info.yaml, and you can use this file directly with your camera driver.
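
    For reference, the saved file generally follows the standard ROS camera calibration YAML layout; the snippet below is only an illustration with placeholder values and field names, not an actual calibration result:

    image_width: 1920\nimage_height: 1080\ncamera_name: camera0\ncamera_matrix:\n  rows: 3\n  cols: 3\n  data: [1000.0, 0.0, 960.0, 0.0, 1000.0, 540.0, 0.0, 0.0, 1.0]\ndistortion_model: plumb_bob\ndistortion_coefficients:\n  rows: 1\n  cols: 5\n  data: [-0.1, 0.05, 0.0, 0.0, 0.0]\nrectification_matrix:\n  rows: 3\n  cols: 3\n  data: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]\nprojection_matrix:\n  rows: 3\n  cols: 4\n  data: [1000.0, 0.0, 960.0, 0.0, 0.0, 1000.0, 540.0, 0.0, 0.0, 0.0, 1.0, 0.0]\n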

    Here is the video for demonstrating the intrinsic camera calibration process on tutorial_vehicle:

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/lidar-camera-calibration/","title":"Lidar-Camera calibration","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/lidar-camera-calibration/#lidar-camera-calibration","title":"Lidar-Camera calibration","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/lidar-camera-calibration/#overview","title":"Overview","text":"

    Lidar-camera calibration is a crucial process in the field of autonomous driving and robotics, where both lidar sensors and cameras are used for perception. The goal of calibration is to accurately align the data from these different sensors in order to create a comprehensive and coherent representation of the environment by projecting lidar points onto the camera image. In this tutorial, we will explain TIER IV's interactive camera-lidar calibrator. Also, if you have ArUco marker boards for calibration, another lidar-camera calibration method is included in TIER IV's CalibrationTools repository.
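
    To make the idea of projecting lidar points onto the camera image concrete, here is a minimal standalone Python sketch (not part of the calibration tools; it assumes numpy and uses made-up extrinsic and intrinsic values purely for illustration):

    import numpy as np\n\n# Hypothetical extrinsic calibration: rotation and translation from the lidar frame to the camera frame\nR = np.eye(3)                   # placeholder rotation matrix\nt = np.array([0.1, 0.0, -0.2])  # placeholder translation in meters\n\n# Hypothetical intrinsic matrix from intrinsic calibration (fx, fy, cx, cy in pixels)\nK = np.array([[1000.0, 0.0, 960.0], [0.0, 1000.0, 540.0], [0.0, 0.0, 1.0]])\n\n# A single point measured by the lidar, in the lidar frame (meters)\npoint_lidar = np.array([5.0, 1.0, 0.5])\n\n# Transform the point into the camera frame with the extrinsics, then project with the pinhole model\npoint_camera = R @ point_lidar + t\nu, v, w = K @ point_camera\nprint(u / w, v / w)  # pixel coordinates of the lidar point in the image\n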

    Warning

    You need to apply intrinsic calibration before starting lidar-camera extrinsic calibration process. Also, please obtain the initial calibration results from the Manual Calibration section. This is crucial for obtaining accurate results from this tool. We will utilize the initial calibration parameters that were calculated in the previous step of this tutorial. To apply these initial values in the calibration tools, please update your sensor calibration files within the individual parameter package.

    Your bag file must include the calibration lidar topic and camera topics. Camera topics can be compressed or raw, but remember that we will update the interactive calibrator launch argument use_compressed according to the topic type.

    ROS 2 Bag example of our calibration process (only one camera is mounted). If you have multiple cameras, please add their camera_info and image topics as well.
    Files:             rosbag2_2023_09_12-13_57_03_0.db3\nBag size:          5.8 GiB\nStorage id:        sqlite3\nDuration:          51.419s\nStart:             Sep 12 2023 13:57:03.691 (1694516223.691)\nEnd:               Sep 12 2023 13:57:55.110 (1694516275.110)\nMessages:          2590\nTopic information: Topic: /sensing/lidar/top/pointcloud_raw | Type: sensor_msgs/msg/PointCloud2 | Count: 515 | Serialization Format: cdr\nTopic: /sensing/camera/camera0/image_raw | Type: sensor_msgs/msg/Image | Count: 780 | Serialization Format: cdr\nTopic: /sensing/camera/camera0/camera_info | Type: sensor_msgs/msg/CameraInfo | Count: 780 | Serialization Format: cdr\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/lidar-camera-calibration/#lidar-camera-calibration_1","title":"Lidar-Camera calibration","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/lidar-camera-calibration/#creating-launch-files","title":"Creating launch files","text":"

    We start by creating launch files for our vehicle, as in the \"Extrinsic Manual Calibration\" process:

    cd <YOUR-OWN-AUTOWARE-DIRECTORY>/src/autoware/calibration_tools/sensor\ncd extrinsic_calibration_manager/launch\ncd <YOUR-OWN-SENSOR-KIT-NAME> # i.e. for our guide, it will be cd tutorial_vehicle_sensor_kit which is created in manual calibration\ntouch interactive.launch.xml interactive_sensor_kit.launch.xml\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/lidar-camera-calibration/#modifying-launch-files-according-to-your-sensor-kit","title":"Modifying launch files according to your sensor kit","text":"

    We will be modifying these interactive.launch.xml and interactive_sensor_kit.launch.xml by using TIER IV's sample sensor kit aip_xx1. So, you should copy the contents of these two files from aip_xx1 to your created files.

    Then we will continue by adding the vehicle_id and sensor model names to interactive.launch.xml. (The values themselves are not important, since these parameters will be overridden by launch arguments.)

      <arg name=\"vehicle_id\" default=\"default\"/>\n\n  <let name=\"sensor_model\" value=\"aip_x1\"/>\n  <?xml version=\"1.0\" encoding=\"UTF-8\"?>\n  <launch>\n-   <arg name=\"vehicle_id\" default=\"default\"/>\n+   <arg name=\"vehicle_id\" default=\"<YOUR_VEHICLE_ID>\"/>\n+\n-   <arg name=\"sensor_model\" default=\"aip_x1\"/>\n+   <let name=\"sensor_model\" value=\"<YOUR_SENSOR_KIT_NAME>\"/>\n

    If you want to use the concatenated pointcloud as the input cloud (the calibration process will initiate the logging simulator, resulting in the construction of the lidar pipeline and the appearance of the concatenated point cloud), you must set the use_concatenated_pointcloud value to true.

    -   <arg name=\"use_concatenated_pointcloud\" default=\"false\"/>\n+   <arg name=\"use_concatenated_pointcloud\" default=\"true\"/>\n

    The final version of the file (interactive.launch.xml) for tutorial_vehicle should be like this:

    Sample interactive.launch.xml file for tutorial vehicle
    <?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<launch>\n<arg name=\"vehicle_id\" default=\"tutorial_vehicle\"/>\n<let name=\"sensor_model\" value=\"tutorial_vehicle_sensor_kit\"/>\n<arg name=\"camera_name\"/>\n<arg name=\"rviz\" default=\"false\"/>\n<arg name=\"use_concatenated_pointcloud\" default=\"true\"/>\n\n<group>\n<push-ros-namespace namespace=\"sensor_kit\"/>\n<include file=\"$(find-pkg-share extrinsic_calibration_manager)/launch/$(var sensor_model)/interactive_sensor_kit.launch.xml\" if=\"$(var rviz)\">\n<arg name=\"vehicle_id\" value=\"$(var vehicle_id)\"/>\n<arg name=\"camera_name\" value=\"$(var camera_name)\"/>\n</include>\n</group>\n\n<!-- You can change the config file path -->\n<node pkg=\"rviz2\" exec=\"rviz2\" name=\"rviz2\" output=\"screen\" args=\"\n-d $(find-pkg-share extrinsic_calibration_manager)/config/x2/extrinsic_interactive_calibrator.rviz\" if=\"$(var rviz)\"/>\n</launch>\n

    After completing the interactive.launch.xml file, we are ready to implement interactive_sensor_kit.launch.xml for our own sensor model.

    Optionally (don't forget, these parameters will be overridden by launch arguments), you can modify sensor_kit and vehicle_id as in interactive.launch.xml over this xml snippet. We will set parent_frame for calibration as sensor_kit_base_link:

    The default camera input topic of the interactive calibrator is the compressed image. If you want to use the raw image instead of the compressed image, you need to update the image_topic variable with your camera sensor topic.

        ...\n-   <let name=\"image_topic\" value=\"/sensing/camera/$(var camera_name)/image_raw\"/>\n+   <let name=\"image_topic\" value=\"/sensing/camera/$(var camera_name)/image_compressed\"/>\n   ...\n

    After updating your topic name, you need to add the use_compressed parameter (default value is true) to the interactive_calibrator node with a value of false.

       ...\n   <node pkg=\"extrinsic_interactive_calibrator\" exec=\"interactive_calibrator\" name=\"interactive_calibrator\" output=\"screen\">\n     <remap from=\"pointcloud\" to=\"$(var pointcloud_topic)\"/>\n     <remap from=\"image\" to=\"$(var image_compressed_topic)\"/>\n     <remap from=\"camera_info\" to=\"$(var camera_info_topic)\"/>\n     <remap from=\"calibration_points_input\" to=\"calibration_points\"/>\n+    <param name=\"use_compressed\" value=\"false\"/>\n  ...\n

    Then you can customize the pointcloud topic for each camera. For example, if you want to calibrate camera_1 with the left lidar, you should change the launch file like this:

        <let name=\"pointcloud_topic\" value=\"/sensing/lidar/top/pointcloud_raw\" if=\"$(eval &quot;'$(var camera_name)' == 'camera0' &quot;)\"/>\n-   <let name=\"pointcloud_topic\" value=\"/sensing/lidar/top/pointcloud_raw\" if=\"$(eval &quot;'$(var camera_name)' == 'camera1' &quot;)\"/>\n+   <let name=\"pointcloud_topic\" value=\"/sensing/lidar/left/pointcloud_raw\" if=\"$(eval &quot;'$(var camera_name)' == 'camera1' &quot;)\"/>\n   <let name=\"pointcloud_topic\" value=\"/sensing/lidar/top/pointcloud_raw\" if=\"$(eval &quot;'$(var camera_name)' == 'camera2' &quot;)\"/>\n    <let name=\"pointcloud_topic\" value=\"/sensing/lidar/top/pointcloud_raw\" if=\"$(eval &quot;'$(var camera_name)' == 'camera3' &quot;)\"/>\n    <let name=\"pointcloud_topic\" value=\"/sensing/lidar/top/pointcloud_raw\" if=\"$(eval &quot;'$(var camera_name)' == 'camera4' &quot;)\"/>\n    <let name=\"pointcloud_topic\" value=\"/sensing/lidar/top/pointcloud_raw\" if=\"$(eval &quot;'$(var camera_name)' == 'camera5' &quot;)\"/>\n    ...\n

    The interactive_sensor_kit.launch.xml launch file for tutorial_vehicle should look like this:

    i.e. interactive_sensor_kit.launch.xml for tutorial_vehicle
    <?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<launch>\n<arg name=\"vehicle_id\" default=\"tutorial_vehicle\"/>\n<let name=\"sensor_model\" value=\"tutorial_vehicle_sensor_kit\"/>\n<let name=\"parent_frame\" value=\"sensor_kit_base_link\"/>\n\n<!-- extrinsic_calibration_client -->\n<arg name=\"src_yaml\" default=\"$(find-pkg-share individual_params)/config/$(var vehicle_id)/$(var sensor_model)/sensor_kit_calibration.yaml\"/>\n<arg name=\"dst_yaml\" default=\"$(env HOME)/sensor_kit_calibration.yaml\"/>\n\n\n<arg name=\"camera_name\"/>\n\n<let name=\"image_topic\" value=\"/sensing/camera/$(var camera_name)/image_raw\"/>\n<let name=\"image_topic\" value=\"/sensing/camera/traffic_light/image_raw\" if=\"$(eval &quot;'$(var camera_name)' == 'traffic_light_left_camera' &quot;)\"/>\n\n<let name=\"use_compressed\" value=\"false\"/>\n\n<let name=\"image_compressed_topic\" value=\"/sensing/camera/$(var camera_name)/image_raw/compressed\"/>\n<let name=\"image_compressed_topic\" value=\"/sensing/camera/traffic_light/image_raw/compressed\" if=\"$(eval &quot;'$(var camera_name)' == 'traffic_light_left_camera' &quot;)\"/>\n\n<let name=\"camera_info_topic\" value=\"/sensing/camera/$(var camera_name)/camera_info\"/>\n<let name=\"camera_info_topic\" value=\"/sensing/camera/traffic_light/camera_info\" if=\"$(eval &quot;'$(var camera_name)' == 'traffic_light_left_camera' &quot;)\"/>\n\n<let name=\"pointcloud_topic\" value=\"/sensing/lidar/top/pointcloud_raw\" if=\"$(eval &quot;'$(var camera_name)' == 'camera0' &quot;)\"/>\n<let name=\"pointcloud_topic\" value=\"/sensing/lidar/top/pointcloud_raw\" if=\"$(eval &quot;'$(var camera_name)' == 'camera1' &quot;)\"/>\n<let name=\"pointcloud_topic\" value=\"/sensing/lidar/top/pointcloud_raw\" if=\"$(eval &quot;'$(var camera_name)' == 'camera2' &quot;)\"/>\n<let name=\"pointcloud_topic\" value=\"/sensing/lidar/top/pointcloud_raw\" if=\"$(eval &quot;'$(var camera_name)' == 'camera3' &quot;)\"/>\n<let name=\"pointcloud_topic\" value=\"/sensing/lidar/top/pointcloud_raw\" if=\"$(eval &quot;'$(var camera_name)' == 'camera4' &quot;)\"/>\n<let name=\"pointcloud_topic\" value=\"/sensing/lidar/top/pointcloud_raw\" if=\"$(eval &quot;'$(var camera_name)' == 'camera5' &quot;)\"/>\n<let name=\"pointcloud_topic\" value=\"/sensing/lidar/top/pointcloud_raw\" if=\"$(eval &quot;'$(var camera_name)' == 'traffic_light_left_camera' &quot;)\"/>\n\n<let name=\"calibrate_sensor\" value=\"false\"/>\n<let name=\"calibrate_sensor\" value=\"true\" if=\"$(eval &quot;'$(var camera_name)' == 'camera0' &quot;)\"/>\n<let name=\"calibrate_sensor\" value=\"true\" if=\"$(eval &quot;'$(var camera_name)' == 'camera1' &quot;)\"/>\n<let name=\"calibrate_sensor\" value=\"true\" if=\"$(eval &quot;'$(var camera_name)' == 'camera2' &quot;)\"/>\n<let name=\"calibrate_sensor\" value=\"true\" if=\"$(eval &quot;'$(var camera_name)' == 'camera3' &quot;)\"/>\n<let name=\"calibrate_sensor\" value=\"true\" if=\"$(eval &quot;'$(var camera_name)' == 'camera4' &quot;)\"/>\n<let name=\"calibrate_sensor\" value=\"true\" if=\"$(eval &quot;'$(var camera_name)' == 'camera5' &quot;)\"/>\n<let name=\"calibrate_sensor\" value=\"true\" if=\"$(eval &quot;'$(var camera_name)' == 'traffic_light_left_camera' &quot;)\"/>\n\n<let name=\"camera_frame\" value=\"\"/>\n<let name=\"camera_frame\" value=\"camera0/camera_link\" if=\"$(eval &quot;'$(var camera_name)' == 'camera0' &quot;)\"/>\n<let name=\"camera_frame\" value=\"camera1/camera_link\" if=\"$(eval &quot;'$(var camera_name)' == 'camera1' &quot;)\"/>\n<let name=\"camera_frame\" 
value=\"camera2/camera_link\" if=\"$(eval &quot;'$(var camera_name)' == 'camera2' &quot;)\"/>\n<let name=\"camera_frame\" value=\"camera3/camera_link\" if=\"$(eval &quot;'$(var camera_name)' == 'camera3' &quot;)\"/>\n<let name=\"camera_frame\" value=\"camera4/camera_link\" if=\"$(eval &quot;'$(var camera_name)' == 'camera4' &quot;)\"/>\n<let name=\"camera_frame\" value=\"camera5/camera_link\" if=\"$(eval &quot;'$(var camera_name)' == 'camera5' &quot;)\"/>\n<let name=\"camera_frame\" value=\"traffic_light_left_camera/camera_link\" if=\"$(eval &quot;'$(var camera_name)' == 'traffic_light_left_camera' &quot;)\"/>\n\n<node pkg=\"extrinsic_calibration_client\" exec=\"extrinsic_calibration_client\" name=\"extrinsic_calibration_client\" output=\"screen\" if=\"$(var calibrate_sensor)\">\n<param name=\"src_path\" value=\"$(var src_yaml)\"/>\n<param name=\"dst_path\" value=\"$(var dst_yaml)\"/>\n</node>\n\n<!-- extrinsic_calibration_manager -->\n<node pkg=\"extrinsic_calibration_manager\" exec=\"extrinsic_calibration_manager\" name=\"extrinsic_calibration_manager\" output=\"screen\" if=\"$(var calibrate_sensor)\">\n<param name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n<param name=\"child_frames\" value=\"\n    [$(var camera_frame)]\"/>\n</node>\n\n<!-- interactive calibrator -->\n<group if=\"$(var calibrate_sensor)\">\n<push-ros-namespace namespace=\"$(var parent_frame)/$(var camera_frame)\"/>\n\n<node pkg=\"extrinsic_interactive_calibrator\" exec=\"interactive_calibrator\" name=\"interactive_calibrator\" output=\"screen\">\n<remap from=\"pointcloud\" to=\"$(var pointcloud_topic)\"/>\n<remap from=\"image\" to=\"$(var image_topic)\"/>\n<remap from=\"camera_info\" to=\"$(var camera_info_topic)\"/>\n<remap from=\"calibration_points_input\" to=\"calibration_points\"/>\n\n<param name=\"camera_parent_frame\" value=\"$(var parent_frame)\"/>\n<param name=\"camera_frame\" value=\"$(var camera_frame)\"/>\n<param name=\"use_compressed\" value=\"$(var use_compressed)\"/>\n</node>\n\n<include file=\"$(find-pkg-share intrinsic_camera_calibration)/launch/optimizer.launch.xml\"/>\n</group>\n</launch>\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/lidar-camera-calibration/#lidar-camera-calibration-process-with-interactive-camera-lidar-calibrator","title":"Lidar-camera calibration process with interactive camera-lidar calibrator","text":"

    After completing the interactive.launch.xml and interactive_sensor_kit.launch.xml launch files for our own sensor kit, we are ready to calibrate our camera and lidar. First of all, we need to build the extrinsic_calibration_manager package:

    colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release --packages-select extrinsic_calibration_manager\n

    Now we are ready to launch and use the interactive lidar-camera calibrator:

    ros2 launch extrinsic_calibration_manager calibration.launch.xml mode:=interactive sensor_model:=<OWN-SENSOR-KIT> vehicle_model:=<OWN-VEHICLE-MODEL> vehicle_id:=<VEHICLE-ID> camera_name:=<CALIBRATION-CAMERA>\n

    For tutorial vehicle:

    ros2 launch extrinsic_calibration_manager calibration.launch.xml mode:=interactive sensor_model:=tutorial_vehicle_sensor_kit vehicle_model:=tutorial_vehicle vehicle_id:=tutorial_vehicle\n

    Then, we need to play our bag file.

    ros2 bag play <rosbag_path> --clock -l -r 0.2 \\\n--remap /tf:=/null/tf /tf_static:=/null/tf_static # if tf is recorded\n

    An interactive calibrator rqt window and Rviz2 will be shown. You must add your lidar sensor point cloud to Rviz2; then we can publish points for the calibrator.

    • After that, let's start by pressing the Publish Point button and selecting points on the point cloud that are also visible in the projected image. Then, you need to click on the image point that corresponds to the projected lidar point. You will see the matched calibration points.

    • The red points indicate selected lidar points and the green ones indicate selected image points. You must match at least 6 points to perform the calibration. If you have a wrong match, you can remove it by simply clicking on it. After selecting points on the image and the lidar, you are ready to calibrate. Once enough points are matched, the \"Calibrate extrinsic\" button will be enabled. Click this button and change the tf source from Initial /tf to Calibrator to see the calibration results.

    After the completion of the calibration, you need to save your calibration results via the \"Save calibration\" button. The saved format is JSON, so you need to update the calibration parameters in sensor_kit_calibration.yaml in the individual_params and sensor_kit_description packages accordingly.

    Sample calibration output
    {\n\"header\": {\n\"stamp\": {\n\"sec\": 1694776487,\n\"nanosec\": 423288443\n},\n\"frame_id\": \"sensor_kit_base_link\"\n},\n\"child_frame_id\": \"camera0/camera_link\",\n\"transform\": {\n\"translation\": {\n\"x\": 0.054564283153017916,\n\"y\": 0.040947512210503106,\n\"z\": -0.071735410952332\n},\n\"rotation\": {\n\"x\": -0.49984112274024817,\n\"y\": 0.4905405357176159,\n\"z\": -0.5086269994990131,\n\"w\": 0.5008267267391722\n}\n},\n\"roll\": -1.5517347113946862,\n\"pitch\": -0.01711459479043047,\n\"yaw\": -1.5694590141484235\n}\n

    Here is the video for demonstrating the lidar-camera calibration process on tutorial_vehicle:

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/lidar-imu-calibration/","title":"Lidar-Imu Calibration","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/lidar-imu-calibration/#lidar-imu-calibration","title":"Lidar-Imu Calibration","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/lidar-imu-calibration/#overview","title":"Overview","text":"

    Lidar-IMU calibration is important for the localization and mapping algorithms used in autonomous driving. In this tutorial, we will calibrate the lidar and IMU sensors using the OA-LICalib tool, which is developed by the APRIL Lab at Zhejiang University in China.

    OA-LICalib is a calibration method for LiDAR-inertial systems based on continuous-time batch optimization, where the intrinsics of both sensors, the time offset between the sensors, and the spatial-temporal extrinsics between the sensors are calibrated comprehensively without explicit hand-crafted targets.

    Warning

    This calibration tool is developed with ROS 1, and it is not compatible with ROS 2. So, we are providing a docker image which has ROS 1 and all necessary packages. In the calibration instructions, we will ask you to install docker on your system.

    ROS 2 Bag example of our calibration process for tutorial_vehicle
    Files:             rosbag2_2023_08_18-14_42_12_0.db3\nBag size:          12.4 GiB\nStorage id:        sqlite3\nDuration:          202.140s\nStart:             Aug 18 2023 14:42:12.586 (1692358932.586)\nEnd:               Aug 18 2023 14:45:34.727 (1692359134.727)\nMessages:          22237\nTopic information: Topic: /sensing/gnss/sbg/ros/imu/data | Type: sensor_msgs/msg/Imu | Count: 20215 | Serialization Format: cdr\n                   Topic: /sensing/lidar/top/pointcloud_raw | Type: sensor_msgs/msg/PointCloud2 | Count: 2022 | Serialization Format: cdr\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/lidar-imu-calibration/#data-collection-and-preparation","title":"Data Collection and Preparation","text":"

    For lidar-IMU calibration, a ROS 1 bag file which contains sensor_msgs/PointCloud2 and sensor_msgs/Imu messages is needed. To obtain good calibration results, you need to move the sensors along all 6 axes (x, y, z, roll, pitch, yaw) while collecting data. Therefore, holding the sensors in your hand during data collection will give better results, but you can also collect data on the vehicle. If you are collecting data on the vehicle, you should drive in figure-eight and grid patterns.

    Lidar - IMU Calibration Data Collection

    Moreover, the calibration accuracy is affected by the data collection environment. You should collect your data in a place that contains a lot of flat surfaces; indoor spaces are the best locations under these conditions. However, you can also achieve good results outdoors. When collecting data, make sure to trace figure-eight and grid patterns, capturing data from every angle.
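
    For example, a calibration bag containing only the required lidar and IMU topics could be recorded with a command along these lines (the topic names here are those of tutorial_vehicle; replace them with your own):

    ros2 bag record /sensing/lidar/top/pointcloud_raw /sensing/gnss/sbg/ros/imu/data\n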

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/lidar-imu-calibration/#converting-ros-2-bag-to-ros-1-bag","title":"Converting ROS 2 Bag to ROS 1 Bag","text":"

    If you collected your calibration data in ROS 2, you can convert it to ROS 1 bag file with the following instructions:

    • Split your ROS 2 bag file if it contains non-standard message topics (you can only select sensor_msgs/PointCloud2 and sensor_msgs/Imu messages), and convert your split ROS 2 bag file to ROS 1 bag.

    Create a yaml file named out.yaml which contains your lidar and imu topics:

    output_bags:\n- uri: splitted_bag\n  topics: [/your/imu/topic, /your/pointcloud/topic]\n

    Split your ROS 2 bag file:

    ros2 bag convert -i <YOUR-ROS2-BAG-FOLDER> -o out.yaml\n

    Convert your split ROS 2 bag file to ROS 1 bag file:

    # install bag converter tool (https://gitlab.com/ternaris/rosbags)\npip3 install rosbags\n\n# convert bag\nrosbags-convert <YOUR-SPLITTED-ROS2-BAG-FOLDER> --dst <OUTPUT-BAG-FILE>\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/lidar-imu-calibration/#lidar-imu-calibration_1","title":"Lidar-Imu Calibration","text":"

    As a first step, we need to install docker on our system. You can install docker using this link, or you can use the following commands to install docker using the Apt repository.

    Set up Docker's Apt repository:

    # Add Docker's official GPG key:\nsudo apt-get update\nsudo apt-get install ca-certificates curl gnupg\nsudo install -m 0755 -d /etc/apt/keyrings\ncurl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg\nsudo chmod a+r /etc/apt/keyrings/docker.gpg\n\n# Add the repository to Apt sources:\necho \\\n\"deb [arch=\"$(dpkg --print-architecture)\" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \\\n  \"$(. /etc/os-release && echo \"$VERSION_CODENAME\")\" stable\" | \\\nsudo tee /etc/apt/sources.list.d/docker.list > /dev/null\nsudo apt-get update\n

    Install the Docker packages:

    sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin\n

    To check if docker is installed correctly, you can run the following command:

    sudo docker run hello-world\n

    Before finishing the installation, we need to add our user to the docker group. This will allow us to run docker commands without sudo:

    sudo groupadd docker\nsudo usermod -aG docker $USER\n

    Warning

    After running the above command, you need to log out and log in again to be able to run docker commands without sudo.

    After installing docker, we are ready to run the calibration tool. As a first step, you should clone the calibration repository:

    git clone https://github.com/leo-drive/OA-LICalib.git\n

    Then, you need to build the docker image:

    cd OA-LICalib/docker\nsudo docker build -t oalicalib .\n

    After building the docker image, you need to create a container from the image:

    Warning

    You need to update REPO_PATH with the path to the cloned repository on your system.

    export REPO_PATH=\"/path/to/OA-LICalib\"\ndocker run -it --env=\"DISPLAY\" --volume=\"$HOME/.Xauthority:/root/.Xauthority:rw\" --volume=\"/tmp/.X11-unix:/tmp/.X11-unix:rw\" --volume=\"$REPO_PATH:/root/catkin_oa_calib/src/OA-LICalib\" oalicalib bash\n

    Before running the calibration tool, you should change some parameters in the configuration file. You can find the configuration file in the OA-LICalib/config directory.

    Change the following parameters in the configuration file according to your topics and sensors:

    • These are the lidar model options: VLP_16_packet, VLP_16_points, VLP_32E_points, VLS_128_points, Ouster_16_points, Ouster_32_points, Ouster_64_points, Ouster_128_points, RS_16
    • start_time and end_time are the interval of the rosbag that you want to use
    • path_bag is the path to the rosbag file, but you need to give the path inside the container, not your local system. For example, if you have a rosbag file in the OA-LICalib/data directory, you need to give the path as /root/calib_ws/src/OA-LICalib/data/rosbag2_2023_08_18-14_42_12_0.bag
    topic_lidar: /sensing/lidar/top/pointcloud_raw\ntopic_imu: /sensing/gnss/sbg/ros/imu/data\n\nLidarModel: VLP_16_SIMU\n\nselected_segment:\n- {\n      start_time: 0,\n      end_time: 40,\n      path_bag: /root/calib_ws/src/OA-LICalib/data/rosbag2_2023_08_18-14_42_12_0.bag,\n}\n

    After creating the container and changing parameters, you can build and run the calibration tool:

    cd /root/catkin_oa_calib\ncatkin_make -DCATKIN_WHITELIST_PACKAGES=\"\"\n\nsource devel/setup.bash\nroslaunch oalicalib li_calib.launch\n

    After running the calibration tool, you can track the calibration process by connecting to the container from another terminal. To connect to the container, you can run the following commands:

    xhost +local:docker\ndocker exec -it <container_name> bash\n

    Warning

    You need to replace <container_name> with the name of your container. To see your container name, you can run the docker ps command. Its output should look something like this; you can find your container name in the last column:

    CONTAINER ID   IMAGE      COMMAND                  CREATED         STATUS         PORTS     NAMES\nadb8b559c06e   calib:v1   \"/ros_entrypoint.sh \u2026\"   6 seconds ago   Up 5 seconds             your_awesome_container_name\n

    After connecting to the container, you can watch the calibration process by running Rviz. After starting Rviz, you need to add the following topics to it:

    • /ndt_odometry/global_map
    • /ndt_odometry/cur_cloud
    rviz\n

    If /ndt_odometry/global_map looks distorted, you should tune ndt parameters in the OA-LICalib/config/simu.yaml file.

    Lidar - IMU Calibration RViz Screen

    To achieve better results, you can tune the parameters in the config/simu.yaml file. The parameters are explained below:

    • ndtResolution: Resolution of the NDT grid structure (VoxelGridCovariance); 0.5 for the indoor case and 1.0 for the outdoor case
    • ndt_key_frame_downsample: Resolution parameter for the voxel grid downsample function
    • map_downsample_size: Resolution parameter for the voxel grid downsample function
    • knot_distance: Time interval
    • plane_motion: Set true if you collect data from the vehicle
    • gyro_weight: Gyroscope sensor output's weight for trajectory estimation
    • accel_weight: Accelerometer sensor output's weight for trajectory estimation
    • lidar_weight: Lidar sensor output's weight for trajectory estimation"},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/lidar-lidar-calibration/","title":"Lidar-Lidar calibration","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/lidar-lidar-calibration/#lidar-lidar-calibration","title":"Lidar-Lidar calibration","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/lidar-lidar-calibration/#overview","title":"Overview","text":"

    In this tutorial, we will explain lidar-lidar calibration using the mapping-based lidar-lidar calibration tool from TIER IV's CalibrationTools.

    Warning

    Please obtain the initial calibration results from the Manual Calibration section. This is crucial for obtaining accurate results from this tool. We will utilize the initial calibration parameters that were calculated in the previous step of this tutorial. To apply these initial values in the calibration tools, please update your sensor calibration files within the individual parameter package.

    We need a sample bag file for the lidar-lidar calibration process which includes the raw lidar topics. Also, we recommend using an outlier-filtered point cloud for mapping, because the vehicle body is already cropped from this point cloud and therefore vehicle points are not included in the map. When you start the bag recording, you should not move the vehicle for the first 5 seconds for better mapping performance. The following shows an example of a bag file used for this calibration:

    ROS 2 Bag example of our calibration process for tutorial_vehicle
    Files:             rosbag2_2023_09_05-11_23_50_0.db3\nBag size:          3.8 GiB\nStorage id:        sqlite3\nDuration:          112.702s\nStart:             Sep  5 2023 11:23:51.105 (1693902231.105)\nEnd:               Sep  5 2023 11:25:43.808 (1693902343.808)\nMessages:          2256\nTopic information: Topic: /sensing/lidar/front/pointcloud_raw | Type: sensor_msgs/msg/PointCloud2 | Count: 1128 | Serialization Format: cdr\n                   Topic: /sensing/lidar/top/pointcloud | Type: sensor_msgs/msg/PointCloud2 | Count: 1128 | Serialization Format: cdr\n
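
    For reference, such a bag could be recorded with a command along these lines (the topic names here are those of tutorial_vehicle; replace them with your own lidar topics):

    ros2 bag record /sensing/lidar/top/pointcloud /sensing/lidar/front/pointcloud_raw\n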
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/lidar-lidar-calibration/#mapping-based-lidar-lidar-calibration","title":"Mapping-based lidar-lidar calibration","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/lidar-lidar-calibration/#creating-launch-files","title":"Creating launch files","text":"

    We start by creating launch files for our vehicle, as in the Extrinsic Manual Calibration process:

    cd <YOUR-OWN-AUTOWARE-DIRECTORY>/src/autoware/calibration_tools/sensor\ncd extrinsic_calibration_manager/launch\ncd <YOUR-OWN-SENSOR-KIT-NAME> # i.e. for our guide, it will be cd tutorial_vehicle_sensor_kit which is created in manual calibration\ntouch mapping_based.launch.xml mapping_based_sensor_kit.launch.xml\n

    We will be modifying these mapping_based.launch.xml and mapping_based_sensor_kit.launch.xml by using TIER IV's sample sensor kit aip_x1. So, you should copy the contents of these two files from aip_x1 to your created files.

    Then we will continue by adding the vehicle_id and sensor model names to mapping_based.launch.xml. (The values themselves are not important, since these parameters will be overridden by launch arguments.)

      <arg name=\"vehicle_id\" default=\"default\"/>\n\n  <let name=\"sensor_model\" value=\"aip_x1\"/>\n+ <?xml version=\"1.0\" encoding=\"UTF-8\"?>\n+ <launch>\n-   <arg name=\"vehicle_id\" default=\"default\"/>\n+   <arg name=\"vehicle_id\" default=\"<YOUR_VEHICLE_ID>\"/>\n+\n-   <arg name=\"sensor_model\" default=\"aip_x1\"/>\n+   <let name=\"sensor_model\" value=\"<YOUR_SENSOR_KIT_NAME>\"/>\n

    The final version of the file (mapping_based.launch.xml) for tutorial_vehicle should be like this:

    Sample mapping_based.launch.xml file for tutorial vehicle
    <?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<launch>\n<arg name=\"vehicle_id\" default=\"tutorial_vehicle\"/>\n<let name=\"sensor_model\" value=\"tutorial_vehicle_sensor_kit\"/>\n<arg name=\"rviz\" default=\"true\"/>\n\n<group>\n<push-ros-namespace namespace=\"sensor_kit\"/>\n<include file=\"$(find-pkg-share extrinsic_calibration_manager)/launch/$(var sensor_model)/mapping_based_sensor_kit.launch.xml\">\n<arg name=\"vehicle_id\" value=\"$(var vehicle_id)\"/>\n<arg name=\"rviz\" value=\"$(var rviz)\"/>\n</include>\n</group>\n</launch>\n

    After completing the mapping_based.launch.xml file, we are ready to implement mapping_based_sensor_kit.launch.xml for our own sensor model.

    Optionally, you can modify sensor_kit and vehicle_id as in mapping_based.launch.xml over this xml snippet. (You can change the rviz_profile path after saving the rviz config, as shown in the video included at the end of the page.)

    We will add sensor kit frames for each lidar (except the mapping lidar). For tutorial_vehicle we have one lidar to be paired with the main lidar sensor, so it should look like this:

    Note: The mapping lidar will be used for mapping purposes, but it will not be calibrated. We can consider this lidar as the main sensor for our hardware architecture. Therefore, other lidars will be calibrated with respect to the mapping lidar (main sensor).

    +  <let name=\"lidar_calibration_sensor_kit_frames\" value=\"[\n+  sensor_kit_base_link,\n+  sensor_kit_base_link,\n+  sensor_kit_base_link\n+  ...]\"/>\n

    If you have previously saved an rviz config file for the lidar-lidar calibration process, update the rviz_profile path accordingly:

    - <let name=\"rviz_profile\" value=\"$(find-pkg-share extrinsic_mapping_based_calibrator)/rviz/x1.rviz\"/>\n+ <let name=\"rviz_profile\" value=\"$(find-pkg-share extrinsic_mapping_based_calibrator)/rviz/<YOUR-RVIZ-CONFIG>.rviz\"/>\n
    i.e., if you have one main lidar for mapping and three lidars for calibration
    +  <let name=\"lidar_calibration_sensor_kit_frames\" value=\"[\n+  sensor_kit_base_link,\n+  sensor_kit_base_link,\n+  sensor_kit_base_link]\"/>\n
    i.e., for tutorial_vehicle (one main lidar for mapping, one lidar for calibration)
    +  <let name=\"lidar_calibration_sensor_kit_frames\" value=\"[sensor_kit_base_link]\"/>\n

    We will add lidar_calibration_service_names, calibration_lidar_base_frames and calibration_lidar_frames for calibrator:

    -   <let\n-       name=\"lidar_calibration_service_names\"\n-       value=\"[\n-       /sensor_kit/sensor_kit_base_link/livox_front_left_base_link,\n-       /sensor_kit/sensor_kit_base_link/livox_front_center_base_link,\n-       /sensor_kit/sensor_kit_base_link/livox_front_right_base_link]\"\n-     />\n-   <let name=\"calibration_lidar_base_frames\" value=\"[\n-       livox_front_left_base_link,\n-       livox_front_center_base_link,\n-       livox_front_right_base_link]\"/>\n-   <let name=\"calibration_lidar_frames\" value=\"[\n-       livox_front_left,\n-       livox_front_center,\n-       livox_front_right]\"/>\n+   <let\n+           name=\"lidar_calibration_service_names\"\n+           value=\"[/sensor_kit/sensor_kit_base_link/<YOUR_SENSOR_BASE_LINK>,\n+                   /sensor_kit/sensor_kit_base_link/<YOUR_SENSOR_BASE_LINK\n+                   ...]\"\n+   />\n+\n+   <let name=\"calibration_lidar_base_frames\" value=\"[YOUR_SENSOR_BASE_LINK,\n+                                                     YOUR_SENSOR_BASE_LINK\n+                                                     ...]\"/>\n+   <let name=\"calibration_lidar_frames\" value=\"[YOUR_SENSOR_LINK,\n+                                                YOUR_SENSOR_LINK\n+                                                ...]\"/>\n
    i.e., for tutorial_vehicle it should be like this snippet
    +   <let\n+           name=\"lidar_calibration_service_names\"\n+           value=\"[/sensor_kit/sensor_kit_base_link/rs_bpearl_front_base_link]\"\n+   />\n\n+   <let name=\"calibration_lidar_base_frames\" value=\"[rs_bpearl_front_base_link]\"/>\n+   <let name=\"calibration_lidar_frames\" value=\"[rs_bpearl_front]\"/>\n

    After that, we will add the sensor topics and sensor frames. To do that, we will continue filling in mapping_based_sensor_kit.launch.xml as follows (we recommend using the /sensing/lidar/top/pointcloud topic as the mapping pointcloud, because the vehicle cloud is cropped from this topic by the pointcloud preprocessing):

    -     <let name=\"mapping_lidar_frame\" value=\"velodyne_top\"/>\n-     <let name=\"mapping_pointcloud\" value=\"/sensing/lidar/top/pointcloud\"/>\n+     <let name=\"mapping_lidar_frame\" value=\"<MAPPING_LIDAR_SENSOR_LINK>\"/>\n+     <let name=\"mapping_pointcloud\" value=\"<MAPPING_LIDAR_POINTCLOUD_TOPIC_NAME>\"/>\n\n\n-     <let name=\"calibration_pointcloud_topics\" value=\"[\n-       /sensing/lidar/front_left/livox/lidar,\n-       /sensing/lidar/front_center/livox/lidar,\n-       /sensing/lidar/front_right/livox/lidar]\"/>\n+     <let name=\"calibration_pointcloud_topics\" value=\"[\n+       <YOUR_LIDAR_TOPIC_FOR_CALIBRATION>,\n+       <YOUR_LIDAR_TOPIC_FOR_CALIBRATION>,\n+       ...]\"/>\n
    For tutorial_vehicle it should be like this snippet.
      <let name=\"calibration_lidar_base_frames\" value=\"[rs_bpearl_front_base_link]\"/>\n<let name=\"calibration_lidar_frames\" value=\"[rs_bpearl_front]\"/>\n\n<let name=\"mapping_lidar_frame\" value=\"rs_helios_top\"/>\n<let name=\"mapping_pointcloud\" value=\"/sensing/lidar/top/pointcloud_raw\"/>\n<let name=\"detected_objects\" value=\"/perception/object_recognition/detection/objects\"/>\n\n<let name=\"calibration_pointcloud_topics\" value=\"[\n/sensing/lidar/right/pointcloud_raw]\"/>\n

    The mapping_based_sensor_kit.launch.xml launch file for tutorial_vehicle should look like this:

    i.e. mapping_based_sensor_kit.launch.xml for tutorial_vehicle
    <?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<launch>\n<arg name=\"vehicle_id\" default=\"tutorial_vehicle\"/>\n<let name=\"sensor_model\" value=\"tutorial_vehicle_sensor_kit\"/>\n\n<arg name=\"rviz\"/>\n<let name=\"rviz_profile\" value=\"$(find-pkg-share extrinsic_mapping_based_calibrator)/rviz/x1.rviz\"/>\n\n<arg name=\"src_yaml\" default=\"$(find-pkg-share individual_params)/config/$(var vehicle_id)/$(var sensor_model)/sensor_kit_calibration.yaml\"/>\n<arg name=\"dst_yaml\" default=\"$(env HOME)/sensor_kit_calibration.yaml\"/>\n\n<let name=\"camera_calibration_service_names\" value=\"['']\"/>\n\n<let name=\"camera_calibration_sensor_kit_frames\" value=\"['']\"/>\n<let name=\"calibration_camera_frames\" value=\"['']\"/>\n<let name=\"calibration_camera_optical_link_frames\" value=\"['']\"/>\n<let name=\"calibration_camera_info_topics\" value=\"['']\"/>\n\n<let name=\"calibration_image_topics\" value=\"['']\"/>\n\n<let name=\"lidar_calibration_sensor_kit_frames\" value=\"[sensor_kit_base_link]\"/>\n\n<let\nname=\"lidar_calibration_service_names\"\nvalue=\"[/sensor_kit/sensor_kit_base_link/rs_bpearl_front_base_link]\"\n/>\n\n<let name=\"calibration_lidar_base_frames\" value=\"[rs_bpearl_front_base_link]\"/>\n<let name=\"calibration_lidar_frames\" value=\"[rs_bpearl_front]\"/>\n\n<let name=\"mapping_lidar_frame\" value=\"rs_helios_top\"/>\n<let name=\"mapping_pointcloud\" value=\"/sensing/lidar/top/pointcloud_raw\"/>\n<let name=\"detected_objects\" value=\"/perception/object_recognition/detection/objects\"/>\n\n<let name=\"calibration_pointcloud_topics\" value=\"[\n/sensing/lidar/right/pointcloud_raw]\"/>\n\n<group>\n<!-- extrinsic_calibration_client -->\n<node pkg=\"extrinsic_calibration_client\" exec=\"extrinsic_calibration_client\" name=\"extrinsic_calibration_client\" output=\"screen\">\n<param name=\"src_path\" value=\"$(var src_yaml)\"/>\n<param name=\"dst_path\" value=\"$(var dst_yaml)\"/>\n</node>\n\n<!-- extrinsic_calibration_manager -->\n<node pkg=\"extrinsic_calibration_manager\" exec=\"extrinsic_calibration_manager\" name=\"extrinsic_calibration_manager\" output=\"screen\">\n<param name=\"parent_frame\" value=\"sensor_kit_base_link\"/>\n<param name=\"child_frames\" value=\"[rs_bpearl_front_base_link]\"/>\n</node>\n</group>\n\n<!-- mapping based calibrator -->\n<include file=\"$(find-pkg-share extrinsic_mapping_based_calibrator)/launch/calibrator.launch.xml\">\n<arg name=\"ns\" value=\"\"/>\n\n<arg name=\"camera_calibration_service_names\" value=\"$(var camera_calibration_service_names)\"/>\n<arg name=\"lidar_calibration_service_names\" value=\"$(var lidar_calibration_service_names)\"/>\n<arg name=\"camera_calibration_sensor_kit_frames\" value=\"$(var camera_calibration_sensor_kit_frames)\"/>\n<arg name=\"lidar_calibration_sensor_kit_frames\" value=\"$(var lidar_calibration_sensor_kit_frames)\"/>\n<arg name=\"calibration_camera_frames\" value=\"$(var calibration_camera_frames)\"/>\n<arg name=\"calibration_camera_optical_link_frames\" value=\"$(var calibration_camera_optical_link_frames)\"/>\n<arg name=\"calibration_lidar_base_frames\" value=\"$(var calibration_lidar_base_frames)\"/>\n<arg name=\"calibration_lidar_frames\" value=\"$(var calibration_lidar_frames)\"/>\n<arg name=\"mapping_lidar_frame\" value=\"$(var mapping_lidar_frame)\"/>\n\n<arg name=\"mapping_pointcloud\" value=\"$(var mapping_pointcloud)\"/>\n<arg name=\"detected_objects\" value=\"$(var detected_objects)\"/>\n\n<arg name=\"calibration_camera_info_topics\" value=\"$(var 
calibration_camera_info_topics)\"/>\n<arg name=\"calibration_image_topics\" value=\"$(var calibration_image_topics)\"/>\n<arg name=\"calibration_pointcloud_topics\" value=\"$(var calibration_pointcloud_topics)\"/>\n\n<arg name=\"mapping_max_range\" value=\"150.0\"/>\n<arg name=\"local_map_num_keyframes\" value=\"30\"/>\n<arg name=\"dense_pointcloud_num_keyframes\" value=\"20\"/>\n<arg name=\"ndt_resolution\" value=\"0.5\"/>\n<arg name=\"ndt_max_iterations\" value=\"100\"/>\n<arg name=\"ndt_epsilon\" value=\"0.005\"/>\n<arg name=\"lost_frame_max_acceleration\" value=\"15.0\"/>\n<arg name=\"lidar_calibration_max_frames\" value=\"10\"/>\n<arg name=\"calibration_eval_max_corr_distance\" value=\"0.2\"/>\n<arg name=\"solver_iterations\" value=\"100\"/>\n<arg name=\"calibration_skip_keyframes\" value=\"15\"/>\n</include>\n\n<node pkg=\"rviz2\" exec=\"rviz2\" name=\"rviz2\" output=\"screen\" args=\"-d $(var rviz_profile)\" if=\"$(var rviz)\"/>\n</launch>\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/lidar-lidar-calibration/#lidar-lidar-calibration-process-with-interactive-mapping-based-calibrator","title":"Lidar-Lidar calibration process with interactive mapping-based calibrator","text":"

    After completing the mapping_based.launch.xml and mapping_based_sensor_kit.launch.xml launch files for our own sensor kit, we are ready to calibrate our lidars. First of all, we need to build the extrinsic_calibration_manager package:

    colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release --packages-select extrinsic_calibration_manager\n

    Now we are ready to launch and use the mapping-based lidar-lidar calibrator:

    ros2 launch extrinsic_calibration_manager calibration.launch.xml mode:=mapping_based sensor_model:=<OWN-SENSOR-KIT> vehicle_model:=<OWN-VEHICLE-MODEL> vehicle_id:=<VEHICLE-ID>\n

    For tutorial vehicle:

    ros2 launch extrinsic_calibration_manager calibration.launch.xml mode:=mapping_based sensor_model:=tutorial_vehicle_sensor_kit vehicle_model:=tutorial_vehicle vehicle_id:=tutorial_vehicle\n

    The rviz2 screen will open with several preset configurations. You need to update it with your sensor information topics as shown in the video included at the end of this document. Also, you can save the rviz2 config in the rviz directory so that you can use it later by modifying mapping_based_sensor_kit.launch.xml.

    extrinsic_mapping_based_calibrator/\n   \u2514\u2500 rviz/\n+        \u2514\u2500 tutorial_vehicle_sensor_kit.rviz\n

    Then play the ROS 2 bag file:

    ros2 bag play <rosbag_path> --clock -r 0.2 \\\n--remap /tf:=/null/tf /tf_static:=/null/tf_static # if tf is recorded\n

    The calibration step consists of two phases: mapping and calibration. When the bag starts playing, mapping starts as well, as shown in the rviz2 screenshot below.

    So, red arrow markers indicate poses during mapping, green arrow markers are special poses taken uniformly, and white points indicate the constructed map.

    Mapping halts either upon reaching a predefined data threshold or can be prematurely concluded by invoking this service:

    ros2 service call /NAMESPACE/stop_mapping std_srvs/srv/Empty {}\n

    After the mapping phase of calibration is completed, the calibration process will start. After the calibration is completed, you should see an rviz2 screen like the image below:

    The red points indicate the point cloud placed with the initial calibration results of the previous section. The green points indicate the aligned point cloud (the calibration result). The calibration results will be saved automatically to your dst_yaml ($HOME/sensor_kit_calibration.yaml in this tutorial).

    Here is the video for demonstrating the mapping-based lidar-lidar calibration process on tutorial_vehicle:

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-individual-params/","title":"Creating individual params","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-individual-params/#creating-individual-params","title":"Creating individual params","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-individual-params/#introduction","title":"Introduction","text":"

    The individual_params package is used to define customized sensor calibrations for different vehicles. It lets you apply vehicle-specific sensor calibrations while using the same launch files with the same sensor model.

    Warning

    The \"individual_params\" package contains the calibration results for your sensor kit and overrides the default calibration results found in VEHICLE-ID_sensor_kit_description/config/ directory.

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-individual-params/#placing-your-individual_parameters-repository-inside-autoware","title":"Placing your individual_parameters repository inside Autoware","text":"

Previously in this guide, we forked the autoware_individual_params repository to create a tutorial_vehicle_individual_params repository, which will be used as an example for this section of the guide. Your individual_parameters repository should be placed inside your Autoware folder following the same folder structure as the one shown below:

    sample folder structure for tutorial_vehicle_individual_params
      <YOUR-OWN-AUTOWARE-DIR>/\n  \u2514\u2500 src/\n  \u2514\u2500 param/\n  \u2514\u2500 tutorial_vehicle_individual_params/\n  \u2514\u2500 individual_params/\n  \u2514\u2500 config/\n  \u251c\u2500 default/\n+ \u2514\u2500 tutorial_vehicle/\n+     \u2514\u2500 tutorial_vehicle_sensor_kit_launch/\n+         \u251c\u2500 imu_corrector.param.yaml\n+         \u251c\u2500 sensor_kit_calibration.yaml\n+         \u2514\u2500 sensors_calibration.yaml\n

    After that, we need to build our individual_params package:

    colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release --packages-up-to individual_params\n

Now you are ready to use Autoware with a vehicle_id as an argument. For example, if you have several similar vehicles with different sensor calibration requirements, your autoware_individual_params structure should look like this:

    individual_params/\n\u2514\u2500 config/\n     \u251c\u2500 default/\n     \u2502   \u2514\u2500 <YOUR_SENSOR_KIT>/                  # example1\n     \u2502        \u251c\u2500 imu_corrector.param.yaml\n     \u2502        \u251c\u2500 sensor_kit_calibration.yaml\n     \u2502        \u2514\u2500 sensors_calibration.yaml\n+    \u251c\u2500 VEHICLE_1/\n+    \u2502   \u2514\u2500 <YOUR_SENSOR_KIT>/                  # example2\n+    \u2502        \u251c\u2500 imu_corrector.param.yaml\n+    \u2502        \u251c\u2500 sensor_kit_calibration.yaml\n+    \u2502        \u2514\u2500 sensors_calibration.yaml\n+    \u2514\u2500 VEHICLE_2/\n+         \u2514\u2500 <YOUR_SENSOR_KIT>/                  # example3\n+              \u251c\u2500 imu_corrector.param.yaml\n+              \u251c\u2500 sensor_kit_calibration.yaml\n+              \u2514\u2500 sensors_calibration.yaml\n

Then, you can use Autoware with the vehicle_id argument like this:

    Add a <vehicle_id> as an argument and switch parameters using options at startup.

    # example1 (do not set vehicle_id)\n$ ros2 launch autoware_launch autoware.launch.xml sensor_model:=<YOUR-VEHICLE-NAME>_sensor_kit vehicle_model:=<YOUR-VEHICLE-NAME>_vehicle\n# example2 (set vehicle_id as VEHICLE_1)\n$ ros2 launch autoware_launch autoware.launch.xml sensor_model:=<YOUR-VEHICLE-NAME>_sensor_kit vehicle_model:=<YOUR-VEHICLE-NAME>_vehicle vehicle_id:=VEHICLE_1\n# example3 (set vehicle_id as VEHICLE_2)\n$ ros2 launch autoware_launch autoware.launch.xml sensor_model:=<YOUR-VEHICLE-NAME>_sensor_kit vehicle_model:=<YOUR-VEHICLE-NAME>_vehicle vehicle_id:=VEHICLE_2\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-sensor-model/","title":"Creating a sensor model for Autoware","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-sensor-model/#creating-a-sensor-model-for-autoware","title":"Creating a sensor model for Autoware","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-sensor-model/#introduction","title":"Introduction","text":"

    This page introduces the following packages for the sensor model:

    1. common_sensor_launch
    2. <YOUR-VEHICLE-NAME>_sensor_kit_description
    3. <YOUR-VEHICLE-NAME>_sensor_kit_launch

Previously, we forked our sensor model in the creating Autoware repositories step. For instance, we created tutorial_vehicle_sensor_kit_launch as an implementation example for that step. Please ensure that the _sensor_kit_launch repository is included in Autoware, following the directory structure below:

    <YOUR-OWN-AUTOWARE-DIR>/\n  \u2514\u2500 src/\n       \u2514\u2500 sensor_kit/\n            \u2514\u2500 <YOUR-VEHICLE-NAME>_sensor_kit_launch/\n                 \u251c\u2500 common_sensor_launch/\n                 \u251c\u2500 <YOUR-VEHICLE-NAME>_sensor_kit_description/\n                 \u2514\u2500 <YOUR-VEHICLE-NAME>_sensor_kit_launch/\n

If your forked Autoware meta-repository doesn't include <YOUR-VEHICLE-NAME>_sensor_kit_launch with the correct folder structure as shown above, please add your forked <YOUR-VEHICLE-NAME>_sensor_kit_launch repository to the autoware.repos file (a placeholder entry is sketched below) and run the vcs import src < autoware.repos command in your terminal to import the newly added repositories from the autoware.repos file.
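As a reference, a repository entry along these lines could be appended to autoware.repos; the path, URL and version below are placeholders for your own fork:

# Placeholder entry for your forked sensor kit repository; adjust the path, URL and branch to your fork.
sensor_kit/<YOUR-VEHICLE-NAME>_sensor_kit_launch:
  type: git
  url: https://github.com/<YOUR-GITHUB-ORG>/<YOUR-VEHICLE-NAME>_sensor_kit_launch.git
  version: main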

    Now, we are ready to modify the following sensor model packages for our vehicle. Firstly, we need to rename the description and launch packages:

    <YOUR-VEHICLE-NAME>_sensor_kit_launch/\n  \u251c\u2500 common_sensor_launch/\n- \u251c\u2500 sample_sensor_kit_description/\n+ \u251c\u2500 <YOUR-VEHICLE-NAME>_sensor_kit_description/\n- \u2514\u2500 sample_sensor_kit_launch/\n+ \u2514\u2500 <YOUR-VEHICLE-NAME>_sensor_kit_launch/\n

    After that, we will change our package names in the package.xml file and CMakeLists.txt file of the sample_sensor_kit_description and sample_sensor_kit_launch packages. So, open the package.xml file and CMakeLists.txt file with any text editor or IDE of your preference and perform the following changes:

Change the <name> element in the package.xml file:

    <package format=\"3\">\n- <name>sample_sensor_kit_description</name>\n+ <name><YOUR-VEHICLE-NAME>_sensor_kit_description</name>\n <version>0.1.0</version>\n  <description>The sensor_kit_description package</description>\n  ...\n  ...\n

Change the project() call in the CMakeLists.txt file:

      cmake_minimum_required(VERSION 3.5)\n- project(sample_sensor_kit_description)\n+ project(<YOUR-VEHICLE-NAME>_sensor_kit_description)\n\n find_package(ament_cmake_auto REQUIRED)\n...\n...\n

Remember to apply the name change and the project() update for BOTH the <YOUR-VEHICLE-NAME>_sensor_kit_description and <YOUR-VEHICLE-NAME>_sensor_kit_launch ROS 2 packages. Once finished, we can proceed to build said packages:

    cd <YOUR-AUTOWARE-DIR>\ncolcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release --packages-up-to <YOUR-VEHICLE-NAME>_sensor_kit_description <YOUR-VEHICLE-NAME>_sensor_kit_launch\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-sensor-model/#sensor-description","title":"Sensor description","text":"

The main purpose of this package is to describe the sensor frame IDs, the calibration parameters of all sensors, and the links between them via URDF files.

The folder structure of the sensor_kit_description package is:

    <YOUR-VEHICLE-NAME>_sensor_kit_description/\n   \u251c\u2500 config/\n   \u2502     \u251c\u2500 sensor_kit_calibration.yaml\n   \u2502     \u2514\u2500 sensors_calibration.yaml\n   \u2514\u2500 urdf/\n         \u251c\u2500 sensor_kit.xacro\n         \u2514\u2500 sensors.xacro\n

    Now, we will modify these files according to our sensor design.

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-sensor-model/#sensor_kit_calibrationyaml","title":"sensor_kit_calibration.yaml","text":"

This file defines the mounting positions and orientations of the sensors, with sensor_kit_base_link as the parent frame. You can assume the sensor_kit_base_link frame is at the bottom of your main lidar sensor. Each transform must be given in Euler format as [x, y, z, roll, pitch, yaw]. We will set these values to 0 until the calibration steps.

We will define new frames in this file and connect them in the .xacro files. For naming, we recommend appending "_base_link" to the sensor frame name; for example, if your lidar sensor frame is "velodyne_top", use "velodyne_top_base_link" in the calibration .yaml file.

So, the sample file should look like this:

    sensor_kit_base_link:\nvelodyne_top_base_link:\nx: 0.000000\ny: 0.000000\nz: 0.000000\nroll: 0.000000\npitch: 0.000000\nyaw: 0.000000\ncamera0/camera_link:\nx: 0.000000\ny: 0.000000\nz: 0.000000\nroll: 0.000000\npitch: 0.000000\nyaw: 0.000000\n...\n...\n

This file for tutorial_vehicle was created for one camera, two lidars and one GNSS/INS sensor.

    sensor_kit_calibration.yaml for tutorial_vehicle_sensor_kit_description
    sensor_kit_base_link:\ncamera0/camera_link: # Camera\nx: 0.0\ny: 0.0\nz: 0.0\nroll: 0.0\npitch: 0.0\nyaw: 0.0\nrs_helios_top_base_link: # Lidar\nx: 0.0\ny: 0.0\nz: 0.0\nroll: 0.0\npitch: 0.0\nyaw: 0.0\nrs_bpearl_front_base_link: # Lidar\nx: 0.0\ny: 0.0\nz: 0.0\nroll: 0.0\npitch: 0.0\nyaw: 0.0\nGNSS_INS/gnss_ins_link: # GNSS/INS\nx: 0.0\ny: 0.0\nz: 0.0\nroll: 0.0\npitch: 0.0\nyaw: 0.0\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-sensor-model/#sensors_calibrationyaml","title":"sensors_calibration.yaml","text":"

This file defines the mounting position and orientation of sensor_kit_base_link (child frame) with base_link as the parent frame. In Autoware, base_link is located at the projection of the rear-axle center onto the ground surface. For more information, you can check the vehicle dimensions page. You can use CAD values here, but we will fill the values with 0 for now.

    base_link:\nsensor_kit_base_link:\nx: 0.000000\ny: 0.000000\nz: 0.000000\nroll: 0.000000\npitch: 0.000000\nyaw: 0.000000\n

Now, we are ready to implement the .xacro files. These files link our sensor frames and include the sensor URDF files.

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-sensor-model/#sensor_kitxacro","title":"sensor_kit.xacro","text":"

We will add our sensors to this file and remove the unnecessary xacros. For example, to add a lidar sensor whose driver publishes in the velodyne_top frame, we add the following xacro to our sensor_kit.xacro file. Please add your own sensors to this file and remove any unused sensor xacros.

        <!-- lidar -->\n<xacro:VLS-128 parent=\"sensor_kit_base_link\" name=\"velodyne_top\" topic=\"/points_raw\" hz=\"10\" samples=\"220\" gpu=\"$(arg gpu)\">\n<origin\nxyz=\"${calibration['sensor_kit_base_link']['velodyne_top_base_link']['x']}\n                 ${calibration['sensor_kit_base_link']['velodyne_top_base_link']['y']}\n                 ${calibration['sensor_kit_base_link']['velodyne_top_base_link']['z']}\"\nrpy=\"${calibration['sensor_kit_base_link']['velodyne_top_base_link']['roll']}\n                 ${calibration['sensor_kit_base_link']['velodyne_top_base_link']['pitch']}\n                 ${calibration['sensor_kit_base_link']['velodyne_top_base_link']['yaw']}\"\n/>\n</xacro:VLS-128>\n

Here is the sample xacro file for tutorial_vehicle with one camera, two lidars and one GNSS/INS sensor.

    sensor_kit.xacro for tutorial_vehicle_sensor_kit_description
    <?xml version=\"1.0\"?>\n<robot xmlns:xacro=\"http://ros.org/wiki/xacro\">\n<xacro:macro name=\"sensor_kit_macro\" params=\"parent x y z roll pitch yaw\">\n<xacro:include filename=\"$(find velodyne_description)/urdf/VLP-16.urdf.xacro\"/>\n<xacro:include filename=\"$(find vls_description)/urdf/VLS-128.urdf.xacro\"/>\n<xacro:include filename=\"$(find camera_description)/urdf/monocular_camera.xacro\"/>\n<xacro:include filename=\"$(find imu_description)/urdf/imu.xacro\"/>\n\n<xacro:arg name=\"gpu\" default=\"false\"/>\n<xacro:arg name=\"config_dir\" default=\"$(find tutorial_vehicle_sensor_kit_description)/config\"/>\n\n<xacro:property name=\"sensor_kit_base_link\" default=\"sensor_kit_base_link\"/>\n\n<joint name=\"${sensor_kit_base_link}_joint\" type=\"fixed\">\n<origin rpy=\"${roll} ${pitch} ${yaw}\" xyz=\"${x} ${y} ${z}\"/>\n<parent link=\"${parent}\"/>\n<child link=\"${sensor_kit_base_link}\"/>\n</joint>\n<link name=\"${sensor_kit_base_link}\">\n<origin rpy=\"0 0 0\" xyz=\"0 0 0\"/>\n</link>\n\n<!-- sensor -->\n<xacro:property name=\"calibration\" value=\"${xacro.load_yaml('$(arg config_dir)/sensor_kit_calibration.yaml')}\"/>\n\n<!-- lidar -->\n<xacro:VLS-128 parent=\"sensor_kit_base_link\" name=\"rs_helios_top\" topic=\"/points_raw\" hz=\"10\" samples=\"220\" gpu=\"$(arg gpu)\">\n<origin\nxyz=\"${calibration['sensor_kit_base_link']['rs_helios_top_base_link']['x']}\n             ${calibration['sensor_kit_base_link']['rs_helios_top_base_link']['y']}\n             ${calibration['sensor_kit_base_link']['rs_helios_top_base_link']['z']}\"\nrpy=\"${calibration['sensor_kit_base_link']['rs_helios_top_base_link']['roll']}\n             ${calibration['sensor_kit_base_link']['rs_helios_top_base_link']['pitch']}\n             ${calibration['sensor_kit_base_link']['rs_helios_top_base_link']['yaw']}\"\n/>\n</xacro:VLS-128>\n<xacro:VLP-16 parent=\"sensor_kit_base_link\" name=\"rs_bpearl_front\" topic=\"/points_raw\" hz=\"10\" samples=\"220\" gpu=\"$(arg gpu)\">\n<origin\nxyz=\"${calibration['sensor_kit_base_link']['rs_bpearl_front_base_link']['x']}\n             ${calibration['sensor_kit_base_link']['rs_bpearl_front_base_link']['y']}\n             ${calibration['sensor_kit_base_link']['rs_bpearl_front_base_link']['z']}\"\nrpy=\"${calibration['sensor_kit_base_link']['rs_bpearl_front_base_link']['roll']}\n             ${calibration['sensor_kit_base_link']['rs_bpearl_front_base_link']['pitch']}\n             ${calibration['sensor_kit_base_link']['rs_bpearl_front_base_link']['yaw']}\"\n/>\n</xacro:VLP-16>\n\n<!-- camera -->\n<xacro:monocular_camera_macro\nname=\"camera0/camera\"\nparent=\"sensor_kit_base_link\"\nnamespace=\"\"\nx=\"${calibration['sensor_kit_base_link']['camera0/camera_link']['x']}\"\ny=\"${calibration['sensor_kit_base_link']['camera0/camera_link']['y']}\"\nz=\"${calibration['sensor_kit_base_link']['camera0/camera_link']['z']}\"\nroll=\"${calibration['sensor_kit_base_link']['camera0/camera_link']['roll']}\"\npitch=\"${calibration['sensor_kit_base_link']['camera0/camera_link']['pitch']}\"\nyaw=\"${calibration['sensor_kit_base_link']['camera0/camera_link']['yaw']}\"\nfps=\"30\"\nwidth=\"800\"\nheight=\"400\"\nfov=\"1.3\"\n/>\n\n<!-- gnss 
-->\n<xacro:imu_macro\nname=\"gnss\"\nparent=\"sensor_kit_base_link\"\nnamespace=\"\"\nx=\"${calibration['sensor_kit_base_link']['gnss_link']['x']}\"\ny=\"${calibration['sensor_kit_base_link']['gnss_link']['y']}\"\nz=\"${calibration['sensor_kit_base_link']['gnss_link']['z']}\"\nroll=\"${calibration['sensor_kit_base_link']['gnss_link']['roll']}\"\npitch=\"${calibration['sensor_kit_base_link']['gnss_link']['pitch']}\"\nyaw=\"${calibration['sensor_kit_base_link']['gnss_link']['yaw']}\"\nfps=\"100\"\n/>\n\n</xacro:macro>\n</robot>\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-sensor-model/#sensorsxacro","title":"sensors.xacro","text":"

This file links our sensor kit's main frame (sensor_kit_base_link) to base_link. Also, if you have sensors that are calibrated directly with respect to base_link, you can add them here.

Here is the sensors.xacro file from the sample_sensor_kit_description package (the velodyne_rear transformation is defined directly with respect to base_link):

    <?xml version=\"1.0\"?>\n<robot name=\"vehicle\" xmlns:xacro=\"http://ros.org/wiki/xacro\">\n<xacro:arg name=\"config_dir\" default=\"$(find sample_sensor_kit_description)/config\"/>\n<xacro:property name=\"calibration\" value=\"${xacro.load_yaml('$(arg config_dir)/sensors_calibration.yaml')}\"/>\n\n<!-- sensor kit -->\n<xacro:include filename=\"sensor_kit.xacro\"/>\n<xacro:sensor_kit_macro\nparent=\"base_link\"\nx=\"${calibration['base_link']['sensor_kit_base_link']['x']}\"\ny=\"${calibration['base_link']['sensor_kit_base_link']['y']}\"\nz=\"${calibration['base_link']['sensor_kit_base_link']['z']}\"\nroll=\"${calibration['base_link']['sensor_kit_base_link']['roll']}\"\npitch=\"${calibration['base_link']['sensor_kit_base_link']['pitch']}\"\nyaw=\"${calibration['base_link']['sensor_kit_base_link']['yaw']}\"\n/>\n\n<!-- embedded sensors -->\n<xacro:include filename=\"$(find velodyne_description)/urdf/VLP-16.urdf.xacro\"/>\n<xacro:VLP-16 parent=\"base_link\" name=\"velodyne_rear\" topic=\"velodyne_rear/velodyne_points\" hz=\"10\" samples=\"220\" gpu=\"false\">\n<origin\nxyz=\"${calibration['base_link']['velodyne_rear_base_link']['x']}\n           ${calibration['base_link']['velodyne_rear_base_link']['y']}\n           ${calibration['base_link']['velodyne_rear_base_link']['z']}\"\nrpy=\"${calibration['base_link']['velodyne_rear_base_link']['roll']}\n           ${calibration['base_link']['velodyne_rear_base_link']['pitch']}\n           ${calibration['base_link']['velodyne_rear_base_link']['yaw']}\"\n/>\n</xacro:VLP-16>\n</robot>\n

On our tutorial vehicle, there is no sensor transformation defined directly with respect to base_link, so our sensors.xacro file includes only the base_link and sensor_kit_base_link links.

    sensors.xacro for tutorial_vehicle_sensor_kit_description
    <?xml version=\"1.0\"?>\n<robot name=\"vehicle\" xmlns:xacro=\"http://ros.org/wiki/xacro\">\n<xacro:arg name=\"config_dir\" default=\"$(find tutorial_vehicle_sensor_kit_description)/config\"/>\n<xacro:property name=\"calibration\" value=\"${xacro.load_yaml('$(arg config_dir)/sensors_calibration.yaml')}\"/>\n\n<!-- sensor kit -->\n<xacro:include filename=\"sensor_kit.xacro\"/>\n<xacro:sensor_kit_macro\nparent=\"base_link\"\nx=\"${calibration['base_link']['sensor_kit_base_link']['x']}\"\ny=\"${calibration['base_link']['sensor_kit_base_link']['y']}\"\nz=\"${calibration['base_link']['sensor_kit_base_link']['z']}\"\nroll=\"${calibration['base_link']['sensor_kit_base_link']['roll']}\"\npitch=\"${calibration['base_link']['sensor_kit_base_link']['pitch']}\"\nyaw=\"${calibration['base_link']['sensor_kit_base_link']['yaw']}\"\n/>\n</robot>\n

After completing the sensor_kit_calibration.yaml, sensors_calibration.yaml, sensor_kit.xacro and sensors.xacro files, our sensor description package is finished. We will continue with modifying the <YOUR-VEHICLE-NAME>_sensor_kit_launch package.

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-sensor-model/#sensor-launch","title":"Sensor launch","text":"

In this package (<YOUR-VEHICLE-NAME>_sensor_kit_launch), we will launch our sensors and their pipelines. We will also use the common_sensor_launch package for launching the lidar sensing pipeline. The image below demonstrates the sensor pipeline we will construct in this section.

    Sample Launch workflow for sensing design.

The <YOUR-VEHICLE-NAME>_sensor_kit_launch package folder structure looks like this:

    <YOUR-VEHICLE-NAME>_sensor_kit_launch/\n      \u251c\u2500 config/\n      \u251c\u2500 data/\n      \u2514\u2500 launch/\n+           \u251c\u2500 camera.launch.xml\n+           \u251c\u2500 gnss.launch.xml\n+           \u251c\u2500 imu.launch.xml\n+           \u251c\u2500 lidar.launch.xml\n+           \u251c\u2500 pointcloud_preprocessor.launch.py\n+           \u2514\u2500 sensing.launch.xml\n

We will modify the launch files located in the launch folder to launch and configure our sensors. The main launch file is sensing.launch.xml, which launches the other sensing launch files. The current Autoware sensing launch file design for the sensor_kit_launch package is shown in the diagram below, and a rough sketch of the file itself follows the diagram.

Launch file flow of the sensing.launch.xml launch file.
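As a rough sketch, assuming the default file layout (your arguments and included files may differ), sensing.launch.xml mainly includes the individual sensor launch files:

<!-- Hedged sketch of sensing.launch.xml: it mainly includes the per-sensor launch files of your kit. -->
<launch>
  <arg name="launch_driver" default="true"/>

  <include file="$(find-pkg-share <YOUR-VEHICLE-NAME>_sensor_kit_launch)/launch/lidar.launch.xml">
    <arg name="launch_driver" value="$(var launch_driver)"/>
  </include>
  <include file="$(find-pkg-share <YOUR-VEHICLE-NAME>_sensor_kit_launch)/launch/camera.launch.xml"/>
  <include file="$(find-pkg-share <YOUR-VEHICLE-NAME>_sensor_kit_launch)/launch/gnss.launch.xml">
    <arg name="launch_driver" value="$(var launch_driver)"/>
  </include>
  <include file="$(find-pkg-share <YOUR-VEHICLE-NAME>_sensor_kit_launch)/launch/imu.launch.xml">
    <arg name="launch_driver" value="$(var launch_driver)"/>
  </include>
</launch>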

The sensing.launch.xml file also launches the vehicle_velocity_converter package, which converts the autoware_auto_vehicle_msgs::msg::VelocityReport message to geometry_msgs::msg::TwistWithCovarianceStamped for the gyro_odometer node. So, make sure your vehicle_interface publishes the /vehicle/status/velocity_status topic with the autoware_auto_vehicle_msgs::msg::VelocityReport type, or update input_vehicle_velocity_topic in sensing.launch.xml.

        ...\n    <include file=\"$(find-pkg-share vehicle_velocity_converter)/launch/vehicle_velocity_converter.launch.xml\">\n-     <arg name=\"input_vehicle_velocity_topic\" value=\"/vehicle/status/velocity_status\"/>\n+     <arg name=\"input_vehicle_velocity_topic\" value=\"<YOUR-VELOCITY-STATUS-TOPIC>\"/>\n     <arg name=\"output_twist_with_covariance\" value=\"/sensing/vehicle_velocity_converter/twist_with_covariance\"/>\n    </include>\n    ...\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-sensor-model/#lidar-launching","title":"Lidar Launching","text":"

Let's start by modifying the lidar.launch.xml file to launch our lidar sensor driver with Autoware. Please check the lidar sensors supported by the nebula driver in its GitHub repository.

If you are using a Velodyne lidar sensor, you can use the sample_sensor_kit_launch template, but you need to update sensor_ip, data_port, sensor_frame and make other necessary changes (max_range, scan_phase, etc.).

        <group>\n-     <push-ros-namespace namespace=\"left\"/>\n+     <push-ros-namespace namespace=\"<YOUR-SENSOR-NAMESPACE>\"/>\n     <include file=\"$(find-pkg-share common_sensor_launch)/launch/velodyne_VLP16.launch.xml\">\n        <arg name=\"max_range\" value=\"5.0\"/>\n-       <arg name=\"sensor_frame\" value=\"velodyne_left\"/>\n+       <arg name=\"sensor_frame\" value=\"<YOUR-SENSOR-FRAME>\"/>\n-       <arg name=\"sensor_ip\" value=\"192.168.1.202\"/>\n+       <arg name=\"sensor_ip\" value=\"<YOUR-SENSOR-IP>\"/>\n       <arg name=\"host_ip\" value=\"$(var host_ip)\"/>\n-       <arg name=\"data_port\" value=\"2369\"/>\n+       <arg name=\"data_port\" value=<YOUR-DATA-PORT>/>\n       <arg name=\"scan_phase\" value=\"180.0\"/>\n        <arg name=\"cloud_min_angle\" value=\"300\"/>\n        <arg name=\"cloud_max_angle\" value=\"60\"/>\n        <arg name=\"launch_driver\" value=\"$(var launch_driver)\"/>\n        <arg name=\"vehicle_mirror_param_file\" value=\"$(var vehicle_mirror_param_file)\"/>\n        <arg name=\"use_pointcloud_container\" value=\"$(var use_pointcloud_container)\"/>\n        <arg name=\"container_name\" value=\"$(var pointcloud_container_name)\"/>\n      </include>\n    </group>\n

Please add similar launch groups according to your sensor architecture. For example, we use Robosense lidars on our tutorial_vehicle, so the launch group for a Robosense lidar (i.e., the Bpearl) should have this structure:

        <group>\n<push-ros-namespace namespace=\"<YOUR-SENSOR-NAMESPACE>\"/>\n<include file=\"$(find-pkg-share common_sensor_launch)/launch/robosense_Bpearl.launch.xml\">\n<arg name=\"max_range\" value=\"30.0\"/>\n<arg name=\"sensor_frame\" value=\"<YOUR-ROBOSENSE-SENSOR-FRAME>\"/>\n<arg name=\"sensor_ip\" value=\"<YOUR-ROBOSENSE-SENSOR-IP>\"/>\n<arg name=\"host_ip\" value=\"$(var host_ip)\"/>\n<arg name=\"data_port\" value=\"<YOUR-ROBOSENSE-SENSOR-DATA-PORT>\"/>\n<arg name=\"gnss_port\" value=\"<YOUR-ROBOSENSE-SENSOR-GNSS-PORT>\"/>\n<arg name=\"scan_phase\" value=\"0.0\"/>\n<arg name=\"launch_driver\" value=\"$(var launch_driver)\"/>\n<arg name=\"vehicle_mirror_param_file\" value=\"$(var vehicle_mirror_param_file)\"/>\n<arg name=\"container_name\" value=\"pointcloud_container\"/>\n</include>\n</group>\n

If you are using a Hesai lidar (e.g. PandarQT64; please check the nebula driver page for supported sensors), you can add a group with this structure to lidar.launch.xml:

        <group>\n<push-ros-namespace namespace=\"<YOUR-SENSOR-NAMESPACE>\"/>\n<include file=\"$(find-pkg-share common_sensor_launch)/launch/hesai_PandarQT64.launch.xml\">\n<arg name=\"max_range\" value=\"100\"/>\n<arg name=\"sensor_frame\" value=\"<YOUR-HESAI-SENSOR-FRAME>\"/>\n<arg name=\"sensor_ip\" value=\"<YOUR-HESAI-SENSOR-IP>\"/>\n<arg name=\"host_ip\" value=\"$(var host_ip)\"/>\n<arg name=\"data_port\" value=\"<YOUR-HESAI-SENSOR-DATA-PORT>\"/>\n<arg name=\"scan_phase\" value=\"0.0\"/>\n<arg name=\"cloud_min_angle\" value=\"0\"/>\n<arg name=\"cloud_max_angle\" value=\"360\"/>\n<arg name=\"launch_driver\" value=\"$(var launch_driver)\"/>\n<arg name=\"vehicle_mirror_param_file\" value=\"$(var vehicle_mirror_param_file)\"/>\n<arg name=\"use_pointcloud_container\" value=\"$(var use_pointcloud_container)\"/>\n<arg name=\"container_name\" value=\"$(var pointcloud_container_name)\"/>\n</include>\n</group>\n

You can create a .launch.xml file for your sensor in common_sensor_launch; please check hesai_PandarQT64.launch.xml as an example.

The nebula_node_container.py file creates the lidar pipeline for Autoware; the point cloud preprocessing pipeline is constructed for each lidar. Please check the pointcloud_preprocessor package for information on the filters as well.

For example, if you want to change your outlier_filter method, you can modify the pipeline components like this:

        nodes.append(\n        ComposableNode(\n            package=\"autoware_pointcloud_preprocessor\",\n-           plugin=\"autoware::pointcloud_preprocessor::RingOutlierFilterComponent\",\n-           name=\"ring_outlier_filter\",\n+           plugin=\"autoware::pointcloud_preprocessor::DualReturnOutlierFilterComponent\",\n+           name=\"dual_return_outlier_filter\",\n           remappings=[\n                (\"input\", \"rectified/pointcloud_ex\"),\n                (\"output\", \"pointcloud\"),\n            ],\n            extra_arguments=[{\"use_intra_process_comms\": LaunchConfiguration(\"use_intra_process\")}],\n        )\n    )\n

    We will use the default pointcloud_preprocessor pipeline for our tutorial_vehicle, thus we will not modify nebula_node_container.py.

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-sensor-model/#camera-launching","title":"Camera Launching","text":"

In this section, we will launch the camera driver and 2D detection pipeline for Autoware on tutorial_vehicle. The reason we launch them together is that tutorial_vehicle uses a single computer. If you are using two or more computers for Autoware, you can launch the camera and 2D detection pipeline separately. For example, you can clone your camera driver into the src/sensor_component/external folder (please don't forget to add this driver to the autoware.repos file):

    <YOUR-AUTOWARE-DIR>\n   \u2514\u2500 src/\n         \u2514\u2500 sensor_component/\n            \u2514\u2500 external/\n                \u2514\u2500 YOUR-CAMERA-DRIVER\n

After that, you can just add your camera driver to camera.launch.xml:

        ...\n+   <include file=\"$(find-pkg-share <YOUR-SENSOR-DRIVER>)/launch/YOUR-CAMERA-LAUNCH-FILE\">\n   ...\n

Then, you can launch the tensorrt_yolo node by adding yolo.launch.xml to your design like this (it is included in the tier4_perception_launch package in autoware.universe); the image_number argument defines the number of cameras you use:

      <include file=\"$(find-pkg-share tensorrt_yolo)/launch/yolo.launch.xml\">\n<arg name=\"image_raw0\" value=\"$(var image_raw0)\"/>\n<arg name=\"image_raw1\" value=\"$(var image_raw1)\"/>\n<arg name=\"image_raw2\" value=\"$(var image_raw2)\"/>\n<arg name=\"image_raw3\" value=\"$(var image_raw3)\"/>\n<arg name=\"image_raw4\" value=\"$(var image_raw4)\"/>\n<arg name=\"image_raw5\" value=\"$(var image_raw5)\"/>\n<arg name=\"image_raw6\" value=\"$(var image_raw6)\"/>\n<arg name=\"image_raw7\" value=\"$(var image_raw7)\"/>\n<arg name=\"image_number\" value=\"$(var image_number)\"/>\n</include>\n

Since we are using the same computer for the 2D detection pipeline and all Autoware nodes, we will design the camera and 2D detection pipeline using composable nodes and a container structure, which reduces network interface usage. First of all, let's start with our camera sensor: the Lucid Vision TRIO54S. We will use this sensor with the lucid_vision_driver from the autowarefoundation organization. We can clone this driver into the src/sensor_component/external folder as well. After cloning and building the camera driver, we will create camera_node_container.launch.py for launching the camera and tensorrt_yolo node in the same container.

    camera_node_container.launch.py launch file for tutorial_vehicle

```py
import launch
from launch.actions import DeclareLaunchArgument
from launch.actions import SetLaunchConfiguration
from launch.conditions import IfCondition
from launch.conditions import UnlessCondition
from launch.substitutions.launch_configuration import LaunchConfiguration
from launch_ros.actions import ComposableNodeContainer
from launch_ros.descriptions import ComposableNode
from launch_ros.substitutions import FindPackageShare
from launch.actions import OpaqueFunction
import yaml


def launch_setup(context, *args, **kwargs):
    output_topic = LaunchConfiguration("output_topic").perform(context)

    image_name = LaunchConfiguration("input_image").perform(context)
    camera_container_name = LaunchConfiguration("camera_container_name").perform(context)
    camera_namespace = "/lucid_vision/" + image_name

    # tensorrt params
    gpu_id = int(LaunchConfiguration("gpu_id").perform(context))
    mode = LaunchConfiguration("mode").perform(context)
    calib_image_directory = FindPackageShare("tensorrt_yolo").perform(context) + "/calib_image/"
    tensorrt_config_path = (
        FindPackageShare("tensorrt_yolo").perform(context)
        + "/config/"
        + LaunchConfiguration("yolo_type").perform(context)
        + ".param.yaml"
    )
    with open(tensorrt_config_path, "r") as f:
        tensorrt_yaml_param = yaml.safe_load(f)["/**"]["ros__parameters"]

    camera_param_path = (
        FindPackageShare("lucid_vision_driver").perform(context) + "/param/" + image_name + ".param.yaml"
    )
    with open(camera_param_path, "r") as f:
        camera_yaml_param = yaml.safe_load(f)["/**"]["ros__parameters"]

    container = ComposableNodeContainer(
        name=camera_container_name,
        namespace="/perception/object_detection",
        package="rclcpp_components",
        executable=LaunchConfiguration("container_executable"),
        output="screen",
        composable_node_descriptions=[
            ComposableNode(
                package="lucid_vision_driver",
                plugin="ArenaCameraNode",
                name="arena_camera_node",
                parameters=[
                    {
                        "camera_name": camera_yaml_param["camera_name"],
                        "frame_id": camera_yaml_param["frame_id"],
                        "pixel_format": camera_yaml_param["pixel_format"],
                        "serial_no": camera_yaml_param["serial_no"],
                        "camera_info_url": camera_yaml_param["camera_info_url"],
                        "fps": camera_yaml_param["fps"],
                        "horizontal_binning": camera_yaml_param["horizontal_binning"],
                        "vertical_binning": camera_yaml_param["vertical_binning"],
                        "use_default_device_settings": camera_yaml_param["use_default_device_settings"],
                        "exposure_auto": camera_yaml_param["exposure_auto"],
                        "exposure_target": camera_yaml_param["exposure_target"],
                        "gain_auto": camera_yaml_param["gain_auto"],
                        "gain_target": camera_yaml_param["gain_target"],
                        "gamma_target": camera_yaml_param["gamma_target"],
                        "enable_compressing": camera_yaml_param["enable_compressing"],
                        "enable_rectifying": camera_yaml_param["enable_rectifying"],
                    }
                ],
                remappings=[],
                extra_arguments=[
                    {"use_intra_process_comms": LaunchConfiguration("use_intra_process")}
                ],
            ),
            ComposableNode(
                namespace="/perception/object_recognition/detection",
                package="tensorrt_yolo",
                plugin="object_recognition::TensorrtYoloNodelet",
                name="tensorrt_yolo",
                parameters=[
                    {
                        "mode": mode,
                        "gpu_id": gpu_id,
                        "onnx_file": FindPackageShare("tensorrt_yolo").perform(context) + "/data/" + LaunchConfiguration("yolo_type").perform(context) + ".onnx",
                        "label_file": FindPackageShare("tensorrt_yolo").perform(context) + "/data/" + LaunchConfiguration("label_file").perform(context),
                        "engine_file": FindPackageShare("tensorrt_yolo").perform(context) + "/data/" + LaunchConfiguration("yolo_type").perform(context) + ".engine",
                        "calib_image_directory": calib_image_directory,
                        "calib_cache_file": FindPackageShare("tensorrt_yolo").perform(context) + "/data/" + LaunchConfiguration("yolo_type").perform(context) + ".cache",
                        "num_anchors": tensorrt_yaml_param["num_anchors"],
                        "anchors": tensorrt_yaml_param["anchors"],
                        "scale_x_y": tensorrt_yaml_param["scale_x_y"],
                        "score_threshold": tensorrt_yaml_param["score_threshold"],
                        "iou_thresh": tensorrt_yaml_param["iou_thresh"],
                        "detections_per_im": tensorrt_yaml_param["detections_per_im"],
                        "use_darknet_layer": tensorrt_yaml_param["use_darknet_layer"],
                        "ignore_thresh": tensorrt_yaml_param["ignore_thresh"],
                    }
                ],
                remappings=[
                    ("in/image", camera_namespace + "/image_rect"),
                    ("out/objects", output_topic),
                    ("out/image", output_topic + "/debug/image"),
                ],
                extra_arguments=[
                    {"use_intra_process_comms": LaunchConfiguration("use_intra_process")}
                ],
            ),
        ],
    )
    return [container]


def generate_launch_description():
    launch_arguments = []

    def add_launch_arg(name: str, default_value=None, description=None):
        # a default_value of None is equivalent to not passing that kwarg at all
        launch_arguments.append(
            DeclareLaunchArgument(name, default_value=default_value, description=description)
        )

    add_launch_arg("mode", "")
    add_launch_arg("input_image", "", description="input camera topic")
    add_launch_arg("camera_container_name", "")
    add_launch_arg("yolo_type", "", description="yolo model type")
    add_launch_arg("label_file", "", description="tensorrt node label file")
    add_launch_arg("gpu_id", "", description="gpu setting")
    add_launch_arg("use_intra_process", "", "use intra process")
    add_launch_arg("use_multithread", "", "use multithread")

    set_container_executable = SetLaunchConfiguration(
        "container_executable",
        "component_container",
        condition=UnlessCondition(LaunchConfiguration("use_multithread")),
    )

    set_container_mt_executable = SetLaunchConfiguration(
        "container_executable",
        "component_container_mt",
        condition=IfCondition(LaunchConfiguration("use_multithread")),
    )

    return launch.LaunchDescription(
        launch_arguments
        + [set_container_executable, set_container_mt_executable]
        + [OpaqueFunction(function=launch_setup)]
    )
```

The important points for creating camera_node_container.launch.py, if you decide to use a container for the 2D detection pipeline, are:

• Please be careful with the design; if you are using multiple cameras, the design must be adaptable to them.
• The tensorrt_yolo node expects a rectified image as input, so if your sensor driver doesn't support image rectification, you can use the image_proc package.
  • You can add something like this to your pipeline to get a rectified image:
            ...\n        ComposableNode(\n        namespace=camera_ns,\n        package='image_proc',\n        plugin='image_proc::RectifyNode',\n        name='rectify_camera_image_node',\n        # Remap subscribers and publishers\n        remappings=[\n        ('image', camera_ns+\"/image\"),\n        ('camera_info', input_camera_info),\n        ('image_rect', 'image_rect')\n        ],\n        extra_arguments=[\n        {\"use_intra_process_comms\": LaunchConfiguration(\"use_intra_process\")}\n        ],\n        ),\n        ...\n
• Since lucid_vision_driver supports image rectification, there is no need to add image_proc for tutorial_vehicle.
• Please be careful with namespaces; for example, we will use /perception/object_detection as the tensorrt_yolo node namespace, which will be explained in the Autoware usage section. For more information, please check the image_projection_based_fusion package.

After adding camera_node_container.launch.py to our forked common_sensor_launch package, we need to build the package:

    colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release --packages-up-to common_sensor_launch\n

Next, we will add camera_node_container.launch.py to camera.launch.xml; we must define the necessary tensorrt_yolo parameters like this:

    +  <!--    common parameters -->\n+  <arg name=\"image_0\" default=\"camera_0\" description=\"image raw topic name\"/>\n+  <arg name=\"image_1\" default=\"camera_1\" description=\"image raw topic name\"/>\n  ...\n\n+  <!--    tensorrt params -->\n+  <arg name=\"mode\" default=\"FP32\"/>\n+  <arg name=\"yolo_type\" default=\"yolov3\" description=\"choose image raw number(0-7)\"/>\n+  <arg name=\"label_file\" default=\"coco.names\" description=\"yolo label file\"/>\n+  <arg name=\"gpu_id\" default=\"0\" description=\"choose your gpu id for inference\"/>\n+  <arg name=\"use_intra_process\" default=\"true\"/>\n+  <arg name=\"use_multithread\" default=\"true\"/>\n

Then, launch the camera nodes with these arguments; if you have two or more cameras, you can include them like this:

    +  <group>\n+    <push-ros-namespace namespace=\"camera\"/>\n+    <include file=\"$(find-pkg-share common_sensor_launch)/launch/camera_node_container.launch.py\">\n+      <arg name=\"mode\" value=\"$(var mode)\"/>\n+      <arg name=\"input_image\" value=\"$(var image_0)\"/>\n+      <arg name=\"camera_container_name\" value=\"front_camera_container\"/>\n+      <arg name=\"yolo_type\" value=\"$(var yolo_type)\"/>\n+      <arg name=\"label_file\" value=\"$(var label_file)\"/>\n+      <arg name=\"gpu_id\" value=\"$(var gpu_id)\"/>\n+      <arg name=\"use_intra_process\" value=\"$(var use_intra_process)\"/>\n+      <arg name=\"use_multithread\" value=\"$(var use_multithread)\"/>\n+      <arg name=\"output_topic\" value=\"camera0/rois0\"/>\n+    </include>\n+    <include file=\"$(find-pkg-share common_sensor_launch)/launch/camera_node_container.launch.py\">\n+      <arg name=\"mode\" value=\"$(var mode)\"/>\n+      <arg name=\"input_image\" value=\"$(var image_1)\"/>\n+      <arg name=\"camera_container_name\" value=\"front_camera_container\"/>\n+      <arg name=\"yolo_type\" value=\"$(var yolo_type)\"/>\n+      <arg name=\"label_file\" value=\"$(var label_file)\"/>\n+      <arg name=\"gpu_id\" value=\"$(var gpu_id)\"/>\n+      <arg name=\"use_intra_process\" value=\"$(var use_intra_process)\"/>\n+      <arg name=\"use_multithread\" value=\"$(var use_multithread)\"/>\n+      <arg name=\"output_topic\" value=\"camera1/rois1\"/>\n+    </include>\n+    ...\n+  </group>\n

Since there is only one camera on tutorial_vehicle, camera.launch.xml should look like this:

    camera.launch.xml for tutorial_vehicle
    <launch>\n<arg name=\"launch_driver\" default=\"true\"/>\n<!--    common parameters -->\n<arg name=\"image_0\" default=\"camera_0\" description=\"image raw topic name\"/>\n\n<!--    tensorrt params -->\n<arg name=\"mode\" default=\"FP32\"/>\n<arg name=\"yolo_type\" default=\"yolov3\" description=\"choose image raw number(0-7)\"/>\n<arg name=\"label_file\" default=\"coco.names\" description=\"choose image raw number(0-7)\"/>\n<arg name=\"gpu_id\" default=\"0\" description=\"choose image raw number(0-7)\"/>\n<arg name=\"use_intra_process\" default=\"true\"/>\n<arg name=\"use_multithread\" default=\"true\"/>\n\n<group>\n<push-ros-namespace namespace=\"camera\"/>\n<include file=\"$(find-pkg-share common_sensor_launch)/launch/camera_node_container.launch.py\">\n<arg name=\"mode\" value=\"$(var mode)\"/>\n<arg name=\"input_image\" value=\"$(var image_0)\"/>\n<arg name=\"camera_container_name\" value=\"front_camera_container\"/>\n<arg name=\"yolo_type\" value=\"$(var yolo_type)\"/>\n<arg name=\"label_file\" value=\"$(var label_file)\"/>\n<arg name=\"gpu_id\" value=\"$(var gpu_id)\"/>\n<arg name=\"use_intra_process\" value=\"$(var use_intra_process)\"/>\n<arg name=\"use_multithread\" value=\"$(var use_multithread)\"/>\n<arg name=\"output_topic\" value=\"camera0/rois0\"/>\n</include>\n</group>\n</launch>\n

You can check the 2D detection pipeline by launching camera.launch.xml, but we need to build the driver and tensorrt_yolo packages first. We will add our sensor driver to the sensor_kit_launch package.xml dependencies.

    + <exec_depend><YOUR-CAMERA-DRIVER-PACKAGE></exec_depend>\n(optionally, if you will launch tensorrt_yolo at here)\n+ <exec_depend>tensorrt_yolo</exec_depend>\n

    Build necessary packages with:

    cd <YOUR-AUTOWARE-DIR>\ncolcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release --packages-up-to common_sensor_launch <YOUR-VEHICLE-NAME>_sensor_kit_launch\n

    Then, you can test your camera pipeline:

    ros2 launch <YOUR-SENSOR-KIT-LAUNCH> camera.launch.xml\n# example for tutorial_vehicle: ros2 launch tutorial_vehicle_sensor_kit_launch camera.launch.xml\n

Then the rois topics will appear; you can check the debug image with rviz2 or rqt, for example as shown below.
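For example (the topic names depend on your namespaces and remappings, so treat these commands as illustrative):

# List the detection output topics (exact names depend on your configuration):
ros2 topic list | grep rois
# Inspect the debug image, for example with rqt_image_view, and pick the debug image topic in the GUI:
ros2 run rqt_image_view rqt_image_view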

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-sensor-model/#gnssins-launching","title":"GNSS/INS Launching","text":"

We will set up the GNSS/INS sensor launch in gnss.launch.xml. The default GNSS sensor options in sample_sensor_kit_launch, u-blox and septentrio, are included in gnss.launch.xml, so if we use another sensor as the GNSS/INS receiver, we need to add it here. Moreover, the gnss_poser package is launched here; we will use this package as the pose source of our vehicle at localization initialization, but remember that your sensor driver must provide the Autoware GNSS orientation message for this node. Once your GNSS/INS driver is ready, you must set the navsatfix_topic_name and orientation_topic_name variables in this launch file as gnss_poser arguments. For example, the necessary modifications should look like this:

      ...\n- <arg name=\"gnss_receiver\" default=\"ublox\" description=\"ublox(default) or septentrio\"/>\n+ <arg name=\"gnss_receiver\" default=\"<YOUR-GNSS-SENSOR>\" description=\"ublox(default), septentrio or <YOUR-GNSS-SENSOR>\"/>\n\n <group>\n    <push-ros-namespace namespace=\"gnss\"/>\n\n    <!-- Switch topic name -->\n    <let name=\"navsatfix_topic_name\" value=\"ublox/nav_sat_fix\" if=\"$(eval &quot;'$(var gnss_receiver)'=='ublox'&quot;)\"/>\n    <let name=\"navsatfix_topic_name\" value=\"septentrio/nav_sat_fix\" if=\"$(eval &quot;'$(var gnss_receiver)'=='septentrio'&quot;)\"/>\n+   <let name=\"navsatfix_topic_name\" value=\"<YOUR-SENSOR>/nav_sat_fix\" if=\"$(eval &quot;'$(var gnss_receiver)'=='<YOUR-GNSS-SENSOR>'&quot;)\"/>\n   <let name=\"orientation_topic_name\" value=\"/autoware_orientation\"/>\n\n    ...\n\n+   <!-- YOUR GNSS Driver -->\n+   <group if=\"$(eval &quot;'$(var launch_driver)' and '$(var gnss_receiver)'=='<YOUR-GNSS-SENSOR>'&quot;)\">\n+     <include file=\"$(find-pkg-share <YOUR-GNSS-SENSOR-DRIVER-PKG>)/launch/<YOUR-GNSS-SENSOR>.launch.xml\"/>\n+   </group>\n   ...\n-   <arg name=\"gnss_frame\" value=\"gnss_link\"/>\n+   <arg name=\"gnss_frame\" value=\"<YOUR-GNSS-SENSOR-FRAME>\"/>\n   ...\n

Also, you can remove unused dependencies and sensor launch files from gnss.launch.xml. For example, we will use the Clap B7 sensor as a GNSS/INS and IMU sensor, and we will use ntrip_client_ros for RTK. We will also add these packages to the autoware.repos file.

    + sensor_component/external/clap_b7_driver:\n+   type: git\n+   url: https://github.com/Robeff-Technology/clap_b7_driver.git\n+   version: release/autoware\n+ sensor_component/external/ntrip_client_ros :\n+   type: git\n+   url: https://github.com/Robeff-Technology/ntrip_client_ros.git\n+   version: release/humble\n

So, our gnss.launch.xml for the tutorial vehicle should look like the file below (the Clap B7 also includes an IMU, so we will add imu_corrector in this file):

    gnss.launch.xml for tutorial_vehicle
    <launch>\n<arg name=\"launch_driver\" default=\"true\"/>\n\n<group>\n<push-ros-namespace namespace=\"gnss\"/>\n\n<!-- Switch topic name -->\n<let name=\"navsatfix_topic_name\" value=\"/clap/ros/gps_nav_sat_fix\"/>\n<let name=\"orientation_topic_name\" value=\"/clap/autoware_orientation\"/>\n\n<!-- CLAP GNSS Driver -->\n<group if=\"$(eval &quot;'$(var launch_driver)'\">\n<node pkg=\"clap_b7_driver\" exec=\"clap_b7_driver_node\" name=\"clap_b7_driver\" output=\"screen\">\n<param from=\"$(find-pkg-share clap_b7_driver)/config/clap_b7_driver.param.yaml\"/>\n</node>\n<!-- ntrip Client -->\n<include file=\"$(find-pkg-share ntrip_client_ros)/launch/ntrip_client_ros.launch.py\"/>\n</group>\n\n<!-- NavSatFix to MGRS Pose -->\n<include file=\"$(find-pkg-share autoware_gnss_poser)/launch/gnss_poser.launch.xml\">\n<arg name=\"input_topic_fix\" value=\"$(var navsatfix_topic_name)\"/>\n<arg name=\"input_topic_orientation\" value=\"$(var orientation_topic_name)\"/>\n\n<arg name=\"output_topic_gnss_pose\" value=\"pose\"/>\n<arg name=\"output_topic_gnss_pose_cov\" value=\"pose_with_covariance\"/>\n<arg name=\"output_topic_gnss_fixed\" value=\"fixed\"/>\n\n<arg name=\"use_gnss_ins_orientation\" value=\"true\"/>\n<!-- Please enter your gnss frame here -->\n<arg name=\"gnss_frame\" value=\"GNSS_INS/gnss_ins_link\"/>\n</include>\n</group>\n\n<!-- IMU corrector -->\n<group>\n<push-ros-namespace namespace=\"imu\"/>\n<include file=\"$(find-pkg-share autoware_imu_corrector)/launch/imu_corrector.launch.xml\">\n<arg name=\"input_topic\" value=\"/sensing/gnss/clap/ros/imu\"/>\n<arg name=\"output_topic\" value=\"imu_data\"/>\n<arg name=\"param_file\" value=\"$(find-pkg-share individual_params)/config/$(var vehicle_id)/robione_sensor_kit/imu_corrector.param.yaml\"/>\n</include>\n<include file=\"$(find-pkg-share autoware_imu_corrector)/launch/gyro_bias_estimator.launch.xml\">\n<arg name=\"input_imu_raw\" value=\"/sensing/gnss/clap/ros/imu\"/>\n<arg name=\"input_twist\" value=\"/sensing/vehicle_velocity_converter/twist_with_covariance\"/>\n<arg name=\"imu_corrector_param_file\" value=\"$(find-pkg-share individual_params)/config/$(var vehicle_id)/robione_sensor_kit/imu_corrector.param.yaml\"/>\n</include>\n</group>\n</launch>\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-sensor-model/#imu-launching","title":"IMU Launching","text":"

You can add your IMU sensor launch in the imu.launch.xml file. In the sample_sensor_kit, a Tamagawa IMU sensor is used; you can add your own IMU driver instead of the Tamagawa IMU driver. We will also launch gyro_bias_estimator and imu_corrector in the imu.launch.xml file. Please refer to their documentation for more information. (We added imu_corrector and gyro_bias_estimator to gnss.launch.xml for tutorial_vehicle, so we will not create and use imu.launch.xml for tutorial_vehicle.) Please don't forget to change the imu_raw_name argument to your raw IMU topic.

    Here is a sample imu.launch.xml launch file for autoware:

    <launch>\n  <arg name=\"launch_driver\" default=\"true\"/>\n\n  <group>\n    <push-ros-namespace namespace=\"imu\"/>\n\n-     <group>\n-       <push-ros-namespace namespace=\"tamagawa\"/>\n-       <node pkg=\"tamagawa_imu_driver\" name=\"tag_serial_driver\" exec=\"tag_serial_driver\" if=\"$(var launch_driver)\">\n-         <remap from=\"imu/data_raw\" to=\"imu_raw\"/>\n-         <param name=\"port\" value=\"/dev/imu\"/>\n-         <param name=\"imu_frame_id\" value=\"tamagawa/imu_link\"/>\n-       </node>\n-     </group>\n\n+     <group>\n+       <push-ros-namespace namespace=\"<YOUR-IMU_MODEL>\"/>\n+       <node pkg=\"<YOUR-IMU-DRIVER-PACKAGE>\" name=\"<YOUR-IMU-DRIVER>\" exec=\"<YOUR-IMU-DRIVER-EXECUTIBLE>\" if=\"$(var launch_driver)\">\n+       <!-- Add necessary params here -->\n+       </node>\n+     </group>\n\n-   <arg name=\"imu_raw_name\" default=\"tamagawa/imu_raw\"/>\n+   <arg name=\"imu_raw_name\" default=\"<YOUR-IMU_MODEL/YOUR-RAW-IMU-TOPIC>\"/>\n   <arg name=\"imu_corrector_param_file\" default=\"$(find-pkg-share individual_params)/config/$(var vehicle_id)/sample_sensor_kit/imu_corrector.param.yaml\"/>\n    <include file=\"$(find-pkg-share autoware_imu_corrector)/launch/imu_corrector.launch.xml\">\n      <arg name=\"input_topic\" value=\"$(var imu_raw_name)\"/>\n      <arg name=\"output_topic\" value=\"imu_data\"/>\n      <arg name=\"param_file\" value=\"$(var imu_corrector_param_file)\"/>\n    </include>\n\n    <include file=\"$(find-pkg-share autoware_imu_corrector)/launch/gyro_bias_estimator.launch.xml\">\n      <arg name=\"input_imu_raw\" value=\"$(var imu_raw_name)\"/>\n      <arg name=\"input_twist\" value=\"/sensing/vehicle_velocity_converter/twist_with_covariance\"/>\n      <arg name=\"imu_corrector_param_file\" value=\"$(var imu_corrector_param_file)\"/>\n    </include>\n  </group>\n</launch>\n

Please make the necessary modifications to this file according to your IMU driver. Since there is no dedicated IMU sensor on tutorial_vehicle, we will remove the IMU launch from sensing.launch.xml.

    -   <!-- IMU Driver -->\n-   <include file=\"$(find-pkg-share tutorial_vehicle_sensor_kit_launch)/launch/imu.launch.xml\">\n-     <arg name=\"launch_driver\" value=\"$(var launch_driver)\"/>\n-   </include>\n

    You can add or remove launch files in sensing.launch.xml according to your sensor architecture.

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-vehicle-model/","title":"Creating a vehicle model for Autoware","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-vehicle-model/#creating-a-vehicle-model-for-autoware","title":"Creating a vehicle model for Autoware","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-vehicle-model/#introduction","title":"Introduction","text":"

    This page introduces the following packages for the vehicle model:

    1. <YOUR-VEHICLE-NAME>_vehicle_description
    2. <YOUR-VEHICLE-NAME>_vehicle_launch

Previously, we forked our vehicle model in the creating Autoware repositories step. For instance, we created tutorial_vehicle_launch as an implementation example for that step. Please ensure that the _vehicle_launch repository is included in Autoware, following the directory structure below:

    <YOUR-OWN-AUTOWARE-DIR>/\n  \u2514\u2500 src/\n       \u2514\u2500 vehicle/\n            \u2514\u2500 <YOUR-VEHICLE-NAME>_vehicle_launch/\n                 \u251c\u2500 <YOUR-VEHICLE-NAME>_vehicle_description/\n                 \u2514\u2500 <YOUR-VEHICLE-NAME>_vehicle_launch/\n

If your forked Autoware meta-repository doesn't include <YOUR-VEHICLE-NAME>_vehicle_launch with the correct folder structure as shown above, please add your forked <YOUR-VEHICLE-NAME>_vehicle_launch repository to the autoware.repos file and run the vcs import src < autoware.repos command in your terminal to import the newly added repositories from the autoware.repos file.

    Now, we are ready to modify the following vehicle model packages for our vehicle. Firstly, we need to rename the description and launch packages:

    <YOUR-VEHICLE-NAME>_vehicle_launch/\n- \u251c\u2500 sample_vehicle_description/\n+ \u251c\u2500 <YOUR-VEHICLE-NAME>_vehicle_description/\n- \u2514\u2500 sample_vehicle_launch/\n+ \u2514\u2500 <YOUR-VEHICLE-NAME>_vehicle_launch/\n

    After that, we will change our package names in the package.xml file and CMakeLists.txt file of the sample_vehicle_description and sample_vehicle_launch packages. So, open the package.xml file and CMakeLists.txt file with any text editor or IDE of your preference and perform the following changes:

Change the <name> element in the package.xml file:

    <package format=\"3\">\n- <name>sample_vehicle_description</name>\n+ <name><YOUR-VEHICLE-NAME>_vehicle_description</name>\n <version>0.1.0</version>\n  <description>The vehicle_description package</description>\n  ...\n  ...\n

Change the project() call in the CMakeLists.txt file:

      cmake_minimum_required(VERSION 3.5)\n- project(sample_vehicle_description)\n+ project(<YOUR-VEHICLE-NAME>_vehicle_description)\n\n find_package(ament_cmake_auto REQUIRED)\n...\n...\n

Remember to apply the name change and the project() update for BOTH the <YOUR-VEHICLE-NAME>_vehicle_description and <YOUR-VEHICLE-NAME>_vehicle_launch ROS 2 packages. Once finished, we can proceed to build said packages:

    colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release --packages-up-to <YOUR-VEHICLE-NAME>_vehicle_description <YOUR-VEHICLE-NAME>_vehicle_launch\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-vehicle-model/#vehicle-description","title":"Vehicle description","text":"

The main purpose of this package is to describe the vehicle dimensions, the 3D model of the vehicle, the mirror dimensions, the simulator model parameters and the URDF of the vehicle.

The folder structure of the vehicle_description package is:

    <YOUR-VEHICLE-NAME>_vehicle_description/\n   \u251c\u2500 config/\n   \u2502     \u251c\u2500 mirror.param.yaml\n   \u2502     \u251c\u2500 simulator_model.param.yaml\n   \u2502     \u2514\u2500 vehicle_info.param.yaml\n   \u251c\u2500 mesh/\n   \u2502     \u251c\u2500 <YOUR-VEHICLE-MESH-FILE>.dae (or .fbx)\n   \u2502     \u251c\u2500 ...\n   \u2514\u2500 urdf/\n         \u2514\u2500 vehicle.xacro\n

    Now, we will modify these files according to our vehicle design.

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-vehicle-model/#mirrorparamyaml","title":"mirror.param.yaml","text":"

This file describes your vehicle's mirror dimensions for the CropBox filter of the PointCloudPreprocessor. This is important for cropping the mirrors out of your lidar's point cloud.

The mirror.param.yaml consists of the following parameters:

    /**:\nros__parameters:\nmin_longitudinal_offset: 0.0\nmax_longitudinal_offset: 0.0\nmin_lateral_offset: 0.0\nmax_lateral_offset: 0.0\nmin_height_offset: 0.0\nmax_height_offset: 0.0\n

The mirror param file should be filled with these dimensions. Please be careful with the min_lateral_offset parameter; it can be a negative value, as in the mirror dimension figure below.

    Dimension demonstration for mirror.param.yaml

    Warning

Since there are no mirrors on tutorial_vehicle, all values are set to 0.0. If your vehicle does not have mirrors, you can set these values to 0.0 as well.

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-vehicle-model/#simulator_modelparamyaml","title":"simulator_model.param.yaml","text":"

    This file is a configuration file for the simulator environment. Please update these parameters according to your vehicle specifications. For detailed information about variables, please check the simple_planning_simulator package. The file consists of these parameters:

    /**:\nros__parameters:\nsimulated_frame_id: \"base_link\" # center of the rear axle.\norigin_frame_id: \"map\"\nvehicle_model_type: \"DELAY_STEER_ACC_GEARED\" # options: IDEAL_STEER_VEL / IDEAL_STEER_ACC / IDEAL_STEER_ACC_GEARED / DELAY_STEER_ACC / DELAY_STEER_ACC_GEARED\ninitialize_source: \"INITIAL_POSE_TOPIC\" #  options: ORIGIN / INITIAL_POSE_TOPIC\ntimer_sampling_time_ms: 25\nadd_measurement_noise: False # the Gaussian noise is added to the simulated results\nvel_lim: 50.0 # limit of velocity\nvel_rate_lim: 7.0 # limit of acceleration\nsteer_lim: 1.0 # limit of steering angle\nsteer_rate_lim: 5.0 # limit of steering angle change rate\nacc_time_delay: 0.1 # dead time for the acceleration input\nacc_time_constant: 0.1 # time constant of the 1st-order acceleration dynamics\nsteer_time_delay: 0.24 # dead time for the steering input\nsteer_time_constant: 0.27 # time constant of the 1st-order steering dynamics\nx_stddev: 0.0001 # x standard deviation for dummy covariance in map coordinate\ny_stddev: 0.0001 # y standard deviation for dummy covariance in map coordinate\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-vehicle-model/#vehicle_infoparamyaml","title":"vehicle_info.param.yaml","text":"

This file stores the vehicle dimensions for Autoware modules. Please update it with your vehicle information. You can refer to the vehicle dimensions page for a detailed demonstration of the dimensions. Here is the vehicle_info.param.yaml for sample_vehicle:

    /**:\nros__parameters:\nwheel_radius: 0.383 # The radius of the wheel, primarily used for dead reckoning.\nwheel_width: 0.235 # The lateral width of a wheel tire, primarily used for dead reckoning.\nwheel_base: 2.79 # between front wheel center and rear wheel center\nwheel_tread: 1.64 # between left wheel center and right wheel center\nfront_overhang: 1.0 # between front wheel center and vehicle front\nrear_overhang: 1.1 # between rear wheel center and vehicle rear\nleft_overhang: 0.128 # between left wheel center and vehicle left\nright_overhang: 0.128 # between right wheel center and vehicle right\nvehicle_height: 2.5\nmax_steer_angle: 0.70 # [rad]\n

    Please update vehicle_info.param.yaml with your vehicle information.
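As a quick sanity check, the overall dimensions that follow from these values can be computed with the standard footprint relations (a minimal illustrative C++ sketch, not taken from a specific Autoware package):

#include <cmath>
#include <cstdio>

int main()
{
  // Values from the sample_vehicle vehicle_info.param.yaml above.
  const double wheel_base = 2.79, wheel_tread = 1.64;
  const double front_overhang = 1.0, rear_overhang = 1.1;
  const double left_overhang = 0.128, right_overhang = 0.128;
  const double max_steer_angle = 0.70;  // [rad]

  const double vehicle_length = front_overhang + wheel_base + rear_overhang;
  const double vehicle_width = left_overhang + wheel_tread + right_overhang;
  // Minimum turning radius under a kinematic bicycle approximation.
  const double min_turning_radius = wheel_base / std::tan(max_steer_angle);

  std::printf("length: %.3f m, width: %.3f m, min turning radius: %.3f m\n",
    vehicle_length, vehicle_width, min_turning_radius);
  return 0;
}

For the sample values this gives roughly a 4.89 m long, 1.90 m wide vehicle with a minimum turning radius of about 3.3 m.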

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-vehicle-model/#3d-model-of-vehicle","title":"3D model of vehicle","text":"

You can use the .fbx or .dae format for the 3D model used with Autoware. For the tutorial_vehicle, we exported our 3D model as a .fbx file in the tutorial_vehicle_launch repository. We will set the .fbx file path in the vehicle.xacro file.

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-vehicle-model/#vehiclexacro","title":"vehicle.xacro","text":"

    This .xacro file links the base_link of the vehicle to the 3D mesh. Therefore, we need to make some modifications in this file.

    <?xml version=\"1.0\"?>\n<robot xmlns:xacro=\"http://ros.org/wiki/xacro\">\n  <!-- load parameter -->\n- <xacro:property name=\"vehicle_info\" value=\"${xacro.load_yaml('$(find sample_vehicle_description)/config/vehicle_info.param.yaml')}\"/>\n+ <xacro:property name=\"vehicle_info\" value=\"${xacro.load_yaml('$(find <YOUR-VEHICLE-NAME>_vehicle_description)/config/vehicle_info.param.yaml')}\"/>\n\n <!-- vehicle body -->\n  <link name=\"base_link\">\n    <visual>\n      <origin xyz=\"${vehicle_info['/**']['ros__parameters']['wheel_base']/2.0} 0 0\" rpy=\"${pi/2.0} 0 ${pi}\"/>\n      <geometry>\n-       <mesh filename=\"package://sample_vehicle_description/mesh/lexus.dae\" scale=\"1 1 1\"/>\n+       <mesh filename=\"package://<YOUR-VEHICLE-NAME>_vehicle_description/mesh/<YOUR-3D-MESH-FILE>\" scale=\"1 1 1\"/>\n     </geometry>\n    </visual>\n  </link>\n</robot>\n

    You can also modify roll, pitch, yaw, x, y, z and scale values for the correct position and orientation of the vehicle.

Please build the vehicle description and launch packages after completing your <YOUR-VEHICLE-NAME>_vehicle_description package:

    cd <YOUR-AUTOWARE-DIR>\ncolcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release --packages-up-to <YOUR-VEHICLE-NAME>_vehicle_description <YOUR-VEHICLE-NAME>_vehicle_launch\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-vehicle-model/#launching-vehicle-interface","title":"Launching vehicle interface","text":"

    If your vehicle interface is ready, then you can add your vehicle_interface launch file in vehicle_interface.launch.xml. Please check the creating vehicle interface page for more info.

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-vehicle-model/#launch-planning-simulator-with-your-own-vehicle","title":"Launch planning simulator with your own vehicle","text":"

    After completing the sensor_model, individual_parameters and vehicle model of your vehicle, you are ready to launch the planning simulator with your own vehicle. If you are not sure if every custom package in your Autoware project folder is built, please build all packages:

    cd <YOUR-AUTOWARE-DIR>\ncolcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release\n

    To launch the planning simulator, source the install/setup.bash file in your Autoware project folder and run this command in your terminal:

    ros2 launch autoware_launch planning_simulator.launch.xml map_path:=$HOME/Files/autoware_map/sample-map-planning/ vehicle_model:=<YOUR-VEHICLE-MODEL> sensor_model:=<YOUR-SENSOR-KIT> vehicle_id:=<YOUR-VEHICLE-ID>\n

    For example, if we try planning simulator with the tutorial_vehicle:

    ros2 launch autoware_launch planning_simulator.launch.xml map_path:=$HOME/Files/autoware_map/sample-map-planning/ vehicle_model:=tutorial_vehicle sensor_model:=tutorial_vehicle_sensor_kit vehicle_id:=tutorial_vehicle\n

The planning simulator will open, and you can give an initial pose to your vehicle using the 2D Pose Estimate button or by pressing the P key on your keyboard. You can click anywhere on the map to initialize the vehicle.

    Our tutorial_vehicle on rviz with TF data"},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/ackermann-kinematic-model/","title":"Ackermann kinematic model","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/ackermann-kinematic-model/#ackermann-kinematic-model","title":"Ackermann kinematic model","text":"

Autoware now supports control inputs for vehicles based on an Ackermann kinematic model. This section briefly introduces the Ackermann kinematic model and explains how Autoware controls it. Please remember that Autoware's control output (/control/command/control_cmd) publishes lateral and longitudinal commands according to the Ackermann kinematic model.

• If your vehicle does not fit the Ackermann kinematic model, you have to modify the control commands. Another document gives an example of how to convert Ackermann control inputs into differential drive commands.
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/ackermann-kinematic-model/#geometry","title":"Geometry","text":"

The basic form of the Ackermann kinematic model has four wheels with an Ackermann linkage on the front axle and is powered by the rear wheels. The key point of the Ackermann kinematic model is that the axes of all wheels intersect at the same point, which means all wheels trace circular trajectories with different radii but a common center point (see the figure below). This gives the model the great advantage of minimizing wheel slippage and reducing tire wear.

In general, the Ackermann kinematic model accepts the longitudinal speed \(v\) and the steering angle \(\phi\) as inputs. In Autoware, \(\phi\) is positive when steered counterclockwise, so the steering angle in the figure below is actually negative.
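To make the geometry concrete, here is a minimal C++ sketch of the resulting kinematic-bicycle relations (an illustration only; wheel_base is the value from vehicle_info.param.yaml, and the function is not part of Autoware):

#include <cmath>
#include <limits>

struct AckermannRates
{
  double yaw_rate;        // [rad/s], positive counterclockwise
  double turning_radius;  // [m], infinite when driving straight
};

// v: longitudinal speed [m/s], phi: steering angle [rad], wheel_base: [m]
AckermannRates ackermann_kinematics(double v, double phi, double wheel_base)
{
  AckermannRates out;
  const double tan_phi = std::tan(phi);
  out.yaw_rate = v * tan_phi / wheel_base;
  out.turning_radius = (std::abs(tan_phi) < 1e-9)
    ? std::numeric_limits<double>::infinity()
    : wheel_base / tan_phi;
  return out;
}

With the sign convention above, a positive (counterclockwise) steering angle yields a positive yaw rate.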

    The basic style of an Ackermann kinematic model. The left figure shows a vehicle facing straight forward, while the right figure shows a vehicle steering to the right."},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/ackermann-kinematic-model/#control","title":"Control","text":"

Autoware publishes a ROS 2 topic named control_cmd from several types of publishers. The control_cmd topic carries an AckermannControlCommand message that contains

    AckermannControlCommand
      builtin_interfaces/Time stamp\n  autoware_auto_control_msgs/AckermannLateralCommand lateral\n  autoware_auto_control_msgs/LongitudinalCommand longitudinal\n

    where,

    AckermannLateralCommand
      builtin_interfaces/Time stamp\n  float32 steering_tire_angle\n  float32 steering_tire_rotation_rate\n
    LongitudinalCommand
  builtin_interfaces/Time stamp\n  float32 speed\n  float32 acceleration\n  float32 jerk\n

    See the AckermannLateralCommand.idl and LongitudinalCommand.idl for details.

    The vehicle interface should realize these control commands through your vehicle's control device.
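For example, a hedged sketch of such a conversion (the vehicle-side struct, the steering gear ratio, and the unit choices are hypothetical and depend entirely on your vehicle; only the AckermannControlCommand fields come from Autoware):

#include <cmath>

#include <autoware_auto_control_msgs/msg/ackermann_control_command.hpp>

// Hypothetical command format understood by a vehicle's low-level controller.
struct MyVehicleCommand
{
  double steering_wheel_angle_deg;  // steering wheel angle [deg]
  double target_speed_kmph;         // target speed [km/h]
  double target_acceleration;       // [m/s^2]
};

MyVehicleCommand convert_to_vehicle_command(
  const autoware_auto_control_msgs::msg::AckermannControlCommand & cmd,
  double steering_gear_ratio)  // assumed ratio between steering wheel and tire angle
{
  const double pi = std::acos(-1.0);
  MyVehicleCommand out;
  // steering_tire_angle is given in [rad] at the tires; scale it to the steering wheel.
  out.steering_wheel_angle_deg =
    cmd.lateral.steering_tire_angle * steering_gear_ratio * 180.0 / pi;
  out.target_speed_kmph = cmd.longitudinal.speed * 3.6;  // [m/s] -> [km/h]
  out.target_acceleration = cmd.longitudinal.acceleration;
  return out;
}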

Moreover, Autoware also provides brake commands, light commands, and more (see vehicle interface design), so the vehicle interface module should support these commands as well, as long as the vehicle has devices available to handle them.

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/creating-vehicle-interface/","title":"Creating vehicle interface","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/creating-vehicle-interface/#creating-vehicle-interface","title":"Creating vehicle interface","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/creating-vehicle-interface/#how-to-implement-a-vehicle-interface","title":"How to implement a vehicle interface","text":"

    The following instructions describe how to create a vehicle interface.

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/creating-vehicle-interface/#1-create-a-directory-for-vehicle-interface","title":"1. Create a directory for vehicle interface","text":"

    It is recommended to create your vehicle interface at <your-autoware-dir>/src/vehicle/external

    cd <your-autoware-dir>/src/vehicle/external\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/creating-vehicle-interface/#2-install-or-implement-your-own-vehicle-interface","title":"2. Install or implement your own vehicle interface","text":"

If a complete vehicle interface package already exists (like pacmod_interface), you can install it in your environment. If not, you have to implement your own vehicle interface yourself. Let's create a new package with ros2 pkg create. The following example shows how to create a vehicle interface package named my_vehicle_interface.

    ros2 pkg create --build-type ament_cmake my_vehicle_interface\n

    Then, you should write your implementation of vehicle interface in my_vehicle_interface/src. Again, since this implementation is so specific to the control device of your vehicle, it is beyond the scope of this document to describe how to implement your vehicle interface in detail. Here are some factors that might be considered:

• Required subscriptions to the control command topics that Autoware publishes to control your vehicle:
| Topic Name | Topic Type | Description |
|---|---|---|
| /control/command/control_cmd | autoware_auto_control_msgs/msg/AckermannControlCommand | This topic carries the main commands for controlling the vehicle, such as steering tire angle, speed, and acceleration. |
| /control/command/gear_cmd | autoware_auto_vehicle_msgs/msg/GearCommand | This topic carries the gear command for autonomous driving. Please check the GearCommand message definition to understand the gear values. |
| /control/current_gate_mode | tier4_control_msgs/msg/GateMode | This topic describes whether the vehicle is currently controlled by Autoware or not. Please check the GateMode message type for detailed information. |
| /control/command/emergency_cmd | tier4_vehicle_msgs/msg/VehicleEmergencyStamped | This topic sends an emergency command when Autoware is in an emergency state. Please check the VehicleEmergencyStamped message type for detailed information. |
| /control/command/turn_indicators_cmd | autoware_auto_vehicle_msgs/msg/TurnIndicatorsCommand | This topic carries the turn signal command for your vehicle. Please check the TurnIndicatorsCommand message type for detailed information. |
| /control/command/hazard_lights_cmd | autoware_auto_vehicle_msgs/msg/HazardLightsCommand | This topic carries the command for the hazard lights. Please check the HazardLightsCommand message type for detailed information. |
| /control/command/actuation_cmd | tier4_vehicle_msgs/msg/ActuationCommandStamped | This topic is used when you control your vehicle with Type B via the raw_vehicle_cmd_converter, as mentioned in the vehicle interface section. In that case, this topic carries gas, brake, and steering-wheel actuation commands. Please check the ActuationCommandStamped message type for detailed information. |
| etc. | etc. | etc. |
• Required publications of vehicle status topics from the vehicle interface to Autoware:
| Topic Name | Topic Type | Description |
|---|---|---|
| /vehicle/status/battery_charge | tier4_vehicle_msgs/msg/BatteryStatus | This topic carries battery information. Please check the BatteryStatus message type for detailed information. You can use this value to describe the fuel level, etc. |
| /vehicle/status/control_mode | autoware_auto_vehicle_msgs/msg/ControlModeReport | This topic describes the current control mode of the vehicle. Please check the ControlModeReport message type for detailed information. |
| /vehicle/status/gear_status | autoware_auto_vehicle_msgs/msg/GearReport | This topic carries the current gear status of the vehicle. Please check the GearReport message type for detailed information. |
| /vehicle/status/hazard_lights_status | autoware_auto_vehicle_msgs/msg/HazardLightsReport | This topic describes the hazard light status of the vehicle. Please check the HazardLightsReport message type for detailed information. |
| /vehicle/status/turn_indicators_status | autoware_auto_vehicle_msgs/msg/TurnIndicatorsReport | This topic reports the turn indicator status of the vehicle. Please check the TurnIndicatorsReport message type for detailed information. |
| /vehicle/status/steering_status | autoware_auto_vehicle_msgs/msg/SteeringReport | This topic reports the steering status of the vehicle. Please check the SteeringReport message type for detailed information. |
| /vehicle/status/velocity_status | autoware_auto_vehicle_msgs/msg/VelocityReport | This topic reports the velocity status of the vehicle. Please check the VelocityReport message type for detailed information. |
| etc. | etc. | etc. |

The diagram below shows an example of the communication between the vehicle interface and Autoware, with sample topics and message types.

Sample demonstration of vehicle and Autoware communication. The topics and message types in this diagram may differ depending on your control commands and on Autoware updates.

You must create subscribers and publishers for these topics in your vehicle interface. Let's demonstrate this with a simple example that subscribes to /control/command/control_cmd and publishes /vehicle/status/gear_status.

So, your <YOUR-OWN-VEHICLE-INTERFACE>.hpp header file should look like this:

...\n#include <autoware_auto_control_msgs/msg/ackermann_control_command.hpp>\n#include <autoware_auto_vehicle_msgs/msg/gear_report.hpp>\n...\n\nclass <YOUR-OWN-VEHICLE-INTERFACE> : public rclcpp::Node\n{\npublic:\n...\nprivate:\n...\n// from autoware\nrclcpp::Subscription<autoware_auto_control_msgs::msg::AckermannControlCommand>::SharedPtr\ncontrol_cmd_sub_;\n...\n// to autoware\nrclcpp::Publisher<autoware_auto_vehicle_msgs::msg::GearReport>::SharedPtr gear_status_pub_;\n...\n// autoware command messages\n...\nautoware_auto_control_msgs::msg::AckermannControlCommand::ConstSharedPtr control_cmd_ptr_;\n...\n// callbacks\n...\nvoid callback_control_cmd(\nconst autoware_auto_control_msgs::msg::AckermannControlCommand::ConstSharedPtr msg);\n...\nvoid to_vehicle();\nvoid to_autoware();\n};\n

And your <YOUR-OWN-VEHICLE-INTERFACE>.cpp source file should look like this:

#include \"<YOUR-OWN-VEHICLE-INTERFACE>/<YOUR-OWN-VEHICLE-INTERFACE>.hpp\"\n...\n\n<YOUR-OWN-VEHICLE-INTERFACE>::<YOUR-OWN-VEHICLE-INTERFACE>()\n: Node(\"<YOUR-OWN-VEHICLE-INTERFACE>\")\n{\n...\n/* subscribers */\nusing std::placeholders::_1;\n// from autoware\ncontrol_cmd_sub_ = create_subscription<autoware_auto_control_msgs::msg::AckermannControlCommand>(\n\"/control/command/control_cmd\", 1, std::bind(&<YOUR-OWN-VEHICLE-INTERFACE>::callback_control_cmd, this, _1));\n...\n// to autoware\ngear_status_pub_ = create_publisher<autoware_auto_vehicle_msgs::msg::GearReport>(\n\"/vehicle/status/gear_status\", rclcpp::QoS{1});\n...\n}\n\nvoid <YOUR-OWN-VEHICLE-INTERFACE>::callback_control_cmd(\nconst autoware_auto_control_msgs::msg::AckermannControlCommand::ConstSharedPtr msg)\n{\ncontrol_cmd_ptr_ = msg;\n}\n\nvoid <YOUR-OWN-VEHICLE-INTERFACE>::to_vehicle()\n{\n...\n// you should implement this structure according to your own vehicle design\ncontrol_command_to_vehicle(control_cmd_ptr_);\n...\n}\n\nvoid <YOUR-OWN-VEHICLE-INTERFACE>::to_autoware()\n{\n...\n// you should implement this structure according to your own vehicle design\nautoware_auto_vehicle_msgs::msg::GearReport gear_report_msg;\nconvert_gear_status_to_autoware_msg(gear_report_msg);\ngear_status_pub_->publish(gear_report_msg);\n...\n}\n
    • Modification of control values if needed
  • In some cases, you may need to convert values between your vehicle and Autoware. For example, Autoware expects vehicle velocity information in m/s, but if your vehicle publishes it in a different unit (e.g., km/h), you must convert it before sending it to Autoware, as in the sketch below.
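A minimal sketch of such a conversion on the status side, assuming the vehicle reports its speed in km/h (the field names follow the autoware_auto_vehicle_msgs/msg/VelocityReport message mentioned above):

#include <autoware_auto_vehicle_msgs/msg/velocity_report.hpp>

// Convert a speed reported by the vehicle in km/h to the m/s value Autoware expects.
autoware_auto_vehicle_msgs::msg::VelocityReport make_velocity_report(double speed_kmph)
{
  autoware_auto_vehicle_msgs::msg::VelocityReport report;
  // header.stamp and header.frame_id should also be filled in a real interface.
  report.longitudinal_velocity = static_cast<float>(speed_kmph / 3.6);  // [km/h] -> [m/s]
  report.lateral_velocity = 0.0f;
  report.heading_rate = 0.0f;
  return report;
}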
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/creating-vehicle-interface/#3-prepare-a-launch-file","title":"3. Prepare a launch file","text":"

After you implement your vehicle interface (or when you want to debug it by launching it), create a launch file for your vehicle interface and include it in vehicle_interface.launch.xml, which is included in the <YOUR-VEHICLE-NAME>_launch package that we forked and created on the creating vehicle and sensor model page.

Do not get confused. First, you need to create a launch file for your own vehicle interface module (like my_vehicle_interface.launch.xml) and then include it in vehicle_interface.launch.xml, which exists in another directory. Here are the details.

    1. Add a launch directory in the my_vehicle_interface directory, and create a launch file of your own vehicle interface in it. Take a look at Creating a launch file in the ROS 2 documentation.

2. Include the launch file you created in <YOUR-VEHICLE-NAME>_launch/<YOUR-VEHICLE-NAME>_launch/launch/vehicle_interface.launch.xml by opening it and adding an include entry like the one below.

    vehicle_interface.launch.xml
    <?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<launch>\n<arg name=\"vehicle_id\" default=\"$(env VEHICLE_ID default)\"/>\n<!-- please add your created vehicle interface launch file -->\n<include file=\"$(find-pkg-share my_vehicle_interface)/launch/my_vehicle_interface.launch.xml\">\n</include>\n</launch>\n

Finally, your directory structure may look like the one below. Most of the files are omitted for clarity, but the files shown here need the modifications described in the previous and current steps.

    <your-autoware-dir>/\n\u2514\u2500 src/\n    \u2514\u2500 vehicle/\n        \u251c\u2500 external/\n+       \u2502   \u2514\u2500 <YOUR-VEHICLE-NAME>_interface/\n+       \u2502       \u251c\u2500 src/\n+       \u2502       \u2514\u2500 launch/\n+       \u2502            \u2514\u2500 my_vehicle_interface.launch.xml\n+       \u2514\u2500 <YOUR-VEHICLE-NAME>_launch/ (COPIED FROM sample_vehicle_launch)\n+           \u251c\u2500 <YOUR-VEHICLE-NAME>_launch/\n+           \u2502  \u251c\u2500 launch/\n+           \u2502  \u2502  \u2514\u2500 vehicle_interface.launch.xml\n+           \u2502  \u251c\u2500 CMakeLists.txt\n+           \u2502  \u2514\u2500 package.xml\n+           \u251c\u2500 <YOUR-VEHICLE-NAME>_description/\n+           \u2502  \u251c\u2500 config/\n+           \u2502  \u251c\u2500 mesh/\n+           \u2502  \u251c\u2500 urdf/\n+           \u2502  \u2502  \u2514\u2500 vehicle.xacro\n+           \u2502  \u251c\u2500 CMakeLists.txt\n+           \u2502  \u2514\u2500 package.xml\n+           \u2514\u2500 README.md\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/creating-vehicle-interface/#4-build-the-vehicle-interface-package-and-the-launch-package","title":"4. Build the vehicle interface package and the launch package","text":"

Build the three packages my_vehicle_interface, <YOUR-VEHICLE-NAME>_launch and <YOUR-VEHICLE-NAME>_description with colcon build, or simply build the entire Autoware workspace if you have other changes as well.

    colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release --packages-select my_vehicle_interface <YOUR-VEHICLE-NAME>_launch <YOUR-VEHICLE-NAME>_description\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/creating-vehicle-interface/#5-when-you-launch-autoware","title":"5. When you launch Autoware","text":"

Finally, you are done implementing your vehicle interface module! Make sure to launch Autoware with the proper vehicle_model option, as in the example below. This example launches the planning simulator.

ros2 launch autoware_launch planning_simulator.launch.xml map_path:=$HOME/autoware_map/sample-map-planning vehicle_model:=<YOUR-VEHICLE-NAME> sensor_model:=<YOUR-VEHICLE-NAME>_sensor_kit\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/creating-vehicle-interface/#tips","title":"Tips","text":"

    There are some tips that may help you.

    • You can subdivide your vehicle interface into smaller packages if you want. Then your directory structure may look like below (not the only way though). Do not forget to launch all packages in my_vehicle_interface.launch.xml.

      <your-autoware-dir>/\n\u2514\u2500 src/\n    \u2514\u2500 vehicle/\n        \u251c\u2500 external/\n        \u2502   \u2514\u2500 my_vehicle_interface/\n        \u2502       \u251c\u2500 src/\n        \u2502       \u2502   \u251c\u2500 package1/\n        \u2502       \u2502   \u251c\u2500 package2/\n        \u2502       \u2502   \u2514\u2500 package3/\n        \u2502       \u2514\u2500 launch/\n        \u2502            \u2514\u2500 my_vehicle_interface.launch.xml\n        \u251c\u2500 sample_vehicle_launch/\n        \u2514\u2500 my_vehicle_name_launch/\n
• If you are using a vehicle interface and launch package from an open git repository, or have created your own as a git repository, it is highly recommended to add those repositories to your autoware.repos file, which is located directly under your autoware folder, like the example below. You can specify the branch or commit hash with the version tag.

      autoware.repos
      # vehicle (this section should be somewhere in autoware.repos and add the below)\nvehicle/external/my_vehicle_interface:\ntype: git\nurl: https://github.com/<repository-name-B>/my_vehicle_interface.git\nversion: main\n

      Then you can import your entire environment easily to another local device by using the vcs import command. (See the source installation guide)

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/customizing-for-differential-drive-model/","title":"Customizing for differential drive vehicle","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/customizing-for-differential-drive-model/#customizing-for-differential-drive-vehicle","title":"Customizing for differential drive vehicle","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/customizing-for-differential-drive-model/#1-introduction","title":"1. Introduction","text":"

    Currently, Autoware assumes that vehicles use an Ackermann kinematic model with Ackermann steering. Thus, Autoware adopts the Ackermann command format for the Control module's output (see the AckermannDrive ROS message definition for an overview of Ackermann commands, and the AckermannControlCommands struct used in Autoware for more details).

However, it is possible to integrate Autoware with a vehicle that follows a differential drive kinematic model, as is common for small mobile robots. Differential drive vehicles can be either four-wheel or two-wheel, as shown in the figure below.

    Sample differential vehicles with four-wheel and two-wheel models."},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/customizing-for-differential-drive-model/#2-procedure","title":"2. Procedure","text":"

    One simple way of using Autoware with a differential drive vehicle is to create a vehicle_interface package that translates Ackermann commands to differential drive commands. Here are two points that you need to consider:

    • Create vehicle_interface package for differential drive vehicle
    • Set an appropriate wheel_base
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/customizing-for-differential-drive-model/#21-create-a-vehicle_interface-package-for-differential-drive-vehicle","title":"2.1 Create a vehicle_interface package for differential drive vehicle","text":"

    An Ackermann command in Autoware consists of two main control inputs:

    • steering angle (\\(\\omega\\))
    • velocity (\\(v\\))

    Conversely, a typical differential drive command consists of the following inputs:

    • left wheel velocity (\\(v_l\\))
    • right wheel velocity (\\(v_r\\))

    So, one way in which an Ackermann command can be converted to a differential drive command is by using the following equations:

    \\[ v_l = v - \\frac{l\\omega}{2}, v_r = v + \\frac{l\\omega}{2} \\]

    where \\(l\\) denotes wheel tread.

Here is an example .cpp snippet for converting Ackermann commands to differential drive commands:

...\nvoid convert_ackermann_to_differential(\nconst autoware_auto_control_msgs::msg::AckermannControlCommand & ackermann_msg,\nmy_vehicle_msgs::msg::DifferentialCommand & differential_command)\n{\ndifferential_command.left_wheel.velocity =\nackermann_msg.longitudinal.speed - (ackermann_msg.lateral.steering_tire_angle * my_wheel_tread) / 2;\ndifferential_command.right_wheel.velocity =\nackermann_msg.longitudinal.speed + (ackermann_msg.lateral.steering_tire_angle * my_wheel_tread) / 2;\n}\n...\n

    For information about other factors that need to be considered when creating a vehicle_interface package, refer to the creating vehicle_interface page.

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/customizing-for-differential-drive-model/#22-set-an-appropriate-wheel_base","title":"2.2 Set an appropriate wheel_base","text":"

    A differential drive robot does not necessarily have front and rear wheels, which means that the wheelbase (the horizontal distance between the axles of the front and rear wheels) cannot be defined. However, Autoware expects wheel_base to be set in vehicle_info.param.yaml with some value. Thus, you need to set a pseudo value for wheel_base.

    The appropriate pseudo value for wheel_base depends on the size of your vehicle. Setting it to be the same value as wheel_tread is one possible choice.

    Warning

    • If the wheel_base value is set too small then the vehicle may behave unexpectedly. For example, the vehicle may drive beyond the bounds of a calculated path.
    • Conversely, if wheel_base is set too large, the vehicle's range of motion will be restricted. The reason being that Autoware's Planning module will calculate an overly conservative trajectory based on the assumed vehicle length.
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/customizing-for-differential-drive-model/#3-known-issues","title":"3. Known issues","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/customizing-for-differential-drive-model/#motion-model-incompatibility","title":"Motion model incompatibility","text":"

    Since Autoware assumes that vehicles use a steering system, it is not possible to take advantage of the flexibility of a differential drive system's motion model.

    For example, when planning a parking maneuver with the freespace_planner module, Autoware may drive the differential drive vehicle forward and backward, even if the vehicle can be parked with a simpler trajectory that uses pure rotational movement.

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/vehicle-interface/","title":"Vehicle interface overview","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/vehicle-interface/#vehicle-interface","title":"Vehicle interface","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/vehicle-interface/#what-is-the-vehicle-interface","title":"What is the vehicle interface?","text":"

The purpose of the vehicle interface package is to convert the control messages computed by Autoware into a form the vehicle can understand and to transmit them to the vehicle (CAN, serial message, etc.), and to decode the data coming from the vehicle and publish it on the ROS 2 topics that Autoware expects. So, we can say that the purposes of the vehicle_interface are:

    1. Converting Autoware control commands to a vehicle-specific format. Example control commands of autoware:

      • lateral controls: steering tire angle, steering tire rotation rate
      • longitudinal controls: speed, acceleration, jerk
    2. Converting vehicle status information in a vehicle-specific format to Autoware messages.

    Autoware publishes control commands such as:

    • Velocity control
    • Steering control
    • Car light commands
    • etc.

Then, the vehicle interface converts these commands into actuation signals such as:

    • Motor and brake activation
    • Steering-wheel operation
    • Lighting control
    • etc.

    So think of the vehicle interface as a module that runs the vehicle's control device to realize the input commands provided by Autoware.

    An example of inputs and outputs for vehicle interface

    There are two types of interfaces for controlling your own vehicle:

1. Target steering and target velocity/acceleration interface (referred to as Type A in this document).
2. Generalized target command interface, i.e., accel/brake pedal and steering torque (referred to as Type B in this document).

For Type A, the control interface encompasses target steering and target velocity/acceleration:

• Speed is controlled through a desired velocity or acceleration.
• Steering is controlled through a desired steering angle and/or steering angle rate.

On the other hand, Type B, characterized by a generalized target command interface (e.g., accel/brake pedal, steering torque), introduces a more dynamic and adaptable control scheme. In this configuration, the vehicle is driven directly by the actuation commands that raw_vehicle_cmd_converter produces from Autoware's control output, allowing for a more intuitive and human-like driving experience.

If you control your vehicle this way (i.e., through the gas and brake pedals), you need to use the raw_vehicle_cmd_converter package to convert Autoware's control command into gas, brake, and steer actuation values via calibrated maps. To do that, you will need brake, gas, and steering calibration, which you can obtain with the accel_brake_map_calibrator package. Please follow its steps to calibrate your vehicle's actuation.
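The calibrated map can be thought of as a 2D table that relates the current velocity and a candidate pedal position to the acceleration the vehicle actually achieves. Below is a minimal, hedged C++ sketch of such a table lookup (nearest-neighbor for simplicity); it only illustrates the idea, is not the raw_vehicle_cmd_converter implementation, and the data layout is hypothetical:

#include <cmath>
#include <cstddef>
#include <vector>

// Illustrative accel map: rows follow the velocity grid, columns follow the pedal
// grid, and each cell stores the acceleration achieved at that operating point.
struct AccelMap
{
  std::vector<double> velocities;           // [m/s], ascending
  std::vector<double> pedals;               // throttle positions, ascending
  std::vector<std::vector<double>> accel;   // accel[i][j] for (velocities[i], pedals[j])

  // Return the pedal position whose mapped acceleration is closest to the target
  // acceleration at the velocity grid point nearest to the current velocity.
  double lookup_pedal(double velocity, double target_accel) const
  {
    std::size_t row = 0;
    for (std::size_t i = 1; i < velocities.size(); ++i) {
      if (std::abs(velocities[i] - velocity) < std::abs(velocities[row] - velocity)) {
        row = i;
      }
    }
    std::size_t col = 0;
    for (std::size_t j = 1; j < pedals.size(); ++j) {
      if (std::abs(accel[row][j] - target_accel) < std::abs(accel[row][col] - target_accel)) {
        col = j;
      }
    }
    return pedals[col];
  }
};

A brake map works the same way for negative target accelerations, and accel_brake_map_calibrator is the tool used to fill such tables from driving data.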

The choice between these control interfaces profoundly influences the design and development process. If you plan to use Type A, you will control velocity or acceleration through a drive-by-wire system. If Type B is more suitable for your implementation, you will need to control your vehicle's brake and gas pedals directly.

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/vehicle-interface/#communication-between-the-vehicle-interface-and-your-vehicles-control-device","title":"Communication between the vehicle interface and your vehicle's control device","text":"
• If you are planning to drive your vehicle with Autoware, then your vehicle must satisfy the following requirements:
| Type A | Type B |
|---|---|
| Your vehicle can be controlled in the longitudinal direction by a target velocity or acceleration. | Your vehicle can be controlled in the longitudinal direction by specific target commands (e.g., brake and gas pedal). |
| Your vehicle can be controlled in the lateral direction by a target steering angle (optionally, a steering rate as well). | Your vehicle can be controlled in the lateral direction by specific target commands (e.g., steering torque). |
| Your vehicle must provide velocity or acceleration information and steering information, as described in the vehicle status topics above. | Your vehicle must provide velocity or acceleration information and steering information, as described in the vehicle status topics above. |
• You can also mix Type A and Type B; for example, you may want lateral control via a steering angle and longitudinal control via specific actuation targets. To do that, you must subscribe to /control/command/control_cmd to get steering information and to /control/command/actuation_cmd to get gas and brake pedal actuation commands, and then handle these messages in your own vehicle_interface design.
• You must adapt your vehicle's low-level controller to work with Autoware.
• Your vehicle must be controllable by the target shift mode and must also provide Autoware with shift information.
    • If you are using CAN communication on your vehicle, please refer to ros2_socketcan package by Autoware.
    • If you are using Serial communication on your vehicle, you can look at serial-port.
    "},{"location":"how-to-guides/integrating-autoware/creating-your-autoware-repositories/creating-autoware-repositories/","title":"Creating Autoware repositories","text":""},{"location":"how-to-guides/integrating-autoware/creating-your-autoware-repositories/creating-autoware-repositories/#creating-autoware-repositories","title":"Creating Autoware repositories","text":""},{"location":"how-to-guides/integrating-autoware/creating-your-autoware-repositories/creating-autoware-repositories/#what-is-a-meta-repository","title":"What is a Meta-repository?","text":"

    A meta-repository is a repository that manages multiple repositories, and Autoware is one of them. It serves as a centralized control point for referencing, configuring, and versioning other repositories. To accomplish this, the Autoware meta-repository includes the autoware.repos file for managing multiple repositories. We will use the VCS tool (Version Control System) to handle the .repos file. VCS provides us with the capability to import, export, and pull from multiple repositories. VCS will be used to import all the necessary repositories to build Autoware into our workspace. Please refer to the documentation for VCS and .repos file usage.

    "},{"location":"how-to-guides/integrating-autoware/creating-your-autoware-repositories/creating-autoware-repositories/#how-to-create-and-customize-your-autoware-meta-repository","title":"How to create and customize your autoware meta-repository","text":""},{"location":"how-to-guides/integrating-autoware/creating-your-autoware-repositories/creating-autoware-repositories/#1-create-autoware-meta-repository","title":"1. Create autoware meta-repository","text":"

    If you want to integrate Autoware into your vehicle, the first step is to create an Autoware meta-repository.

    One easy way is to fork the Autoware repository and clone it. (For instructions on how to fork a repository, refer to GitHub Docs)

    • In this guide, Autoware will be integrated into a tutorial_vehicle (Note: when setting up multiple types of vehicles, adding a suffix like autoware.vehicle_A or autoware.vehicle_B is recommended). For the first step, please visit the autoware repository and click the fork button. The fork process should look like this:

    Sample forking demonstration for tutorial_vehicle

    Then click \"Create fork\" button to continue. After that, we can clone our fork repository on our local system.

    git clone https://github.com/YOUR_NAME/autoware.<YOUR-VEHICLE>.git\n

    For example, it should be for our documentation:

    git clone https://github.com/leo-drive/autoware.tutorial_vehicle.git\n
    "},{"location":"how-to-guides/integrating-autoware/creating-your-autoware-repositories/creating-autoware-repositories/#11-create-vehicle-individual-repositories","title":"1.1 Create vehicle individual repositories","text":"

    To integrate Autoware into your individual vehicles, you need to fork and modify the following repositories as well:

• sample_sensor_kit_launch: This repository will be used for the sensing launch files, their pipeline organization, and the sensor descriptions. Please fork and rename it as you did with the Autoware meta-repository. At this point, our forked repository name will be tutorial_vehicle_sensor_kit_launch.
• sample_vehicle_launch: This repository will be used for the vehicle launch files, the vehicle description, and the vehicle model. Please fork and rename this repository as well. At this point, our forked repository name will be tutorial_vehicle_launch.
    • autoware_individual_params: This repository stores parameters that change depending on each vehicle (i.e. sensor calibrations). Please fork and rename this repository as well; our forked repository name will be tutorial_vehicle_individual_params.
    • autoware_launch: This repository contains node configurations and their parameters for Autoware. Please fork and rename it as the previously forked repositories; our forked repository name will be autoware_launch.tutorial_vehicle.
    "},{"location":"how-to-guides/integrating-autoware/creating-your-autoware-repositories/creating-autoware-repositories/#2-customize-autowarerepos-for-your-environment","title":"2. Customize autoware.repos for your environment","text":"

    You need to customize your autoware.repos to import your forked repositories. The autoware.repos file usually includes information for all necessary Autoware repositories (except calibration and simulator repositories). Therefore, your forked repositories should also be added to this file.

    "},{"location":"how-to-guides/integrating-autoware/creating-your-autoware-repositories/creating-autoware-repositories/#21-adding-individual-repos-to-autowarerepos","title":"2.1 Adding individual repos to autoware.repos","text":"

After forking all repositories, you can start adding them to your Autoware meta-repository by opening the autoware.repos file in any text editor and updating sample_sensor_kit_launch, sample_vehicle_launch, autoware_individual_params and autoware_launch with your own individual repos. For example, in this tutorial, the necessary changes for our forked tutorial_vehicle repositories should be as follows:

    • Sensor Kit:

      - sensor_kit/sample_sensor_kit_launch:\n-   type: git\n-   url: https://github.com/autowarefoundation/sample_sensor_kit_launch.git\n-   version: main\n+ sensor_kit/tutorial_vehicle_sensor_kit_launch:\n+   type: git\n+   url: https://github.com/leo-drive/tutorial_vehicle_sensor_kit_launch.git\n+   version: main\n
    • Vehicle Launch:

      - vehicle/sample_vehicle_launch:\n-   type: git\n-   url: https://github.com/autowarefoundation/sample_vehicle_launch.git\n-   version: main\n+ vehicle/tutorial_vehicle_launch:\n+   type: git\n+   url: https://github.com/leo-drive/tutorial_vehicle_launch.git\n+   version: main\n
    • Individual Params:

      - param/autoware_individual_params:\n-   type: git\n-   url: https://github.com/autowarefoundation/autoware_individual_params.git\n-   version: main\n+ param/tutorial_vehicle_individual_params:\n+   type: git\n+   url: https://github.com/leo-drive/tutorial_vehicle_individual_params.git\n+   version: main\n
    • Autoware Launch:

      - launcher/autoware_launch:\n-   type: git\n-   url: https://github.com/autowarefoundation/autoware_launch.git\n-   version: main\n+ launcher/autoware_launch.tutorial_vehicle:\n+   type: git\n+   url: https://github.com/leo-drive/autoware_launch.tutorial_vehicle.git\n+   version: main\n

    Please make similar changes to your own autoware.repos file. After making these changes, you will be ready to use VCS to import all the necessary repositories into your Autoware workspace.

    First, create a src directory under your own Autoware meta-repository directory:

    cd <YOUR-AUTOWARE-DIR>\nmkdir src\n

    Then, import all necessary repositories with vcs:

    cd <YOUR-AUTOWARE-DIR>\nvcs import src < autoware.repos\n

After running the vcs import command, all Autoware repositories will be cloned into the src folder under the Autoware directory.

    Now, you can build your own repository with colcon build command:

    cd <YOUR-AUTOWARE-DIR>\ncolcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release\n

    Please refer to the following documentation links for instructions on how to create and customize each of your vehicle's packages:

    • Creating vehicle and sensor models
      • Creating sensor model
      • Creating individual params
      • Creating vehicle model
    • creating-vehicle-interface-package
    • customizing-for-differential-drive-model

    Please remember to add all your custom packages, such as interfaces and descriptions, to your autoware.repos to ensure that your packages are properly included and managed within the Autoware repository.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/","title":"Launch Autoware","text":""},{"location":"how-to-guides/integrating-autoware/launch-autoware/#launch-autoware","title":"Launch Autoware","text":"

This section explains how to run your vehicle with Autoware. We will explain how to run and launch Autoware with these modules:

    • Vehicle
    • System
    • Map
    • Sensing
    • Localization
    • Perception
    • Planning
    • Control
    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/#pre-requirements-of-launching-autoware-with-real-vehicle","title":"Pre-requirements of launching Autoware with real vehicle","text":"

Please complete these steps to integrate Autoware with your vehicle:

    • Create your Autoware meta-repository.
    • Create your vehicle and sensor model.
    • Calibrate your sensors.
    • Create your Autoware compatible vehicle interface.
    • Create your environment map.

    After the completion of these steps according to your individual vehicle, you are ready to use Autoware.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/#autoware_launch-package","title":"autoware_launch package","text":"

The autoware_launch package initiates the launch files of the Autoware software stack. The autoware.launch.xml launch file controls which module launches are invoked through the following launch arguments:

      <arg name=\"launch_vehicle\" default=\"true\" description=\"launch vehicle\"/>\n<arg name=\"launch_system\" default=\"true\" description=\"launch system\"/>\n<arg name=\"launch_map\" default=\"true\" description=\"launch map\"/>\n<arg name=\"launch_sensing\" default=\"true\" description=\"launch sensing\"/>\n<arg name=\"launch_sensing_driver\" default=\"true\" description=\"launch sensing driver\"/>\n<arg name=\"launch_localization\" default=\"true\" description=\"launch localization\"/>\n<arg name=\"launch_perception\" default=\"true\" description=\"launch perception\"/>\n<arg name=\"launch_planning\" default=\"true\" description=\"launch planning\"/>\n<arg name=\"launch_control\" default=\"true\" description=\"launch control\"/>\n

    For example, if you don't need to launch perception, planning, and control for localization debug, you can disable these modules like the following:

    -  <arg name=\"launch_perception\" default=\"true\" description=\"launch perception\"/>\n+  <arg name=\"launch_perception\" default=\"false\" description=\"launch perception\"/>\n-  <arg name=\"launch_planning\" default=\"true\" description=\"launch planning\"/>\n+  <arg name=\"launch_planning\" default=\"false\" description=\"launch planning\"/>\n-  <arg name=\"launch_control\" default=\"true\" description=\"launch control\"/>\n+  <arg name=\"launch_control\" default=\"false\" description=\"launch control\"/>\n

    Also, it is possible to specify which components to launch using command-line arguments.

ros2 launch autoware_launch autoware.launch.xml vehicle_model:=YOUR_VEHICLE sensor_model:=YOUR_SENSOR_KIT map_path:=/PATH/TO/YOUR/MAP \\\nlaunch_perception:=false \\\nlaunch_planning:=false \\\nlaunch_control:=false\n

Also, the autoware_launch package includes the Autoware module parameter files under its config directory.

    <YOUR-OWN-AUTOWARE-DIR>/\n  \u2514\u2500 src/\n       \u2514\u2500 launcher/\n            \u2514\u2500 autoware_launch/\n                 \u251c\u2500 config/\n                 \u251c\u2500     \u251c\u2500 control/\n                 \u251c\u2500     \u251c\u2500 localization/\n                 \u251c\u2500     \u251c\u2500 map/\n                 \u251c\u2500     \u251c\u2500 perception/\n                 \u251c\u2500     \u251c\u2500 planning/\n                 \u251c\u2500     \u251c\u2500 simulator/\n                 \u251c\u2500     \u2514\u2500 system/\n                 \u251c\u2500launch/\n                 \u2514\u2500 rviz/\n

So, if we change any parameter in the config directory, it overrides the original parameter values, since the autoware_launch parameter file paths are used for parameter loading.

    autoware_launch package launching and parameter migrating diagram"},{"location":"how-to-guides/integrating-autoware/launch-autoware/#configure-autowarelaunchxml","title":"Configure autoware.launch.xml","text":"

    As we mentioned above, we can enable or disable Autoware modules to launch by modifying autoware.launch.xml or using command-line arguments. Also, we have some arguments for specifying our Autoware configurations. Here are some basic configurations for the autoware.launch.xml launch file: (Additionally, you can use them as command-line arguments, as we mentioned before)

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/#vehicle","title":"Vehicle","text":"

    In the Vehicle section, you can choose whether the vehicle interface will be launched or not. For example, if you disable it, then vehicle_interface.launch.xml will not be called:

    - <arg name=\"launch_vehicle_interface\" default=\"true\" description=\"launch vehicle interface\"/>\n+ <arg name=\"launch_vehicle_interface\" default=\"false\" description=\"launch vehicle interface\"/>\n

Please make sure your vehicle interface driver is included in vehicle_interface.launch.xml; for more information, refer to the creating vehicle interface page.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/#map","title":"Map","text":"

    If your point cloud and lanelet2 map names are different from pointcloud_map.pcd and lanelet2_map.osm, you will need to update these map file name arguments:

    - <arg name=\"lanelet2_map_file\" default=\"lanelet2_map.osm\" description=\"lanelet2 map file name\"/>\n+ <arg name=\"lanelet2_map_file\" default=\"<YOUR-LANELET2-MAP-FILE-NAME>\" description=\"lanelet2 map file name\"/>\n- <arg name=\"pointcloud_map_file\" default=\"pointcloud_map.pcd\" description=\"pointcloud map file name\"/>\n+ <arg name=\"pointcloud_map_file\" default=\"<YOUR-POINTCLOUD-MAP-FILE-NAME>\" description=\"pointcloud map file name\"/>\n
    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/#perception","title":"Perception","text":"

You can define your autoware_data path here. Autoware loads the model files for yabloc_pose_initializer, image_projection_based_fusion, lidar_apollo_instance_segmentation, etc. from the autoware_data path. If you use Ansible for the Autoware installation, the necessary artifacts are downloaded to the autoware_data folder in your $HOME directory. If you want to download the artifacts manually, please check the ansible artifacts page for more information.

    - <arg name=\"data_path\" default=\"$(env HOME)/autoware_data\" description=\"packages data and artifacts directory path\"/>\n+ <arg name=\"data_path\" default=\"<YOUR-AUTOWARE-DATA-PATH>\" description=\"packages data and artifacts directory path\"/>\n

Also, you can change your perception method here. Autoware provides camera-lidar-radar fusion, camera-lidar fusion, lidar-radar fusion, lidar-only and radar-only perception modes. The default perception method is the lidar-only mode, but if you want to use camera-lidar fusion, you need to change your perception mode:

    -  <arg name=\"perception_mode\" default=\"lidar\" description=\"select perception mode. camera_lidar_radar_fusion, camera_lidar_fusion, lidar_radar_fusion, lidar, radar\"/>\n+  <arg name=\"perception_mode\" default=\"camera_lidar_fusion\" description=\"select perception mode. camera_lidar_radar_fusion, camera_lidar_fusion, lidar_radar_fusion, lidar, radar\"/>\n

    If you want to use traffic light recognition and visualization, you can set traffic_light_recognition/enable_fine_detection as true (default). Please check traffic_light_fine_detector page for more information. If you don't want to use traffic light classifier, then you can disable it:

    - <arg name=\"traffic_light_recognition/enable_fine_detection\" default=\"true\" description=\"enable traffic light fine detection\"/>\n+ <arg name=\"traffic_light_recognition/enable_fine_detection\" default=\"false\" description=\"enable traffic light fine detection\"/>\n

    Please look at Launch perception page for detailed information.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/#launch-autoware_1","title":"Launch Autoware","text":"

    Launch Autoware with the following command:

ros2 launch autoware_launch autoware.launch.xml map_path:=<YOUR-MAP-PATH> vehicle_model:=<YOUR-VEHICLE-MODEL> sensor_model:=<YOUR-SENSOR-MODEL> vehicle_id:=<YOUR-VEHICLE-ID>\n

    It is possible to specify which components to launch using command-line arguments. For example, if you don't need to launch perception, planning, and control for localization debug, you can launch the following:

ros2 launch autoware_launch autoware.launch.xml map_path:=<YOUR-MAP-PATH> vehicle_model:=<YOUR-VEHICLE-MODEL> sensor_model:=<YOUR-SENSOR-MODEL> vehicle_id:=<YOUR-VEHICLE-ID> \\\nlaunch_perception:=false \\\nlaunch_planning:=false \\\nlaunch_control:=false\n

After launching Autoware, we need to initialize the vehicle on the map. If you set gnss_poser for your GNSS/INS sensor in gnss.launch.xml, then gnss_poser will publish a pose for initialization. If you don't have a GNSS sensor, then you need to set the initial pose manually.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/#set-initial-pose","title":"Set initial pose","text":"

If automatic initialization is not available, or if it returns an incorrect position, you need to set the initial pose using the RViz GUI.

    • Click the 2D Pose estimate button in the toolbar, or hit the P key

    2D Pose estimate with RViz
    • In the 3D View panel, click and hold the left mouse button, and then drag to set the direction for the initial pose.

    Setting 2D Pose estimate with RViz
    • After that, the vehicle will be initialized. You will then observe both your vehicle and Autoware outputs.

    Initialization of the vehicle"},{"location":"how-to-guides/integrating-autoware/launch-autoware/#set-goal-pose","title":"Set goal pose","text":"

    Set a goal pose for the ego vehicle.

    • Click the 2D Nav Goal button in the toolbar, or hit the G key

    Initialization of the vehicle
    • In the 3D View pane, click and hold the left mouse button, and then drag to set the direction for the goal pose.

    Initialization of the vehicle
    • If successful, you will see the calculated planning path on RViz.

    Planned path on RViz"},{"location":"how-to-guides/integrating-autoware/launch-autoware/#engage","title":"Engage","text":"

    There are two options for engage:

• Firstly, you can use the AutowareStatePanel. It is included in the Autoware RViz configuration file, but it can also be added manually from Panels > Add New Panel > tier4_state_rviz_plugin > AutowareStatePanel.

    Once the route is computed, the \"AUTO\" button becomes active. Pressing the AUTO button engages the autonomous driving mode.

    Autoware state panel (STOP operation mode)
• Secondly, you can engage via a ROS 2 topic. In your terminal, execute the following commands.
source ~/<YOUR-AUTOWARE-DIR>/install/setup.bash\nros2 topic pub /autoware/engage autoware_auto_vehicle_msgs/msg/Engage \"engage: true\" -1\n

    Now the vehicle should drive along the calculated path!

    During the autonomous driving, the StatePanel appears as shown in the image below. Pressing the \"STOP\" button allows you to stop the vehicle.

    Autoware state panel (AUTO operation mode)"},{"location":"how-to-guides/integrating-autoware/launch-autoware/control/","title":"Control Launch Files","text":""},{"location":"how-to-guides/integrating-autoware/launch-autoware/control/#control-launch-files","title":"Control Launch Files","text":""},{"location":"how-to-guides/integrating-autoware/launch-autoware/control/#overview","title":"Overview","text":"

The Autoware control stack starts launching from autoware.launch.xml, as mentioned on the Launch Autoware page. The autoware_launch package includes tier4_control_component.launch.xml, which invokes the control launch files from autoware.launch.xml. The diagram below illustrates the flow of the Autoware control launch files within the autoware_launch and autoware.universe packages.

    Autoware control launch flow diagram

    Note

    The Autoware project is a large project. Therefore, as we manage the Autoware project, we utilize specific arguments in the launch files. ROS 2 offers an argument-overriding feature for these launch files. Please refer to the official ROS 2 launch documentation for further information. For instance, if we define an argument at the top-level launch, it will override the value on lower-level launches.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/control/#tier4_control_componentlaunchxml","title":"tier4_control_component.launch.xml","text":"

    The tier4_control_component.launch.xml launch file is the main control component launch in the autoware_launch package. This launch file calls control.launch.xml from the tier4_control_launch package within the autoware.universe repository. We can modify control launch arguments in tier4_control_component.launch.xml. Additionally, we can add any other necessary arguments that need adjustment since tier4_control_component.launch.xml serves as the top-level launch file for other control launch files. Here are some predefined control launch arguments:

    • lateral_controller_mode: This argument determines the lateral controller algorithm. The default value is mpc. To change it to pure pursuit, make the following update in your tier4_control_component.launch.xml file:

      - <arg name=\"lateral_controller_mode\" default=\"mpc\"/>\n+ <arg name=\"lateral_controller_mode\" default=\"pure_pursuit\"/>\n
    • enable_autonomous_emergency_braking: This argument enables autonomous emergency braking under specific conditions. Please refer to the Autonomous emergency braking (AEB) page for more information. To enable it, update the value in the tier4_control_component.launch.xml file:

      - <arg name=\"enable_autonomous_emergency_braking\" default=\"false\"/>\n+ <arg name=\"enable_autonomous_emergency_braking\" default=\"true\"/>\n
    • enable_predicted_path_checker: This argument enables the predicted path checker module. Please refer to the Predicted Path Checker page for more information. To enable it, update the value in the tier4_control_component.launch.xml file:

      - <arg name=\"enable_predicted_path_checker\" default=\"false\"/>\n+ <arg name=\"enable_predicted_path_checker\" default=\"true\"/>\n

    Note

You can also pass these arguments as command-line arguments:

    ros2 launch autoware_launch autoware.launch.xml ... enable_predicted_path_checker:=true lateral_controller_mode:=pure_pursuit ...\n

    The predefined arguments in tier4_control_component.launch.xml have been explained above. However, numerous control arguments are included in the autoware_launch control config parameters.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/","title":"Localization Launch Files","text":""},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/#localization-launch-files","title":"Localization Launch Files","text":""},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/#overview","title":"Overview","text":"

The Autoware localization stack starts launching from autoware.launch.xml, as mentioned on the Launch Autoware page. The autoware_launch package includes tier4_localization_component.launch.xml, which invokes the localization launch files from autoware.launch.xml. The diagram below describes the flow of the Autoware localization launch files across the autoware_launch and autoware.universe packages.

    Autoware localization launch flow diagram

    Note

    The Autoware project is a large project. Therefore, as we manage the Autoware project, we utilize specific arguments in the launch files. ROS 2 offers an argument-overriding feature for these launch files. Please refer to the official ROS 2 launch documentation for further information. For instance, if we define an argument at the top-level launch, it will override the value on lower-level launches.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/#tier4_localization_componentlaunchxml","title":"tier4_localization_component.launch.xml","text":"

The tier4_localization_component.launch.xml launch file is the main localization component launch file in the autoware_launch package. It calls localization.launch.xml from the tier4_localization_launch package in the autoware.universe repository. We can modify the localization launch arguments in tier4_localization_component.launch.xml.

    The current localization launcher implemented by TIER IV supports multiple localization methods, both pose estimators and twist estimators. tier4_localization_component.launch.xml has two arguments to select which estimators to launch:

    • pose_source: This argument specifies the pose_estimator, currently supporting ndt (default), yabloc, artag, and eagleye for localization. By default, Autoware launches ndt_scan_matcher as the pose estimator. You can use YabLoc as a camera-based localization method. For more details on YabLoc, please refer to the README of YabLoc in autoware.universe. Also, you can use Eagleye as a GNSS & IMU & wheel odometry-based localization method. For more details on Eagleye, please refer to the Eagleye documentation.

      You can set the pose_source argument in tier4_localization_component.launch.xml. For example, if you want to use eagleye as the pose_source, update tier4_localization_component.launch.xml as follows:

      - <arg name=\"pose_source\" default=\"ndt\" description=\"select pose_estimator: ndt, yabloc, eagleye\"/>\n+ <arg name=\"pose_source\" default=\"eagleye\" description=\"select pose_estimator: ndt, yabloc, eagleye\"/>\n

      Also, you can use the command line to override launch arguments:

      ros2 launch autoware_launch autoware.launch.xml ... pose_source:=eagleye\n
    • twist_source: This argument specifies the twist_estimator, currently supporting gyro_odom (default) and eagleye. By default, Autoware launches gyro_odometer as the twist estimator. You can also use eagleye as the twist source; please refer to the Eagleye documentation. If you want to change your twist source to eagleye, update tier4_localization_component.launch.xml as follows:

      - <arg name=\"twist_source\" default=\"gyro_odom\" description=\"select twist_estimator. gyro_odom, eagleye\"/>\n+ <arg name=\"twist_source\" default=\"eagleye\" description=\"select twist_estimator. gyro_odom, eagleye\"/>\n

      Or you can use the command line to override launch arguments:

      ros2 launch autoware_launch autoware.launch.xml ... twist_source:=eagleye\n
    • input_pointcloud: This argument specifies the input pointcloud of the localization pointcloud pipeline. The default value is /sensing/lidar/top/outlier_filtered/pointcloud, which is the output of the pointcloud pre-processing pipeline in the sensing stack. You can change this value according to your LiDAR topic name, or you can choose to use the concatenated point cloud:

      - <arg name=\"input_pointcloud\" default=\"/sensing/lidar/top/outlier_filtered/pointcloud\" description=\"The topic will be used in the localization util module\"/>\n+ <arg name=\"input_pointcloud\" default=\"/sensing/lidar/concatenated/pointcloud\"/>\n

    You can add any necessary argument to the tier4_localization_component.launch.xml launch file as in these examples. For instance, if you want to change your gyro odometer twist input topic, you can add this argument to the tier4_localization_component.launch.xml launch file:

    + <arg name=\"input_vehicle_twist_with_covariance_topic\" value=\"<YOUR-VEHICLE-TWIST-TOPIC-NAME>\"/>\n

    Note: The gyro odometer input topic is provided by the velocity converter package. This package is launched within the sensor_kit. For more information, please check the velocity converter package.
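
    As a quick sanity check (this assumes the default converter output topic name used in the example configurations later in this guide; adjust it if your sensor kit remaps it), you can confirm that the converter output is being published with the expected message type:

    ros2 topic info /sensing/vehicle_velocity_converter/twist_with_covariance\n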

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/#note-when-using-non-ndt-pose-estimator","title":"Note when using non NDT pose estimator","text":"

    Note

    Since NDT is currently the most commonly used pose_estimator, the NDT diagnostics are registered for monitoring by default. When using a pose_estimator other than NDT, the NDT diagnostics will always be marked as stale, causing the system to enter a safe_fault state. Depending on the emergency-handling parameters, this could escalate to a full emergency, preventing autonomous driving.

    To work around this, please modify the configuration file of the system_error_monitor. In the system_error_monitor.param.yaml file, /autoware/localization/performance_monitoring/matching_score represents the aggregated diagnostics for NDT. To prevent emergencies even when NDT is not launched, remove this entry from the configuration. Note that the module name /performance_monitoring/matching_score is specified in diagnostics_aggregator/localization.param.yaml.
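
    As a rough illustration only (the exact nesting and the value fields of this entry in system_error_monitor.param.yaml depend on your Autoware version, so the value part is omitted here), the removal looks like this in diff form:

    - /autoware/localization/performance_monitoring/matching_score: ...\n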

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/eagleye/","title":"Using Eagleye with Autoware","text":""},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/eagleye/#using-eagleye-with-autoware","title":"Using Eagleye with Autoware","text":"

    This page will show you how to set up Eagleye in order to use it with Autoware. For the details of the integration proposal, please refer to this discussion.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/eagleye/#what-is-eagleye","title":"What is Eagleye?","text":"

    Eagleye is an open-source GNSS/IMU-based localizer initially developed by MAP IV, Inc. It provides a cost-effective alternative to LiDAR and point cloud-based localization by using low-cost GNSS and IMU sensors to provide vehicle position, orientation, and altitude information.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/eagleye/#dependencies","title":"Dependencies","text":"

    The following packages are automatically installed during the setup of Autoware as they are listed in autoware.repos.

    1. Eagleye (autoware-main branch)
    2. RTKLIB ROS Bridge (ros2-v0.1.0 branch)
    3. LLH Converter (ros2 branch)
    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/eagleye/#architecture","title":"Architecture","text":"

    Eagleye can be utilized in the Autoware localization stack in two ways:

    1. Feed only twist into the EKF localizer.

    2. Feed both twist and pose from Eagleye into the EKF localizer (twist can also be used with regular gyro_odometry).

    Note: RTK positioning is required when using Eagleye as the pose estimator. On the other hand, it is not mandatory when using it as the twist estimator.
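
    As a hedged illustration of how these two integration styles map onto the pose_source and twist_source launch arguments described in the localization launch section (your defaults and extra arguments may differ), the corresponding command lines look roughly like this:

    ros2 launch autoware_launch autoware.launch.xml ... pose_source:=ndt twist_source:=eagleye ...\nros2 launch autoware_launch autoware.launch.xml ... pose_source:=eagleye twist_source:=eagleye ...\n

    The first command uses Eagleye only as the twist estimator, while the second uses it for both pose and twist.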

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/eagleye/#requirements","title":"Requirements","text":"

    Eagleye requires GNSS, IMU and vehicle speed as inputs.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/eagleye/#imu-topic","title":"IMU topic","text":"

    sensor_msgs/msg/Imu is supported for Eagleye IMU input.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/eagleye/#vehicle-speed-topic","title":"Vehicle speed topic","text":"

    geometry_msgs/msg/TwistStamped and geometry_msgs/msg/TwistWithCovarianceStamped are supported for the input vehicle speed.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/eagleye/#gnss-topic","title":"GNSS topic","text":"

    Eagleye requires latitude, longitude, height, and Doppler velocity generated by the GNSS receiver. Your GNSS ROS driver must publish the following messages:

    • sensor_msgs/msg/NavSatFix: This message contains latitude, longitude, and height information.
    • geometry_msgs/msg/TwistWithCovarianceStamped: This message contains GNSS Doppler velocity information.

      Eagleye has been tested with the following example GNSS ROS drivers: ublox_gps and septentrio_gnss_driver. The settings needed for each of these drivers are as follows:

    • ublox_gps: No additional settings are required. It publishes sensor_msgs/msg/NavSatFix and geometry_msgs/msg/TwistWithCovarianceStamped required by Eagleye with default settings.
    • septentrio_gnss_driver: Set publish.navsatfix and publish.twist to true in the config file gnss.yaml."},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/eagleye/#parameter-modifications-for-integration-into-your-vehicle","title":"Parameter Modifications for Integration into Your Vehicle","text":""},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/eagleye/#topic-name-topic-type","title":"topic name & topic type","text":"

    The users must correctly specify the input topics for GNSS latitude, longitude, and height, GNSS Doppler speed, IMU, and vehicle speed in eagleye_config.yaml.

    # Topic\ntwist:\ntwist_type: 1 # TwistStamped : 0, TwistWithCovarianceStamped: 1\ntwist_topic: /sensing/vehicle_velocity_converter/twist_with_covariance\nimu_topic: /sensing/imu/tamagawa/imu_raw\ngnss:\nvelocity_source_type: 2 # rtklib_msgs/RtklibNav: 0, nmea_msgs/Sentence: 1, ublox_msgs/NavPVT: 2, geometry_msgs/TwistWithCovarianceStamped: 3\nvelocity_source_topic: /sensing/gnss/ublox/navpvt\nllh_source_type: 2 # rtklib_msgs/RtklibNav: 0, nmea_msgs/Sentence: 1, sensor_msgs/NavSatFix: 2\nllh_source_topic: /sensing/gnss/ublox/nav_sat_fix\n
    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/eagleye/#sensor-frequency","title":"sensor frequency","text":"

    Also, the frequencies of the GNSS and IMU must be set in eagleye_config.yaml:

    common:\nimu_rate: 50\ngnss_rate: 5\n
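
    If you are unsure of the actual sensor rates, you can measure them on the running system before filling in these values (the topic names below follow the example configuration above):

    ros2 topic hz /sensing/imu/tamagawa/imu_raw\nros2 topic hz /sensing/gnss/ublox/nav_sat_fix\n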
    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/eagleye/#conversion-from-fix-to-pose","title":"Conversion from fix to pose","text":"

    The parameters for converting sensor_msgs/msg/NavSatFix to geometry_msgs/msg/PoseWithCovarianceStamped are listed in geo_pose_converter.launch.xml. If you use a different geoid or projection type, change these parameters.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/eagleye/#other-parameters","title":"Other parameters","text":"

    The other parameters are described here. Basically, these do not need to be changed.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/eagleye/#notes-on-initialization","title":"Notes on initialization","text":"

    Eagleye requires an initialization process for proper operation. Without initialization, the twist output will be a raw value and the pose data will not be available.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/eagleye/#1-static-initialization","title":"1. Static Initialization","text":"

    The first step is static initialization, which involves keeping the vehicle stationary for approximately 5 seconds after startup so that Eagleye can estimate the yaw-rate offset.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/eagleye/#2-dynamic-initialization","title":"2. Dynamic initialization","text":"

    The next step is dynamic initialization, which involves driving the vehicle in a straight line for approximately 30 seconds. This process estimates the wheel speed scale factor and the azimuth angle.

    Once dynamic initialization is complete, the Eagleye will be able to provide corrected twist and pose data.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/eagleye/#how-to-check-the-progress-of-initialization","title":"How to check the progress of initialization","text":"
    • TODO
    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/eagleye/#note-on-georeferenced-maps","title":"Note on georeferenced maps","text":"

    Note that the output position might not align with the point cloud map if you are using a map that is not properly georeferenced. In the case of a single GNSS antenna, initial position estimation (dynamic initialization) can take several seconds to complete after the vehicle starts moving in an environment where GNSS positioning is available.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/map/","title":"Map Launch Files","text":""},{"location":"how-to-guides/integrating-autoware/launch-autoware/map/#map-launch-files","title":"Map Launch Files","text":""},{"location":"how-to-guides/integrating-autoware/launch-autoware/map/#overview","title":"Overview","text":"

    The Autoware map stacks start launching at autoware_launch.xml as mentioned on the Launch Autoware page. The autoware_launch package includes tier4_map_component.launch.xml to invoke the map launch files from autoware_launch.xml. The diagram below illustrates the flow of Autoware map launch files within the autoware_launch and autoware.universe packages.

    Autoware map launch flow diagram

    Note

    The Autoware project is a large project. Therefore, as we manage the Autoware project, we utilize specific arguments in the launch files. ROS 2 offers an argument-overriding feature for these launch files. Please refer to the official ROS 2 launch documentation for further information. For instance, if we define an argument at the top-level launch, it will override the value on lower-level launches.

    The map.launch.py launch file from the tier4_map_launch package directly includes the necessary node definitions for mapping. In the current design of Autoware, the lanelet2_map_loader, lanelet2_map_visualization, pointcloud_map_loader, and vector_map_tf_generator composable nodes are included in the map_container.

    We don't have many modification options in the map launch files (as the parameters are included in the config files). However, you can specify the names of your pointcloud and lanelet2 maps during the launch (the default values are pointcloud_map.pcd and lanelet2_map.osm). For instance, if you wish to change your map file names, you can run Autoware using the following command line arguments:

    ros2 launch autoware_launch autoware.launch.xml ... pointcloud_map_file:=<YOUR-PCD-FILE-NAME> lanelet2_map_file:=<YOUR-LANELET2-MAP-NAME> ...\n

    Or you can change it in your autoware.launch.xml launch file:

    - <arg name=\"lanelet2_map_file\" default=\"lanelet2_map.osm\" description=\"lanelet2 map file name\"/>\n+ <arg name=\"lanelet2_map_file\" default=\"<YOUR-LANELET2-MAP-NAME>\" description=\"lanelet2 map file name\"/>\n- <arg name=\"pointcloud_map_file\" default=\"pointcloud_map.pcd\" description=\"pointcloud map file name\"/>\n+ <arg name=\"pointcloud_map_file\" default=\"<YOUR-PCD-FILE-NAME>\" description=\"pointcloud map file name\"/>\n
    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/perception/","title":"Perception Launch Files","text":""},{"location":"how-to-guides/integrating-autoware/launch-autoware/perception/#perception-launch-files","title":"Perception Launch Files","text":""},{"location":"how-to-guides/integrating-autoware/launch-autoware/perception/#overview","title":"Overview","text":"

    The Autoware perception stacks start launching at autoware_launch.xml as mentioned on the Launch Autoware page. The autoware_launch package includes tier4_perception_component.launch.xml to invoke the perception launch files from autoware_launch.xml. The diagram below illustrates the flow of Autoware perception launch files within the autoware_launch and autoware.universe packages.

    Autoware perception launch flow diagram

    Note

    The Autoware project is a large project. Therefore, as we manage the Autoware project, we utilize specific arguments in the launch files. ROS 2 offers an argument-overriding feature for these launch files. Please refer to the official ROS 2 launch documentation for further information. For instance, if we define an argument at the top-level launch, it will override the value on lower-level launches.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/perception/#tier4_perception_componentlaunchxml","title":"tier4_perception_component.launch.xml","text":"

    The tier4_perception_component.launch.xml launch file is the main perception component launch file in the autoware_launch package. This launch file calls perception.launch.xml from the tier4_perception_launch package within the autoware.universe repository. We can modify perception launch arguments in tier4_perception_component.launch.xml. Additionally, we can add any other necessary arguments that need adjustment since tier4_perception_component.launch.xml serves as the top-level launch file for other perception launch files. Here are some predefined perception launch arguments:

    • occupancy_grid_map_method: This argument determines the occupancy grid map method for perception stack. Please check probabilistic_occupancy_grid_map package for detailed information. The default probabilistic occupancy grid map method is pointcloud_based_occupancy_grid_map. If you want to change it to the laserscan_based_occupancy_grid_map, you can change it here:

      - <arg name=\"occupancy_grid_map_method\" default=\"pointcloud_based_occupancy_grid_map\" description=\"options: pointcloud_based_occupancy_grid_map, laserscan_based_occupancy_grid_map\"/>\n+ <arg name=\"occupancy_grid_map_method\" default=\"laserscan_based_occupancy_grid_map\" description=\"options: pointcloud_based_occupancy_grid_map, laserscan_based_occupancy_grid_map\"/>\n
    • detected_objects_filter_method: This argument determines the filter method for detected objects. Please check detected_object_validation package for detailed information about lanelet and position filter. The default detected object filter method is lanelet_filter. If you want to change it to the position_filter, you can change it here:

      - <arg name=\"detected_objects_filter_method\" default=\"lanelet_filter\" description=\"options: lanelet_filter, position_filter\"/>\n+ <arg name=\"detected_objects_filter_method\" default=\"position_filter\" description=\"options: lanelet_filter, position_filter\"/>\n
    • detected_objects_validation_method: This argument determines the validation method for detected objects. Please check the detected_object_validation package for detailed information about validation methods. The default detected object validation method is obstacle_pointcloud. If you want to change it to occupancy_grid, you can change it here, but remember that it requires laserscan_based_occupancy_grid_map as the occupancy_grid_map_method:

      - <arg name=\"occupancy_grid_map_method\" default=\"pointcloud_based_occupancy_grid_map\" description=\"options: pointcloud_based_occupancy_grid_map, laserscan_based_occupancy_grid_map\"/>\n+ <arg name=\"occupancy_grid_map_method\" default=\"laserscan_based_occupancy_grid_map\" description=\"options: pointcloud_based_occupancy_grid_map, laserscan_based_occupancy_grid_map\"/>\n <arg\n  name=\"detected_objects_validation_method\"\n- default=\"obstacle_pointcloud\"\n+ default=\"occupancy_grid\"\n description=\"options: obstacle_pointcloud, occupancy_grid (occupancy_grid_map_method must be laserscan_based_occupancy_grid_map)\"\n  />\n

    Note

    You can also use these arguments as command-line arguments:

    ros2 launch autoware_launch autoware.launch.xml ... detected_objects_filter_method:=lanelet_filter occupancy_grid_map_method:=laserscan_based_occupancy_grid_map ...\n

    The predefined arguments in tier4_perception_component.launch.xml have been explained above, but there are many more perception arguments included in the perception.launch.xml launch file of tier4_perception_launch. Since we do not fork the autoware.universe repository, we add any necessary launch arguments to the tier4_perception_component.launch.xml file. The following section provides some examples.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/perception/#perceptionlaunchxml","title":"perception.launch.xml","text":"

    The perception.launch.xml launch file is the main perception launch file in autoware.universe. This launch file calls the necessary perception launch files, as shown in the Autoware perception launch flow diagram above. The top-level launch file of perception.launch.xml is tier4_perception_component.launch.xml, so if we want to change anything in perception.launch.xml, we apply these changes to tier4_perception_component.launch.xml instead of perception.launch.xml.

    Here are some example changes for the perception pipeline:

    • remove_unknown: This parameter determines whether unknown objects are removed during camera-lidar fusion. Please check the roi_cluster_fusion node for detailed information. The default value is true. If you want to change it to false, you can add this argument to tier4_perception_component.launch.xml so that it overrides the argument in perception.launch.xml:

      + <arg name=\"remove_unknown\" default=\"false\"/>\n
    • camera topics: If you are using camera-lidar fusion or camera-lidar-radar fusion as the perception_mode, you can add your camera image and camera info topics to tier4_perception_component.launch.xml as well; they will override the perception.launch.xml launch file arguments:

      + <arg name=\"image_raw0\" default=\"/sensing/camera/camera0/image_rect_color\" description=\"image raw topic name\"/>\n+ <arg name=\"camera_info0\" default=\"/sensing/camera/camera0/camera_info\" description=\"camera info topic name\"/>\n+ <arg name=\"detection_rois0\" default=\"/perception/object_recognition/detection/rois0\" description=\"detection rois output topic name\"/>\n...\n

    You can add any necessary argument to the tier4_perception_component.launch.xml launch file as in these examples.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/planning/","title":"Planning Launch Files","text":""},{"location":"how-to-guides/integrating-autoware/launch-autoware/planning/#planning-launch-files","title":"Planning Launch Files","text":""},{"location":"how-to-guides/integrating-autoware/launch-autoware/planning/#overview","title":"Overview","text":"

    The Autoware planning stacks start launching at autoware_launch.xml as mentioned on the Launch Autoware page. The autoware_launch package includes tier4_planning_component.launch.xml for initiating planning launch files invocation from autoware_launch.xml. The diagram below illustrates the flow of Autoware planning launch files within the autoware_launch and autoware.universe packages.

    Autoware planning launch flow diagram

    Note

    The Autoware project is a large project. Therefore, as we manage the Autoware project, we utilize specific arguments in the launch files. ROS 2 offers an argument-overriding feature for these launch files. Please refer to the official ROS 2 launch documentation for further information. For instance, if we define an argument at the top-level launch, it will override the value on lower-level launches.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/planning/#tier4_planning_componentlaunchxml","title":"tier4_planning_component.launch.xml","text":"

    The tier4_planning_component.launch.xml launch file is the main planning component launch file in the autoware_launch package. This launch file calls planning.launch.xml from the tier4_planning_launch package within the autoware.universe repository. We can modify planning launch arguments in tier4_planning_component.launch.xml. Additionally, we can add any other necessary arguments that need adjustment since tier4_planning_component.launch.xml serves as the top-level launch file for other planning launch files. Here are some predefined planning launch arguments:

    • use_experimental_lane_change_function: This argument enables the enable_collision_check_at_prepare_phase, use_predicted_path_outside_lanelet, and use_all_predicted_path options for experimental lane changing (for more information, please refer to the lane_change documentation). The default value is true. To set it to false, make the following change in the tier4_planning_component.launch.xml file:

      - <arg name=\"use_experimental_lane_change_function\" default=\"true\"/>\n+ <arg name=\"use_experimental_lane_change_function\" default=\"false\"/>\n
    • cruise_planner_type: There are two types of cruise planners in Autoware: obstacle_stop_planner and obstacle_cruise_planner. For specifications on these cruise planner types, please refer to the package documentation. The default cruise planner is obstacle_stop_planner. To change it to obstacle_cruise_planner, update the argument value in the tier4_planning_component.launch.xml file:

      - <arg name=\"cruise_planner_type\" default=\"obstacle_stop_planner\" description=\"options: obstacle_stop_planner, obstacle_cruise_planner, none\"/>\n+ <arg name=\"cruise_planner_type\" default=\"obstacle_cruise_planner\" description=\"options: obstacle_stop_planner, obstacle_cruise_planner, none\"/>\n
    • use_surround_obstacle_check: This argument enables the surround_obstacle_checker for Autoware. If you want to disable it, you can do so in the tier4_planning_component.launch.xml file:

      - <arg name=\"use_surround_obstacle_check\" default=\"true\"/>\n+ <arg name=\"use_surround_obstacle_check\" default=\"false\"/>\n
    • velocity_smoother_type: This argument specifies the type of smoother for the motion_velocity_smoother package. Please consult the documentation for detailed information about available smoother types. For instance, if you wish to change your smoother type from JerkFiltered to L2, you can do so in the tier4_planning_component.launch.xml file:

      - <arg name=\"velocity_smoother_type\" default=\"JerkFiltered\" description=\"options: JerkFiltered, L2, Analytical, Linf(Unstable)\"/>\n+ <arg name=\"velocity_smoother_type\" default=\"L2\" description=\"options: JerkFiltered, L2, Analytical, Linf(Unstable)\"/>\n

    Note

    You can also use these arguments as command-line arguments:

    ros2 launch autoware_launch autoware.launch.xml ... use_surround_obstacle_check:=false velocity_smoother_type:=L2 ...\n

    The predefined arguments in tier4_planning_component.launch.xml have been explained above. However, numerous planning arguments are included in the autoware_launch planning config parameters.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/sensing/","title":"Sensing Launch Files","text":""},{"location":"how-to-guides/integrating-autoware/launch-autoware/sensing/#sensing-launch-files","title":"Sensing Launch Files","text":""},{"location":"how-to-guides/integrating-autoware/launch-autoware/sensing/#overview","title":"Overview","text":"

    The Autoware sensing stacks start launching at autoware_launch.xml as mentioned on the Launch Autoware page. The autoware_launch package includes tier4_sensing_component.launch.xml to invoke the sensing launch files from autoware_launch.xml. The diagram below illustrates the flow of Autoware sensing launch files within the autoware_launch and autoware.universe packages.

    Autoware sensing launch flow diagram

    Note

    The Autoware project is a large project. Therefore, as we manage the Autoware project, we utilize specific arguments in the launch files. ROS 2 offers an argument-overriding feature for these launch files. Please refer to the official ROS 2 launch documentation for further information. For instance, if we define an argument at the top-level launch, it will override the value on lower-level launches.

    The sensing launch is closely tied to your sensor kit, so if you want to modify the sensing launch, we recommend applying these modifications in the sensor kit packages. Please look at the creating sensor and vehicle model pages for more information, but there are some modifications you can make in the top-level launch files.

    For example, if you do not want to launch the sensor driver with Autoware, you can disable it with a command-line argument:

    ros2 launch autoware_launch autoware.launch.xml ... launch_sensing_driver:=false ...\n

    Or you can change it in your autoware.launch.xml launch file:

    - <arg name=\"launch_sensing_driver\" default=\"true\" description=\"launch sensing driver\"/>\n+ <arg name=\"launch_sensing_driver\" default=\"false\" description=\"launch sensing driver\"/>\n
    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/system/","title":"System Launch Files","text":""},{"location":"how-to-guides/integrating-autoware/launch-autoware/system/#system-launch-files","title":"System Launch Files","text":""},{"location":"how-to-guides/integrating-autoware/launch-autoware/system/#overview","title":"Overview","text":"

    The Autoware system stacks start launching at autoware_launch.xml as mentioned on the Launch Autoware page. The autoware_launch package includes tier4_system_component.launch.xml to invoke the system launch files from autoware_launch.xml. The diagram below illustrates the flow of Autoware system launch files within the autoware_launch and autoware.universe packages.

    Autoware system launch flow diagram

    Note

    The Autoware project is a large project. Therefore, as we manage the Autoware project, we utilize specific arguments in the launch files. ROS 2 offers an argument-overriding feature for these launch files. Please refer to the official ROS 2 launch documentation for further information. For instance, if we define an argument at the top-level launch, it will override the value on lower-level launches.

    As described in the flow diagram of the system launch pipeline, the system.launch.xml from the tier4_system_launch package directly launches the following packages:

    • system_monitor
    • component_interface_tools
    • component_state_monitor
    • system_error_monitor
    • emergency_handler
    • duplicated_node_checker
    • mrm_comfortable_stop_operator
    • mrm_emergency_stop_operator
    • dummy_diag_publisher

    We don't have many modification options in the system launch files (as the parameters are included in config files), but you can modify the file path of the system_error_monitor parameter file. For instance, if you want to change the path to your system_error_monitor.param.yaml file, you can run Autoware with the following command line argument:

    ros2 launch autoware_launch autoware.launch.xml ... system_error_monitor_param_path:=<YOUR-SYSTEM-ERROR-PARAM-PATH> ...\n

    Or you can change it in your tier4_system_component.launch.xml launch file:

    - <arg name=\"system_error_monitor_param_path\" default=\"$(find-pkg-share autoware_launch)/config/...\"/>\n+ <arg name=\"system_error_monitor_param_path\" default=\"<YOUR-SYSTEM-ERROR-PARAM-PATH>\"/>\n
    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/vehicle/","title":"Vehicle Launch Files","text":""},{"location":"how-to-guides/integrating-autoware/launch-autoware/vehicle/#vehicle-launch-files","title":"Vehicle Launch Files","text":""},{"location":"how-to-guides/integrating-autoware/launch-autoware/vehicle/#overview","title":"Overview","text":"

    The Autoware vehicle stacks begin launching with autoware_launch.xml as mentioned on the Launch Autoware page, and the tier4_vehicle.launch.xml is called in this context. The following diagram describes some of the Autoware vehicle launch file flow within the autoware_launch and autoware.universe packages.

    Autoware vehicle launch flow diagram

    Note

    The Autoware project is a large project. Therefore, as we manage the Autoware project, we utilize specific arguments in the launch files. ROS 2 offers an argument-overriding feature for these launch files. Please refer to the official ROS 2 launch documentation for further information. For instance, if we define an argument at the top-level launch, it will override the value on lower-level launches.

    We don't have many modification options in the vehicle launching files (as the parameters are included in the vehicle_launch repository), but you can choose to disable the vehicle_interface launch. For instance, if you want to run robot_state_publisher but not vehicle_interface, you can launch Autoware with the following command line arguments:

    ros2 launch autoware_launch autoware.launch.xml ... launch_vehicle_interface:=false ...\n

    Or you can change it in your autoware.launch.xml launch file:

    - <arg name=\"launch_vehicle_interface\" default=\"true\" description=\"launch vehicle interface\"/>\n+ <arg name=\"launch_vehicle_interface\" default=\"false\" description=\"launch vehicle interface\"/>\n
    "},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-controller-performance/","title":"Evaluating the controller performance","text":""},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-controller-performance/#evaluating-the-controller-performance","title":"Evaluating the controller performance","text":"

    This page shows how to use control_performance_analysis package to evaluate the controllers.

    control_performance_analysis is the package to analyze the tracking performance of a control module and monitor the driving status of the vehicle.

    If you need more detailed information about the package, refer to the control_performance_analysis documentation.

    "},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-controller-performance/#how-to-use","title":"How to use","text":""},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-controller-performance/#before-driving","title":"Before Driving","text":""},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-controller-performance/#1-firstly-you-need-to-launch-autoware-you-can-also-use-this-tool-with-real-vehicle-driving","title":"1. Firstly you need to launch Autoware. You can also use this tool with real vehicle driving","text":""},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-controller-performance/#2-initialize-the-vehicle-and-send-goal-position-to-create-route","title":"2. Initialize the vehicle and send goal position to create route","text":"
    • If you have any problem with launching Autoware, please see the tutorials page.
    "},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-controller-performance/#3-launch-the-control_performance_analysis-package","title":"3. Launch the control_performance_analysis package","text":"
    ros2 launch control_performance_analysis control_performance_analysis.launch.xml\n
    • After this command, you should be able to see the driving monitor and error variables published as topics, as shown in the example below.
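
    For example, you can echo one of the published topics (topic names as listed in step 5 below) to confirm the package is running:

    ros2 topic echo /control_performance/driving_status\n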
    "},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-controller-performance/#4-run-the-plotjuggler-in-sourced-terminal","title":"4. Run the PlotJuggler in sourced terminal","text":"
    source ~/autoware/install/setup.bash\n
    ros2 run plotjuggler plotjuggler\n
    • If you do not have PlotJuggler on your computer, please refer here for the installation guidelines.
    "},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-controller-performance/#5-increase-the-buffer-size-maximum-is-100-and-import-the-layout-from-autowareuniversecontrolcontrol_performance_analysisconfigcontroller_monitorxml","title":"5. Increase the buffer size (maximum is 100), and import the layout from /autoware.universe/control/control_performance_analysis/config/controller_monitor.xml","text":"
    • After importing the layout, please specify the topics that are listed below.
    • /localization/kinematic_state
    • /vehicle/status/steering_status
    • /control_performance/driving_status
    • /control_performance/performance_vars
    • Please mark the If present, use the timestamp in the field [header.stamp] checkbox, then select OK.
    "},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-controller-performance/#6-now-you-can-start-to-driving-you-should-see-all-the-performance-and-driving-variables-in-plotjuggler","title":"6. Now, you can start to driving. You should see all the performance and driving variables in PlotJuggler","text":""},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-controller-performance/#after-driving","title":"After Driving","text":""},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-controller-performance/#1-you-can-export-the-statistical-output-and-all-data-to-compare-and-later-usage","title":"1. You can export the statistical output and all data to compare and later usage","text":"
    • With the statistical data, you can export all statistical values (min, max, average) to compare the controllers.
    • You can also export all data for later use. To investigate the data again, launch PlotJuggler and import the .csv file from the data section.
    "},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-controller-performance/#tips","title":"Tips","text":"
    • You can plot the vehicle position. Select the two curves (keeping the CTRL key pressed) and drag & drop them using the RIGHT mouse button. Please visit Help -> Cheatsheet in PlotJuggler for more tips.
    • If the curves in the plots are too noisy, you can adjust the odom_interval and low_pass_filter_gain from here to reduce the noise.
    "},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-real-time-performance/","title":"Evaluating real-time performance","text":""},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-real-time-performance/#evaluating-real-time-performance","title":"Evaluating real-time performance","text":""},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-real-time-performance/#introduction","title":"Introduction","text":"

    Autoware should be a real-time system when integrated into a service. Therefore, the response time of each callback should be as small as possible. If Autoware appears to be slow, it is imperative to conduct performance measurements and implement improvements based on the analysis. However, Autoware is a complex software system comprising numerous ROS 2 nodes, potentially complicating the process of identifying bottlenecks. To address this challenge, we will discuss methods for conducting detailed performance measurements for Autoware and provide case studies. It is worth noting that multiple factors can contribute to poor performance, such as scheduling and memory allocation in the OS layer, but our focus in this page will be on user code bottlenecks. The outline of this section is as follows:

    • Performance measurement
      • Single node execution
      • Prepare separated cores
      • Run single node separately
      • Measurement and visualization
    • Case studies
      • Sensing component
      • Planning component
    "},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-real-time-performance/#performance-measurement","title":"Performance measurement","text":"

    Improvement is impossible without precise measurements. To measure the performance of the application code, it is essential to eliminate any external influences. Such influences include interference from the operating system and CPU frequency fluctuations. Scheduling effects also occur when core resources are shared by multiple threads. This section outlines a technique for accurately measuring the performance of the application code for a specific node. Though this section only discusses the case of Linux on Intel CPUs, similar considerations should be made in other environments.

    "},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-real-time-performance/#single-node-execution","title":"Single node execution","text":"

    To eliminate the influence of scheduling, the node being measured should operate independently, using the same logic as when the entire Autoware system is running. To accomplish this, record all input topics of the node to be measured while the whole Autoware system is running. To achieve this objective, a tool called ros2_single_node_replayer has been prepared.

    Details on how to use the tool can be found in the README. This tool records the input topics of a specific node during the entire Autoware operation and replays it in a single node with the same logic. The tool relies on the ros2 bag record command, and the recording of service/action is not supported as of ROS 2 Humble, so nodes that use service/action as their main logic may not work well.

    "},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-real-time-performance/#prepare-separated-cores","title":"Prepare separated cores","text":"

    Isolated cores running the node to be measured must meet the following conditions.

    • Fix CPU frequency and disable turbo boost
    • Minimize timer interruptions
    • Offload RCU (Read Copy Update) callback
    • Isolate the paired core if hyper-threading enabled

    To fulfill these conditions on Linux, a custom kernel build with the following kernel configurations is required. You can find many resources to instruct you on how to build a custom Linux kernel (like this one). Note that even if full tickless is enabled, timer interrupts are generated for scheduling if two or more tasks exist on one core.

    # Enable CONFIG_NO_HZ_FULL\n-> General setup\n-> Timers subsystem\n-> Timer tick handling (Full dynticks system (tickless))\n(X) Full dynticks system (tickless)\n\n# Allows RCU callback processing to be offloaded from selected CPUs\n# (CONFIG_RCU_NOCB_CPU=y)\n-> General setup\n-> RCU Subsystem\n-*- Offload RCU callback processing from boot-selected CPUs\n

    Additionally, the kernel boot parameters need to be set as follows.

    GRUB_CMDLINE_LINUX_DEFAULT=\n  \"... isolcpus=2,8 rcu_nocbs=2,8 rcu_nocb_poll nohz_full=2,8 intel_pstate=disable\"\n

    In the above configuration, for example, the node to be measured is assumed to run on core 2, and core 8, which is its hyper-threading pair, is also isolated. Appropriate decisions on which cores to run the measurement target on and which cores to isolate need to be made based on the cache and core layout of the measurement machine. You can easily check if it is properly configured by running cat /proc/softirqs. Since intel_pstate=disable is specified in the kernel boot parameters, userspace can be selected as the scaling governor.

    cat /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor // ondemand\nsudo sh -c \"echo userspace > /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor\"\n

    This allows you to freely set the desired frequency within a defined range.

    sudo sh -c \"echo <freq(khz)> > /sys/devices/system/cpu/cpu2/cpufreq/scaling_setspeed\"\n
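
    The valid range for this value can be checked beforehand, for example, by reading the minimum and maximum frequency files of the same core (a quick sketch assuming the sysfs layout used above):

    cat /sys/devices/system/cpu/cpu2/cpufreq/scaling_min_freq /sys/devices/system/cpu/cpu2/cpufreq/scaling_max_freq\n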

    Turbo Boost needs to be switched off on Intel CPUs, which is often overlooked.

    sudo sh -c \"echo 0 > /sys/devices/system/cpu/cpufreq/boost\"\n
    "},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-real-time-performance/#run-single-node-separately","title":"Run single node separately","text":"

    Following the instructions in the ros2_single_node_replayer README, start the node and play the dedicated rosbag created by the tool. Before playing the rosbag, appropriately set the CPU affinity of the thread on which the node runs, so it is placed on the isolated core prepared.

    taskset --cpu-list -p <target cpu> <pid>\n
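
    If you do not know the PID beforehand, you can look it up from the node's process name first (a minimal sketch; the executable name is a placeholder for your measured node, and core 2 matches the isolation example above):

    pgrep -f <node_executable_name>\ntaskset --cpu-list -p 2 <pid>\n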

    To avoid interference in the last level cache, minimize the number of other applications running during the measurement.

    "},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-real-time-performance/#measurement-and-visualization","title":"Measurement and visualization","text":"

    To visualize the performance of the measurement target, embed code for logging timestamps and performance counter values in the target source code. To achieve this objective, a tool called pmu_analyzer has been prepared.

    Details on how to use the tool can be found in the README. This tool can measure the turnaround time of any section in the source code, as well as various performance counters.

    "},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-real-time-performance/#case-studies","title":"Case studies","text":"

    In this section, we will present several case studies that demonstrate the performance improvements. These examples not only showcase our commitment to enhancing the system's efficiency but also serve as a valuable resource for developers who may face similar challenges in their own projects. The performance improvements discussed here span various components of the Autoware system, including sensing modules and planning modules. There are tendencies for each component regarding which points are becoming bottlenecks. By examining the methods, techniques, and tools employed in these case studies, readers can gain a better understanding of the practical aspects of optimizing complex software systems like Autoware.

    "},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-real-time-performance/#sensing-component","title":"Sensing component","text":"

    First, we will explain the procedure for performance improvement, taking the node ring_outlier_filter as an example. Refer to the Pull Request for details.

    The following figure is a time-series plot of the turnaround time of the main processing part of ring_outlier_filter, analyzed as described in the \"Performance Measurement\" section above.

    The horizontal axis indicates the number of callbacks called (i.e., callback index), and the vertical axis indicates the turnaround time.

    When analyzing the performance of the sensing module from the viewpoint of performance counter, pay attention to instructions, LLC-load-misses, LLC-store-misses, cache-misses, and minor-faults.

    Analysis of the performance counter shows that the largest fluctuations come from minor-faults (i.e., soft page faults), the second largest from LLC-store-misses and LLC-load-misses (i.e., cache misses in the last level cache), and the slowest fluctuations come from instructions (i.e., message data size fluctuations). For example, when we plot minor-faults on the horizontal axis and turnaround time on the vertical axis, we can see the following dominant proportional relationship.

    To achieve zero soft page faults, heap allocations must only be made from areas that have been first touched in advance. We have developed a library called heaphook to avoid soft page faults while running Autoware callback. If you are interested, refer to the GitHub discussion and the issue.

    To reduce LLC misses, it is necessary to reduce the working set and to use cache-efficient access patterns.

    In the sensing component, which handles large message data such as LiDAR point cloud data, minimizing copying is important. A callback that takes sensor data message types as input and output should be written in an in-place algorithm as much as possible. This means that in the following pseudocode, when generating output_msg from input_msg, it is crucial to avoid using buffers as much as possible to reduce the number of memory copies.

    void callback(const PointCloudMsg &input_msg) {\nauto output_msg = allocate_msg<PointCloudMsg>(output_size);\nfill(input_msg, output_msg);\npublish(std::move(output_msg));\n}\n

    To improve cache efficiency, implement an in-place style as much as possible, instead of touching memory areas sporadically. In ROS applications using PCL, the code shown below is often seen.

    void callback(const sensor_msgs::PointCloud2ConstPtr &input_msg) {\npcl::PointCloud<PointT>::Ptr input_pcl(new pcl::PointCloud<PointT>);\npcl::fromROSMsg(*input_msg, *input_pcl);\n\n// Algorithm is described for point cloud type of pcl\npcl::PointCloud<PointT>::Ptr output_pcl(new pcl::PointCloud<PointT>);\nfill_pcl(*input_pcl, *output_pcl);\n\nauto output_msg = allocate_msg<sensor_msgs::PointCloud2>(output_size);\npcl::toROSMsg(*output_pcl, *output_msg);\npublish(std::move(output_msg));\n}\n

    To use the PCL library, fromROSMsg() and toROSMsg() are used to perform message type conversion at the beginning and end of the callback. This is a wasteful copying process and should be avoided. We should eliminate unnecessary type conversions by removing dependencies on PCL (e.g., https://github.com/tier4/velodyne_vls/pull/39). For large message types such as map data, there should be only one instance in the entire system in terms of physical memory.

    "},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-real-time-performance/#planning-component","title":"Planning component","text":"

    First, we will pick up the detection_area module in the behavior_velocity_planner node, which tends to have a long turnaround time. We have followed the performance analysis steps above to obtain the following graph. The axes are the same as in the graphs of the sensing case study.

    Using the pmu_analyzer tool to further identify the bottleneck, we found that the following nested loops were taking up a lot of processing time:

    for ( area : detection_areas )\nfor ( point : point_clouds )\nif ( boost::geometry::within(point, area) )\n// do something with O(1)\n

    It checks whether each point in the point clouds is contained in each detection area. Let N be the size of point_clouds and M be the size of detection_areas, then the computational complexity of this program is O(N^2 * M), since the complexity of within is O(N). Here, given that most of the points are located far away from a certain detection area, a certain optimization can be achieved. First, calculate the minimum enclosing circle that completely covers the detection area, and then check whether the points are contained in that circle. Most of the points can be quickly ruled out by this method, so we don't have to call the within function in most cases. Below is the pseudocode after optimization.

    for ( area : detection_areas )\ncircle = calc_minimum_enclosing_circle(area)\nfor ( point : point_clouds )\nif ( point is in circle )\nif ( boost::geometry::within(point, area) )\n// do something with O(1)\n

    By using an O(N) algorithm for the minimum enclosing circle, the computational complexity of this program is reduced to almost O(N * (N + M)) (note that the exact computational complexity does not really change). If you are interested, refer to the Pull Request.

    Similar to this example, in the planning component we take into consideration thousands to tens of thousands of points in point clouds, thousands of points in a path representing our own route, and polygons representing obstacles and detection areas in the surroundings, and we repeatedly create paths based on them. Therefore, we access the contents of the point clouds and paths multiple times using for-loops. In most cases, the bottleneck lies in these naive for-loops. Here, understanding Big O notation and reducing the order of computational complexity directly leads to performance improvements.

    "},{"location":"how-to-guides/others/add-a-custom-ros-message/","title":"Add a custom ROS message","text":""},{"location":"how-to-guides/others/add-a-custom-ros-message/#add-a-custom-ros-message","title":"Add a custom ROS message","text":""},{"location":"how-to-guides/others/add-a-custom-ros-message/#overview","title":"Overview","text":"

    During the Autoware development, you will probably need to define your own messages. Read the following instructions before adding a custom message.

    1. Messages in autoware_msgs define the interfaces of Autoware Core.

      • If a contributor wishes to make changes or add new messages to autoware_msgs, they should first create a new discussion post under the Design category.
    2. Any other minor or proposal messages used for internal communication within a component (such as planning) should be defined in another repository.

      • tier4_autoware_msgs is an example of that.

    The following is a simple tutorial of adding a message package to autoware_msgs. For the general ROS 2 tutorial, see Create custom msg and srv files.

    "},{"location":"how-to-guides/others/add-a-custom-ros-message/#how-to-create-custom-message","title":"How to create custom message","text":"

    Make sure you are in the Autoware workspace, and then run the following command to create a new package. As an example, let's create a package to define sensor messages.

    1. Create a package

      cd ./src/core/autoware_msgs\nros2 pkg create --build-type ament_cmake autoware_sensing_msgs\n
    2. Create custom messages

      You should create .msg files and place them in the msg directory.

      NOTE: The initial letters of the .msg and .srv files must be capitalized.

      As an example, let's make .msg files GnssInsOrientation.msg and GnssInsOrientationStamped.msg to define GNSS/INS orientation messages:

      mkdir msg\ncd msg\ntouch GnssInsOrientation.msg\ntouch GnssInsOrientationStamped.msg\n

      Edit GnssInsOrientation.msg with your editor to be the following content:

      geometry_msgs/Quaternion orientation\nfloat32 rmse_rotation_x\nfloat32 rmse_rotation_y\nfloat32 rmse_rotation_z\n

      In this case, the custom message uses a message from another message package geometry_msgs/Quaternion.

      Edit GnssInsOrientationStamped.msg with your editor to be the following content:

      std_msgs/Header header\nGnssInsOrientation orientation\n

      In this case, the custom message uses a message from another message package std_msgs/Header.

    3. Edit CMakeLists.txt

      In order to use this custom message in C++ or Python, we need to add the following lines to CMakeLists.txt:

      rosidl_generate_interfaces(${PROJECT_NAME}\n\"msg/GnssInsOrientation.msg\"\n\"msg/GnssInsOrientationStamped.msg\"\nDEPENDENCIES\ngeometry_msgs\nstd_msgs\nADD_LINTER_TESTS\n)\n

      The ament_cmake_auto tool is very useful and is more widely used in Autoware, so we recommend using ament_cmake_auto instead of ament_cmake.

      We need to replace

      find_package(ament_cmake REQUIRED)\n\nament_package()\n

      with

      find_package(ament_cmake_auto REQUIRED)\n\nament_auto_package()\n
    4. Edit package.xml

      We need to declare relevant dependencies in package.xml. For the above example we need to add the following content:

      <buildtool_depend>rosidl_default_generators</buildtool_depend>\n\n<exec_depend>rosidl_default_runtime</exec_depend>\n\n<depend>geometry_msgs</depend>\n<depend>std_msgs</depend>\n\n<member_of_group>rosidl_interface_packages</member_of_group>\n

      We need to replace <buildtool_depend>ament_cmake</buildtool_depend> with <buildtool_depend>ament_cmake_auto</buildtool_depend> in the package.xml file.

    5. Build the custom message package

      You can build the package in the root of your workspace, for example by running the following command:

      colcon build --packages-select autoware_sensing_msgs\n

      Now the GnssInsOrientationStamped message will be discoverable by other packages in Autoware.
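
      As a quick check, after sourcing the workspace you can confirm that the new interface is visible to the ROS 2 tooling:

      source install/setup.bash\nros2 interface show autoware_sensing_msgs/msg/GnssInsOrientationStamped\n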

    "},{"location":"how-to-guides/others/add-a-custom-ros-message/#how-to-use-custom-messages-in-autoware","title":"How to use custom messages in Autoware","text":"

    You can use the custom messages in Autoware by following these steps:

    • Add dependency in package.xml.
      • For example, <depend>autoware_sensing_msgs</depend>.
    • Include the .hpp file of the relevant message in the code.
      • For example, #include <autoware_sensing_msgs/msg/gnss_ins_orientation_stamped.hpp>.
    "},{"location":"how-to-guides/others/advanced-usage-of-colcon/","title":"Advanced usage of colcon","text":""},{"location":"how-to-guides/others/advanced-usage-of-colcon/#advanced-usage-of-colcon","title":"Advanced usage of colcon","text":"

    This page shows some advanced and useful usage of colcon. If you need more detailed information, refer to the colcon documentation.

    "},{"location":"how-to-guides/others/advanced-usage-of-colcon/#common-mistakes","title":"Common mistakes","text":""},{"location":"how-to-guides/others/advanced-usage-of-colcon/#do-not-run-from-other-than-the-workspace-root","title":"Do not run from other than the workspace root","text":"

    It is important that you always run colcon build from the workspace root because colcon builds only under the current directory. If you have mistakenly built in a wrong directory, run rm -rf build/ install/ log/ to clean the generated files.

    "},{"location":"how-to-guides/others/advanced-usage-of-colcon/#do-not-unnecessarily-overlay-workspaces","title":"Do not unnecessarily overlay workspaces","text":"

    colcon overlays workspaces if you have sourced the setup.bash of other workspaces before building a workspace. You should take care of this especially when you have multiple workspaces.

    Run echo $COLCON_PREFIX_PATH to check whether workspaces are overlaid. If you find some workspaces are unnecessarily overlaid, remove all built files, restart the terminal to clean environment variables, and re-build the workspace.
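For example, a minimal clean-up and rebuild sketch (the --symlink-install option is shown only as a common choice, not a requirement):

rm -rf build/ install/ log/\n# open a new terminal so that the environment variables of the old overlay are cleared\ncolcon build --symlink-install\n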

    For more details about workspace overlaying, refer to the ROS 2 documentation.

    "},{"location":"how-to-guides/others/advanced-usage-of-colcon/#cleaning-up-the-build-artifacts","title":"Cleaning up the build artifacts","text":"

colcon sometimes causes errors because of an old cache. To remove the cache and rebuild the workspace, run the following command:

    rm -rf build/ install/\n

    In case you know what packages to remove:

    rm -rf {build,install}/{package_a,package_b}\n
    "},{"location":"how-to-guides/others/advanced-usage-of-colcon/#selecting-packages-to-build","title":"Selecting packages to build","text":"

    To just build specified packages:

    colcon build --packages-select <package_name1> <package_name2> ...\n

    To build specified packages and their dependencies recursively:

    colcon build --packages-up-to <package_name1> <package_name2> ...\n

    You can also use these options for colcon test.
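For example, a minimal sketch of testing only selected packages and then summarizing the results (colcon test-result is part of a standard colcon installation):

colcon test --packages-select <package_name1> <package_name2>\ncolcon test-result --verbose\n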

    "},{"location":"how-to-guides/others/advanced-usage-of-colcon/#changing-the-optimization-level","title":"Changing the optimization level","text":"

Set CMAKE_BUILD_TYPE (passed as -DCMAKE_BUILD_TYPE) to change the optimization level.

    Warning

If you specify -DCMAKE_BUILD_TYPE=Debug, or give no -DCMAKE_BUILD_TYPE when building the entire Autoware, the result may be too slow to use.

    colcon build --cmake-args -DCMAKE_BUILD_TYPE=Debug\n
    colcon build --cmake-args -DCMAKE_BUILD_TYPE=RelWithDebInfo\n
    colcon build --cmake-args -DCMAKE_BUILD_TYPE=Release\n
    "},{"location":"how-to-guides/others/advanced-usage-of-colcon/#changing-the-default-configuration-of-colcon","title":"Changing the default configuration of colcon","text":"

    Create $COLCON_HOME/defaults.yaml to change the default configuration.

    mkdir -p ~/.colcon\ncat << EOS > ~/.colcon/defaults.yaml\n{\n\"build\": {\n\"symlink-install\": true\n}\n}\n

    For more details, see here.

    "},{"location":"how-to-guides/others/advanced-usage-of-colcon/#generating-compile_commandsjson","title":"Generating compile_commands.json","text":"

    compile_commands.json is used by IDEs/tools to analyze the build dependencies and symbol relationships.

You can generate it with the flag -DCMAKE_EXPORT_COMPILE_COMMANDS=1:

    colcon build --cmake-args -DCMAKE_EXPORT_COMPILE_COMMANDS=1\n
    "},{"location":"how-to-guides/others/advanced-usage-of-colcon/#seeing-compiler-commands","title":"Seeing compiler commands","text":"

    To see the compiler and linker invocations for a package, use VERBOSE=1 and --event-handlers console_cohesion+:

    VERBOSE=1 colcon build --packages-up-to <package_name> --event-handlers console_cohesion+\n

    For other options, see here.

    "},{"location":"how-to-guides/others/advanced-usage-of-colcon/#using-ccache-to-speed-up-recompilation","title":"Using Ccache to speed up recompilation","text":"

    Ccache is a compiler cache that can significantly speed up recompilation by caching previous compilations and reusing them when the same compilation is being done again. It's highly recommended for developers looking to optimize their build times, unless there's a specific reason to avoid it.

    "},{"location":"how-to-guides/others/advanced-usage-of-colcon/#step-1-install-ccache","title":"Step 1: Install Ccache","text":"
    sudo apt update && sudo apt install ccache\n
    "},{"location":"how-to-guides/others/advanced-usage-of-colcon/#step-2-configure-ccache","title":"Step 2: Configure Ccache","text":"
    1. Create the Ccache configuration folder and file:

      mkdir -p ~/.cache/ccache\ntouch ~/.cache/ccache/ccache.conf\n
    2. Set the maximum cache size. The default size is 5GB, but you can increase it depending on your needs. Here, we're setting it to 60GB:

      echo \"max_size = 60G\" >> ~/.cache/ccache/ccache.conf\n
    "},{"location":"how-to-guides/others/advanced-usage-of-colcon/#step-3-integrate-ccache-with-your-environment","title":"Step 3: Integrate Ccache with Your Environment","text":"

    To ensure Ccache is used for compilation, add the following lines to your .bashrc file. This will redirect GCC and G++ calls through Ccache.

    export CC=\"/usr/lib/ccache/gcc\"\nexport CXX=\"/usr/lib/ccache/g++\"\nexport CCACHE_DIR=\"$HOME/.cache/ccache/\"\n

    After adding these lines, reload your .bashrc or restart your terminal session to apply the changes.

    "},{"location":"how-to-guides/others/advanced-usage-of-colcon/#step-4-verify-ccache-is-working","title":"Step 4: Verify Ccache is Working","text":"

    To confirm Ccache is correctly set up and being used, you can check the statistics of cache usage:

    ccache -s\n

    This command displays the cache hit rate and other relevant statistics, helping you gauge the effectiveness of Ccache in your development workflow.
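As a small sketch, you can also reset the statistics before a build and check them again afterwards (the package name is a placeholder); rebuilding the same unchanged package later should show cache hits:

ccache -z\ncolcon build --packages-select <package_name>\nccache -s\n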

    "},{"location":"how-to-guides/others/an-example-procedure-for-adding-and-evaluating-a-new-node/","title":"An example procedure for adding and evaluating a new node","text":""},{"location":"how-to-guides/others/an-example-procedure-for-adding-and-evaluating-a-new-node/#an-example-procedure-for-adding-and-evaluating-a-new-node","title":"An example procedure for adding and evaluating a new node","text":""},{"location":"how-to-guides/others/an-example-procedure-for-adding-and-evaluating-a-new-node/#overview","title":"Overview","text":"

    This page provides a guide for evaluating Autoware when a new node is implemented, especially about developing a novel localization node.

    The workflow involves initial testing and rosbag recording using a real vehicle or AWSIM, implementing the new node, subsequent testing using the recorded rosbag, and finally evaluating with a real vehicle or AWSIM.

It is assumed that the method to be added has already been verified well with public datasets and the like.

    "},{"location":"how-to-guides/others/an-example-procedure-for-adding-and-evaluating-a-new-node/#1-running-autoware-in-its-standard-configuration","title":"1. Running Autoware in its standard configuration","text":"

    First of all, it is important to be able to run the standard Autoware to establish a basis for performance and behavior comparison.

    Autoware constantly incorporates new features. It is crucial to initially confirm that it operates as expected with the current version, which helps in problem troubleshooting.

This guide assumes the use of AWSIM, so the AWSIM simulator can be useful here. If you are using actual hardware, please refer to the How-to guides.

    "},{"location":"how-to-guides/others/an-example-procedure-for-adding-and-evaluating-a-new-node/#2-recording-a-rosbag-using-autoware","title":"2. Recording a rosbag using Autoware","text":"

Before developing a new node, it is recommended to record a rosbag to use for evaluation. If you need a new sensor, you should add it to your vehicle or AWSIM.

    In this case, it is recommended to save all topics regardless of whether they are necessary or not. For example, in Localization, since the initial position estimation service is triggered by the input to rviz and the GNSS topic, the initial position estimation does not start when playing back data unless those topics are saved.

    Consider the use of the mcap format if data capacity becomes a concern.
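A minimal recording sketch, assuming the rosbag2 mcap storage plugin is installed and using an example output directory name:

ros2 bag record -a --storage mcap -o my_recording\n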

    It is worth noting that using ros2 bag record increases computational load and might affect performance. After data recording, verifying the smooth flow of sensor data and unchanged time series is advised. This verification can be accomplished, for example, by inspecting the image data with rqt_image_view during ros2 bag play.

    "},{"location":"how-to-guides/others/an-example-procedure-for-adding-and-evaluating-a-new-node/#3-developing-the-new-node","title":"3. Developing the new node","text":"

    When developing a new node, it could be beneficial to reference a package that is similar to the one you intend to create.

    It is advisable to thoroughly read the Design page, contemplate the addition or replacement of nodes in Autoware, and then implement your solution.

For example, ndt_scan_matcher is the node that performs NDT, a LiDAR-based localization method. If you want to replace it with a different approach, implement a node that produces the same topics and provides the same services.

    ndt_scan_matcher is launched as pose_estimator, so it is necessary to replace the launch file as well.

    "},{"location":"how-to-guides/others/an-example-procedure-for-adding-and-evaluating-a-new-node/#4-evaluating-by-a-rosbag-based-simulator","title":"4. Evaluating by a rosbag-based simulator","text":"

Once the new node is implemented, it is time to evaluate it. logging_simulator is a tool for evaluating the new node using the rosbag captured in step 2.

    When you run the logging_simulator, you can set planning:=false or control:=false to disable the launch of specific component nodes.

    ros2 launch autoware_launch logging_simulator.launch.xml ... planning:=false control:=false

    After launching logging_simulator, the rosbag file obtained in step 2 should be replayed using ros2 bag play <rosbag_file>.

If you remap the recorded topics related to the localization function you want to verify, Autoware will use the data it computes live instead of the recorded data. Also, using the --topics option of ros2 bag play, you can publish only specific topics from the rosbag.
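As a sketch, the following shows both options; the topic names are placeholders to be replaced with the topics relevant to your verification:

# play back only the topics you still want to take from the recording\nros2 bag play <rosbag_file> --topics <topic_name1> <topic_name2>\n\n# or move a recorded topic out of the way so that the live node's output is used instead\nros2 bag play <rosbag_file> --remap <recorded_topic>:=<unused_topic_name>\n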

The ros2bag_extensions package is also available to filter the rosbag file and create a new rosbag file that contains only the topics you need.

    "},{"location":"how-to-guides/others/an-example-procedure-for-adding-and-evaluating-a-new-node/#5-evaluating-in-a-realtime-environment","title":"5. Evaluating in a realtime environment","text":"

Once you have sufficiently verified the behavior in the logging_simulator, run Autoware with the new node added in the real-time environment.

    To debug Autoware, the method described at debug-autoware is useful.

    For reproducibility, you may want to fix the GoalPose. In such cases, consider using the tier4_automatic_goal_rviz_plugin.

    "},{"location":"how-to-guides/others/an-example-procedure-for-adding-and-evaluating-a-new-node/#6-sharing-the-results","title":"6. Sharing the results","text":"

If your implementation works successfully, please consider submitting a pull request to Autoware.

    It is also a good idea to start by presenting your ideas in Discussion at Show and tell.

    For localization, YabLoc's Proposal may provide valuable insights.

    "},{"location":"how-to-guides/others/applying-clang-tidy-to-ros-packages/","title":"Applying Clang-Tidy to ROS packages","text":""},{"location":"how-to-guides/others/applying-clang-tidy-to-ros-packages/#applying-clang-tidy-to-ros-packages","title":"Applying Clang-Tidy to ROS packages","text":"

    Clang-Tidy is a powerful C++ linter.

    "},{"location":"how-to-guides/others/applying-clang-tidy-to-ros-packages/#preparation","title":"Preparation","text":"

    You need to generate build/compile_commands.json before using Clang-Tidy.

    colcon build --cmake-args -DCMAKE_EXPORT_COMPILE_COMMANDS=1\n
    "},{"location":"how-to-guides/others/applying-clang-tidy-to-ros-packages/#usage","title":"Usage","text":"
    clang-tidy -p build/ path/to/file1 path/to/file2 ...\n

    If you want to apply Clang-Tidy to all files in a package, using the fd command is useful. To install fd, see the installation manual.

    clang-tidy -p build/ $(fd -e cpp -e hpp --full-path \"/autoware_utils/\")\n
    "},{"location":"how-to-guides/others/applying-clang-tidy-to-ros-packages/#ide-integration","title":"IDE integration","text":""},{"location":"how-to-guides/others/applying-clang-tidy-to-ros-packages/#clion","title":"CLion","text":"

    Refer to the CLion Documentation.

    "},{"location":"how-to-guides/others/applying-clang-tidy-to-ros-packages/#visual-studio-code","title":"Visual Studio Code","text":"

    Use either one of the following extensions:

    • C/C++
    • clangd
    "},{"location":"how-to-guides/others/applying-clang-tidy-to-ros-packages/#troubleshooting","title":"Troubleshooting","text":"

    If you encounter clang-diagnostic-error, try installing libomp-dev.

    Related: https://github.com/autowarefoundation/autoware-github-actions/pull/172

    "},{"location":"how-to-guides/others/debug-autoware/","title":"Debug Autoware","text":""},{"location":"how-to-guides/others/debug-autoware/#debug-autoware","title":"Debug Autoware","text":"

    This page provides some methods for debugging Autoware.

    "},{"location":"how-to-guides/others/debug-autoware/#print-debug-messages","title":"Print debug messages","text":"

The essential thing for debugging is to print program information clearly, which makes it quick to judge program behavior and locate problems. Autoware uses the ROS 2 logging tool to print debug messages; for how to design console logging, refer to the Console logging tutorial.
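For example, a minimal sketch of raising the log level of a single node from the command line (the package and executable names are placeholders):

ros2 run <package_name> <executable_name> --ros-args --log-level debug\n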

    "},{"location":"how-to-guides/others/debug-autoware/#using-ros-tools-debug-autoware","title":"Using ROS tools debug Autoware","text":""},{"location":"how-to-guides/others/debug-autoware/#using-command-line-tools","title":"Using command line tools","text":"

ROS 2 includes a suite of command-line tools for introspecting a ROS 2 system. The main entry point for the tools is the command ros2, which itself has various sub-commands for introspecting and working with nodes, topics, services, and more. For how to use the ROS 2 command-line tools, refer to the CLI tools tutorial.
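For example, a few commonly used introspection commands (the topic name is only an example):

ros2 node list\nros2 topic list\nros2 topic hz /localization/kinematic_state\nros2 topic echo /localization/kinematic_state --once\n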

    "},{"location":"how-to-guides/others/debug-autoware/#using-rviz2","title":"Using rviz2","text":"

Rviz2 is a port of Rviz to ROS 2. It provides a graphical interface for users to view their robot, sensor data, maps, and more. You can run the Rviz2 tool easily with:

    rviz2\n

When Autoware launches the simulators, the Rviz2 tool is opened by default to visualize the autonomous driving information graphically.

    "},{"location":"how-to-guides/others/debug-autoware/#using-rqt-tools","title":"Using rqt tools","text":"

RQt is a graphical user interface framework that implements various tools and interfaces in the form of plugins. You can run any RQt tools/plugins easily with:

    rqt\n

    This GUI allows you to choose any available plugins on your system. You can also run plugins in standalone windows. For example, RQt Console:

    ros2 run rqt_console rqt_console\n
    "},{"location":"how-to-guides/others/debug-autoware/#common-rqt-tools","title":"Common RQt tools","text":"
    1. rqt_graph: view node interaction

      In complex applications, it may be helpful to get a visual representation of the ROS node interactions.

      ros2 run rqt_graph rqt_graph\n
    2. rqt_console: view messages

rqt_console is a great GUI for viewing ROS log messages.

      ros2 run rqt_console rqt_console\n
    3. rqt_plot: view data plots

      rqt_plot is an easy way to plot ROS data in real time.

      ros2 run rqt_plot rqt_plot\n
    "},{"location":"how-to-guides/others/debug-autoware/#using-ros2_graph","title":"Using ros2_graph","text":"

ros2_graph can be used to generate Mermaid descriptions of ROS 2 graphs to add to your markdown files.

It can also be used as a colorful alternative to rqt_graph, although it requires some tool to render the generated Mermaid diagram.

    It can be installed with:

    pip install ros2-graph\n

    Then you can generate a mermaid description of the graph with:

    ros2_graph your_node\n\n# or like with an output file\nros2_graph /turtlesim -o turtle_diagram.md\n\n# or multiple nodes\nros2_graph /turtlesim /teleop_turtle\n

    You can then visualize these graphs with:

    • Mermaid Live Editor
    • Visual Studio Code extension mermaid preview
    • JetBrains IDEs with native support
    "},{"location":"how-to-guides/others/debug-autoware/#using-ros2doctor","title":"Using ros2doctor","text":"

    When your ROS 2 setup is not running as expected, you can check its settings with the ros2doctor tool.

    ros2doctor checks all aspects of ROS 2, including platform, version, network, environment, running systems and more, and warns you about possible errors and reasons for issues.

    It's as simple as just running ros2 doctor in your terminal.

    It has the ability to list \"Subscribers without publishers\" for all topics in the system.

This information can help you find out whether a necessary node is not running.
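A minimal sketch of running it, including the more detailed report:

ros2 doctor\nros2 doctor --report\n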

    For more details, see the following official documentation for Using ros2doctor to identify issues.

    "},{"location":"how-to-guides/others/debug-autoware/#using-a-debugger-with-breakpoints","title":"Using a debugger with breakpoints","text":"

Many IDEs (e.g. Visual Studio Code, CLion) support debugging C/C++ executables with GDB on the Linux platform. The following lists some references for using the debugger; a minimal command-line example is shown after the list:

    • https://code.visualstudio.com/docs/cpp/cpp-debug
    • https://www.jetbrains.com/help/clion/debugging-code.html#useful-debugger-shortcuts
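As a minimal command-line sketch, you can also start a node directly under GDB by using a launch prefix (the package and executable names are placeholders; build with debug symbols, e.g. -DCMAKE_BUILD_TYPE=RelWithDebInfo, beforehand):

ros2 run --prefix 'gdb -ex run --args' <package_name> <executable_name>\n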
    "},{"location":"how-to-guides/others/defining-temporal-performance-metrics/","title":"Defining temporal performance metrics on components","text":""},{"location":"how-to-guides/others/defining-temporal-performance-metrics/#defining-temporal-performance-metrics-on-components","title":"Defining temporal performance metrics on components","text":""},{"location":"how-to-guides/others/defining-temporal-performance-metrics/#motivation-to-defining-temporal-performance-metrics","title":"Motivation to defining temporal performance metrics","text":""},{"location":"how-to-guides/others/defining-temporal-performance-metrics/#objective-of-the-page","title":"Objective of the page","text":"

This page introduces policies to define metrics to evaluate temporal performance on components of Autoware. The term \"temporal performance\" is often used throughout the page in order to distinguish between functional performance, which is also referred to as accuracy, and time-related performance.

It is expected that most algorithms employed for Autoware are executed with as high a frequency and as short a response time as possible. In order to achieve safe autonomous driving, one of the desired outcomes is no time gap between the perceived and actual situations. This time gap is commonly referred to as delay. If the delay is significant, the system may determine trajectory and maneuver based on an outdated situation. Consequently, if the actual situation differs from the perceived one due to the delay, the system may make unexpected decisions.

    As mentioned above, this page presents the policies to define metrics. Besides, the page contains lists of sample metrics that are crucial for the main functionalities of Autoware: Localization, Perception, Planning, and Control.

    Note

Other functionalities, such as system components for diagnosis, are currently excluded. However, they will be taken into account in the near future.

    "},{"location":"how-to-guides/others/defining-temporal-performance-metrics/#contribution-of-the-temporal-performance-metrics","title":"Contribution of the temporal performance metrics","text":"

    Temporal performance metrics are important for evaluating Autoware. These metrics are particularly useful for assessing delays caused by new algorithms and logic. They can be employed when comparing the temporal performance of software on a desktop computer with that on a vehicle during the vehicle integration phase.

In addition, these metrics are useful for designers and evaluators of middleware, operating systems, and computers. They are selected based on user and product requirements. One of these requirements is to provide sufficient temporal performance for executing Autoware. \"Sufficient temporal performance\" is defined as a temporal performance requirement, but it can be challenging to define the requirement because it varies depending on the product type, Operational Design Domain (ODD), and other factors. Therefore, this page specifically focuses on temporal performance metrics rather than requirements.

    Temporal performance metrics are important for evaluating the reliability of Autoware. However, ensuring the reliability of Autoware requires consideration of not only temporal performance metrics but also other metrics.

    "},{"location":"how-to-guides/others/defining-temporal-performance-metrics/#tools-for-evaluating-the-metrics","title":"Tools for evaluating the metrics","text":"

    There are several tools available for evaluating Autoware according to the metrics listed in the page. For example, both CARET and ros2_tracing are recommended options when evaluating Autoware on Linux and ROS 2. If you want to measure the metrics with either of these tools, refer to the corresponding user guide for instructions. It's important to note that if you import Autoware to a platform other than Linux and ROS 2, you will need to choose a supported tool for evaluation.

    Note

    TIER IV plans to measure Autoware, which is running according to the tutorial, and provide a performance evaluation report periodically. An example of such a report can be found here, although it may not include all of the metrics listed.

    The page does not aim to provide instructions on how to use these tools or measure the metrics. Its primary focus is on the metrics themselves, as they are more important than the specific tools used. These metrics retain their relevance regardless of the employed platform.

    "},{"location":"how-to-guides/others/defining-temporal-performance-metrics/#policies-to-define-temporal-performance-metrics","title":"Policies to define temporal performance metrics","text":"

As mentioned above, the configuration of Autoware varies by product type, ODD, and other factors. This variety of configurations makes it difficult to define uniform metrics for evaluating Autoware. However, the policies used to define them are basically reusable even when the configuration changes. Each temporal performance metric is categorized into one of two types: execution frequency and response time. Although there are many other types of metrics, such as communication latency, only these two types are considered for simplicity. Execution frequency is observed using the rate of Inter-Process Communication (IPC) messages. You will find an enormous number of messages in Autoware, but you do not have to take care of all of them. Some messages are critical to functionality, and those should be chosen for evaluation. Response time is the duration elapsed through a series of processing steps. Such a series of processing is referred to as a path. Response time is calculated from the timestamps of the start and end of a path. Although many paths can be defined in Autoware, you have to choose the significant ones.

    As a hint, here are some characteristics of message and path in order to choose metrics.

    1. Messages and paths on boundaries where observed values from sensors are consumed
    2. Messages and paths on boundaries of functions, e.g., a boundary of perception and planning
    3. Messages and paths on boundaries where timer-based frequency is switched
    4. Messages and paths on boundaries where two different messages are synchronized and merged
    5. Messages that must be transmitted at expected frequency, e.g., vehicle command messages

Those hints will be helpful for most configurations, but there may be exceptions. Defining metrics precisely requires an understanding of the configuration.

    In addition, it is recommended that metrics be determined incrementally from the architectural level to the detailed design and implementation level. Mixing metrics at different levels of granularity can be confusing.

    "},{"location":"how-to-guides/others/defining-temporal-performance-metrics/#list-of-sample-metrics","title":"List of sample metrics","text":"

    This section demonstrates how to define metrics according to the policies explained and has lists of the metrics for Autoware launched according to the tutorial. The section is divided into multiple subsections, each containing a model diagram and an accompanying list that explains the important temporal performance metrics. Each model is equipped with checkpoints that serve as indicators for these metrics.

    The first subsection presents the top-level temporal performance metrics, which are depicted in the abstract structure of Autoware as a whole. The detailed metrics are not included in the model as they would add complexity to it. Instead, the subsequent section introduces the detailed metrics. The detailed metrics are subject to more frequent updates compared to the top-level ones, which is another reason for categorizing them separately.

    Each list includes a column for the reference value. The reference value represents the observed value of each metric when Autoware is running according to the tutorial. It is important to note that the reference value is not a required value, meaning that Autoware does not necessarily fail in the tutorial execution if certain metrics do not fulfill the reference value.

    "},{"location":"how-to-guides/others/defining-temporal-performance-metrics/#top-level-temporal-performance-metrics-for-autoware","title":"Top-level temporal performance metrics for Autoware","text":"

    The diagram below introduces the model for top-level temporal performance metrics.

    The following three policies assist in selecting the top-level performance metrics:

    • Splitting Autoware based on components that consume observed values, such as sensor data, and considering the processing frequency and response time around these components
    • Dividing Autoware based on the entry point of Planning and Control and considering the processing frequency and response time around these components
    • Showing the minimum metrics for the Vehicle Interface, as they may vary depending on the target vehicle

    Additionally, it is assumed that algorithms are implemented as multiple nodes and function as a pipeline processing system.

    ID Representation in the model Metric meaning Related functionality Reference value Reason to choose it as a metric Note AWOV-001 Message rate from CPA #9 to CPA #18 Update rate of result from Prediction to Planning. Perception 10 Hz Planning relies on fresh and up-to-date perceived data from Perception for creating accurate trajectory. AWOV-002 Response time from CPA #0 to CPA #20 via CPA #18 Response time in main body of Perception. Perception N/A Planning relies on fresh and up-to-date perceived data from Perception for creating accurate trajectory. The metric is used if delay compensation is disabled in Tracking. AWOV-003 Response time from CPA #7 to CPA #20 Response time from Tracking output of Tracking to its data consumption in Planning. Perception N/A Planning relies on fresh and up-to-date perceived data from Perception for creating accurate trajectory. The metric is used if delay compensation is enabled in Tracking. AWOV-004 Response time from CPA #0 to CPA #6 Duration to process pointcloud data in Sensing and Detection. Perception N/A Tracking relies on detection to provide real-time and up-to-date sensed data for accurate tracking. The metric is used if delay compensation is enabled in Tracking. AWOV-005 Message rate from CPA #4 to CPA #5 Update rate of Detection result received by Tracking. Perception 10 Hz Tracking relies on detection to provide real-time and up-to-date sensed data for accurate tracking. AWOV-006 Response time from CPA #0 to CPA #14 Response time from output of observed data from LiDARs to its consumption in EKF Localizer via NDT Scan Matcher. Localization N/A EKF Localizer relies on fresh and up-to-date observed data from sensors for accurate estimation of self pose. AWOV-007 Message rate from CPA #11 to CPA #13 Update rate of pose estimated by NDT Scan Matcher. Localization 10 Hz EKF Localizer relies on fresh and up-to-date observed data from sensors for accurate estimation of self pose. AWOV-008 Message rate from CPA #15 to CPA #12 Update rate of feed backed pose estimated by EKF Localizer. Localization 50 Hz NDT Scan Matcher relies on receiving estimated pose from EKF Localizer smoothly for linear interpolation. AWOV-009 Message rate from CPA #17 to CPA #19 Update rate of Localization result received by Planning. Localization 50 Hz Planning relies on Localization to update the estimated pose frequently. AWOV-010 Response time from CPA #20 to CPA #23 Processing time from beginning of Planning to consumption of Trajectory message in Control. Planning N/A A vehicle relies on Planning to update trajectory within a short time frame to achieve safe driving behavior. AWOV-011 Message rate from CPA #21 to CPA #22 Update rate of Trajectory message from Planning. Planning 10 Hz A vehicle relies on Planning to update trajectory frequently to achieve safe driving behavior. AWOV-012 Message rate from CPA #24 to CPA #25 Update rate of Control command. Control 33 Hz Control stability and comfort relies on sampling frequency of Control. AWOV-013 Message rate between CPA #26 and Vehicle Communication rate between Autoware and Vehicle. Vehicle Interface N/A A vehicle requires Autoware to communicate with each other at predetermined frequency. Temporal performance requirement varies depending on vehicle type.

    Note

There is an assumption that each of the sensors, such as LiDARs and cameras, outputs a set of pointcloud data with a timestamp. CPA #0 is observed with that timestamp. If the sensors are not configured to output a timestamp, the time when Autoware receives the pointcloud is used instead. That is represented by CPA #1 in the model. The detailed metrics employ the same idea.

    "},{"location":"how-to-guides/others/defining-temporal-performance-metrics/#detailed-temporal-performance-metrics-for-perception","title":"Detailed temporal performance metrics for Perception","text":"

    The diagram below introduces the model for temporal performance metrics for Perception.

    The following two policies assist in selecting the performance metrics:

    • Regarding the frequency and response time at which Recognition results from Object Recognition and Traffic Light Recognition are consumed in Planning
    • Splitting Perception component on merging points of data from multiple processing paths and considering the frequency and response time around that point

    The following list shows the temporal performance metrics for Perception.

    ID Representation in the model Metric meaning Related functionality Reference value Reason to choose it as a metric Note APER-001 Message rate from CPP #2 to CPP #26 Update rate of Traffic Light Recognition. Traffic Light Recognition 10 Hz Planning relies on fresh and up-to-date perceived data from Traffic Light Recognition for making precise decisions. APER-002 Response time from CPP #0 to CPP #30 Response time from camera input to consumption of the result in Planning. Traffic Light Recognition N/A Planning relies on fresh and up-to-date perceived data from Traffic Light Recognition for making precise decisions. APER-003 Message rate from CPP #25 to CPP #28 Update rate of result from Prediction (Object Recognition) to Planning. Object Recognition 10 Hz Planning relies on fresh and up-to-date perceived data from Perception for creating accurate trajectory. The metric is same as AWOV-001. APER-004 Response time from CPP #6 to CPP #30 Response time from Tracking output of Tracking to its data consumption in Planning. Object Recognition N/A Planning relies on fresh and up-to-date perceived data from Perception for creating accurate trajectory. The metric is same as AWOV-002 and used if delay compensation is disabled in Tracking. APER-005 Response time from CPP #23 to CPP #30 Response time from Tracking output of Tracking to its data consumption in Planning. Object Recognition N/A Planning relies on fresh and up-to-date perceived data from Perception for creating accurate trajectory. The metric is same as AWOV-003 and used if delay compensation is enabled in Tracking. APER-006 Response time from CPP #6 to CPP #21 Duration to process pointcloud data in Sensing and Detection. Object Recognition N/A Tracking relies on Detection to provide real-time and up-to-date perceived data. The metrics is same as AWOV-004 and used if delay compensation is enabled in Tracking. APER-007 Message rate from CPP #20 to CPP #21 Update rate of Detection result received by Tracking. Object Recognition 10 Hz Tracking relies on detection to provide real-time and up-to-date sensed data for accurate tracking. The metric is same as AWOV-005 APER-008 Message rate from CPP #14 to CPP #19 Update rate of data sent from Sensor Fusion. Object Recognition 10 Hz Association Merger relies on the data to be updated at expected frequency for data synchronization. APER-009 Message rate from CPP #16 to CPP #19 Update rate of data sent from Detection by Tracker. Object Recognition 10 Hz Association Merger relies on the data to be updated at expected frequency for data synchronization. APER-010 Message rate from CPP #18 to CPP #19 Update rate of data sent from Validation Object Recognition. 10 Hz Association Merger relies on the data to be updated at expected frequency for data synchronization. APER-011 Response time from CPP #6 to CPP #19 via CPP #14 Response time to consume data sent from Sensor Fusion after LiDARs output pointcloud. Object Recognition N/A Association Merger relies on fresh and up-to-date data for data synchronization. APER-012 Response time from CPP #6 to CPP #19 via CPP #16 Response time to consume data sent from Detection by Tracker after LiDARs output pointcloud. Object Recognition N/A Association Merger relies on fresh and up-to-date data for data synchronization. APER-013 Response time from CPP #6 to CPP #19 via CPP #18 Response time to consume data sent from Validator after LiDARs output pointcloud. Object Recognition N/A Association Merger relies on fresh and up-to-date data for data synchronization. 
APER-014 Message rate from CPP #10 to CPP #13 Update rate of data sent from Clustering. Object Recognition 10 Hz Sensor Fusion relies on the data to be updated at expected frequency for data synchronization. APER-015 Message rate from CPP #5 to CPP #13 Update rate of data sent from Camera-based Object detection. Object Recognition 10 Hz Sensor Fusion relies on the data to be updated at expected frequency for data synchronization. APER-016 Response time from CPP #6 to CPP #13 Response time to consume data sent from Clustering after LiDARs output pointcloud. Object Recognition N/A Sensor Fusion relies on fresh and up-to-date data for data synchronization. APER-017 Response time from CPP #3 to CPP #13 Response time to consume data sent from Camera-based Object detection after Cameras output images. Object Recognition N/A Sensor Fusion relies on fresh and up-to-date data for data synchronization. APER-018 Message rate from CPP #10 to CPP #17 Update rate of data sent from Clustering. Object Recognition 10 Hz Validator relies on the data to be updated at expected frequency for data synchronization. It seems similar to APER-014, but the topic message is different. APER-019 Message rate from CPP #12 to CPP #17 Update rate of data sent from DNN-based Object Recognition. Object Recognition 10 Hz Validator relies on the data to be updated at expected frequency for data synchronization. APER-020 Response time from CPP #6 to CPP #17 via CPP #10 Response time to consume data sent from Clustering after LiDARs output pointcloud. Object Recognition N/A Validator relies on fresh and up-to-date data for data synchronization. It seems similar to APER-015, but the topic message is different. APER-021 Response time from CPP #6 to CPP #17 via CPP #12 Response time to consume data sent from DNN-based Object Recognition after LiDARs output pointcloud. Object Recognition N/A Validator relies on fresh and up-to-date data for data synchronization."},{"location":"how-to-guides/others/defining-temporal-performance-metrics/#detailed-temporal-performance-metrics-for-paths-between-obstacle-segmentation-and-planning","title":"Detailed temporal performance metrics for Paths between Obstacle segmentation and Planning","text":"

    Obstacle segmentation, which is a crucial part of Perception, transmits data to Planning. The figure below illustrates the model that takes into account performance metrics related to Obstacle segmentation and Planning.

    Note

    Both the Obstacle grid map and Obstacle segmentation transmit data to multiple sub-components of Planning. However, not all of these sub-components are described in the model. This is because our primary focus is on the paths from LiDAR to Planning via Obstacle segmentation.

    The following list shows the temporal performance metrics around Obstacle segmentation and Planning.

ID Representation in the model Metric meaning Related functionality Reference value Reason to choose it as a metric Note OSEG-001 Message rate from CPS #4 to CPS #7 Update rate of Occupancy grid map received by Planning (behavior_path_planner). Obstacle segmentation 10 Hz Planning relies on Occupancy grid map to be updated frequently and smoothly for creating accurate trajectory. OSEG-002 Response time from CPS #0 to CPS #9 via CPS #7 Response time to consume Occupancy grid map after LiDARs output sensing data. Obstacle segmentation N/A Planning relies on fresh and up-to-date perceived data from Occupancy grid map for creating accurate trajectory. OSEG-003 Message rate from CPS #6 to CPS #11 Update rate of obstacle segmentation received by Planning (behavior_velocity_planner). Obstacle segmentation 10 Hz Planning relies on Obstacle segmentation to be updated frequently and smoothly for creating accurate trajectory. OSEG-004 Response time from CPS #0 to CPS #13 via CPS #11 Response time to consume Obstacle segmentation after LiDARs output sensing data. Obstacle segmentation N/A Planning relies on fresh and up-to-date perceived data from Obstacle segmentation for creating accurate trajectory."},{"location":"how-to-guides/others/determining-component-dependencies/","title":"Determining component dependencies","text":""},{"location":"how-to-guides/others/determining-component-dependencies/#determining-component-dependencies","title":"Determining component dependencies","text":"

    For any developers who wish to try and deploy Autoware as a microservices architecture, it is necessary to understand the software dependencies, communication, and implemented features of each ROS package/node.

    As an example, the commands necessary to determine the dependencies for the Perception component are shown below.

    "},{"location":"how-to-guides/others/determining-component-dependencies/#perception-component-dependencies","title":"Perception component dependencies","text":"

    To generate a graph of package dependencies, use the following colcon command:

    colcon graph --dot --packages-up-to tier4_perception_launch | dot -Tpng -o graph.png\n

    To generate a list of dependencies, use:

    colcon list --packages-up-to tier4_perception_launch --names-only\n
    colcon list output
    autoware_auto_geometry_msgs\nautoware_auto_mapping_msgs\nautoware_auto_perception_msgs\nautoware_auto_planning_msgs\nautoware_auto_vehicle_msgs\nautoware_cmake\nautoware_lint_common\nautoware_point_types\ncompare_map_segmentation\ndetected_object_feature_remover\ndetected_object_validation\ndetection_by_tracker\neuclidean_cluster\ngrid_map_cmake_helpers\ngrid_map_core\ngrid_map_cv\ngrid_map_msgs\ngrid_map_pcl\ngrid_map_ros\nground_segmentation\nimage_projection_based_fusion\nimage_transport_decompressor\ninterpolation\nkalman_filter\nlanelet2_extension\nlidar_apollo_instance_segmentation\nmap_based_prediction\nmulti_object_tracker\nmussp\nobject_merger\nobject_range_splitter\noccupancy_grid_map_outlier_filter\npointcloud_preprocessor\npointcloud_to_laserscan\nshape_estimation\ntensorrt_yolo\ntier4_autoware_utils\ntier4_debug_msgs\ntier4_pcl_extensions\ntier4_perception_launch\ntier4_perception_msgs\ntraffic_light_classifier\ntraffic_light_map_based_detector\ntraffic_light_ssd_fine_detector\ntraffic_light_visualization\nvehicle_info_util\n

    Tip

    To output a list of modules with their respective paths, run the command above without the --names-only parameter.

    To see which ROS topics are being subscribed and published to, use rqt_graph as follows:

    ros2 launch tier4_perception_launch perception.launch.xml mode:=lidar\nros2 run rqt_graph rqt_graph\n
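As a command-line alternative to rqt_graph, you can inspect the subscriptions and publications of an individual node (the node name is a placeholder):

ros2 node list\nros2 node info <node_name>\n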
    "},{"location":"how-to-guides/others/fixing-dependent-package-versions/","title":"Fixing dependent package versions","text":""},{"location":"how-to-guides/others/fixing-dependent-package-versions/#fixing-dependent-package-versions","title":"Fixing dependent package versions","text":"

    Autoware manages dependent package versions in autoware.repos. For example, let's say you make a branch in autoware.universe and add new features. Suppose you update other dependencies with vcs pull after cutting a branch from autoware.universe. Then the version of autoware.universe you are developing and other dependencies will become inconsistent, and the entire Autoware build will fail. We recommend saving the dependent package versions by executing the following command when starting the development.

    vcs export src --exact > my_autoware.repos\n
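To return to the saved versions later, a minimal sketch re-importing the exported file:

vcs import src < my_autoware.repos\n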
    "},{"location":"how-to-guides/others/lane-detection-methods/","title":"Lane Detection Methods","text":""},{"location":"how-to-guides/others/lane-detection-methods/#lane-detection-methods","title":"Lane Detection Methods","text":""},{"location":"how-to-guides/others/lane-detection-methods/#overview","title":"Overview","text":"

    This document describes some of the most common lane detection methods used in the autonomous driving industry. Lane detection is a crucial task in autonomous driving, as it is used to determine the boundaries of the road and the vehicle's position within the lane.

    "},{"location":"how-to-guides/others/lane-detection-methods/#methods","title":"Methods","text":"

    This document covers the methods under two categories: lane detection methods and multitask detection methods.

    Note

    The results have been obtained using pre-trained models. Training the model with your own data will yield more successful results.

    "},{"location":"how-to-guides/others/lane-detection-methods/#lane-detection-methods_1","title":"Lane Detection Methods","text":""},{"location":"how-to-guides/others/lane-detection-methods/#clrernet","title":"CLRerNet","text":"

This work introduces LaneIoU, which improves confidence score accuracy by considering local lane angles, and CLRerNet, a novel detector leveraging LaneIoU.

    • Paper: CLRerNet: Improving Confidence of Lane Detection with LaneIoU
    • Code: GitHub
    Method Backbone Dataset Confidence Campus Video Road Video CLRerNet dla34 culane 0.4 CLRerNet dla34 culane 0.1 CLRerNet dla34 culane 0.01"},{"location":"how-to-guides/others/lane-detection-methods/#clrnet","title":"CLRNet","text":"

This work introduces the Cross Layer Refinement Network (CLRNet) to fully utilize high-level semantic and low-level detailed features in lane detection. CLRNet detects lanes with high-level features and refines them with low-level details. Additionally, the ROIGather technique and Line IoU loss significantly enhance localization accuracy, outperforming state-of-the-art methods.

    • Paper: CLRNet: Cross Layer Refinement Network for Lane Detection
    • Code: GitHub
    Method Backbone Dataset Confidence Campus Video Road Video CLRNet dla34 culane 0.2 CLRNet dla34 culane 0.1 CLRNet dla34 culane 0.01 CLRNet dla34 llamas 0.4 CLRNet dla34 llamas 0.2 CLRNet dla34 llamas 0.1 CLRNet resnet18 llamas 0.4 CLRNet resnet18 llamas 0.2 CLRNet resnet18 llamas 0.1 CLRNet resnet18 tusimple 0.2 CLRNet resnet18 tusimple 0.1 CLRNet resnet34 culane 0.1 CLRNet resnet34 culane 0.05 CLRNet resnet101 culane 0.2 CLRNet resnet101 culane 0.1"},{"location":"how-to-guides/others/lane-detection-methods/#fenet","title":"FENet","text":"

    This research introduces Focusing Sampling, Partial Field of View Evaluation, Enhanced FPN architecture, and Directional IoU Loss, addressing challenges in precise lane detection for autonomous driving. Experiments show that Focusing Sampling, which emphasizes distant details crucial for safety, significantly improves both benchmark and practical curved/distant lane recognition accuracy over uniform approaches.

    • Paper: FENet: Focusing Enhanced Network for Lane Detection
    • Code: GitHub
    Method Backbone Dataset Confidence Campus Video Road Video FENet v1 dla34 culane 0.2 FENet v1 dla34 culane 0.1 FENet v1 dla34 culane 0.05 FENet v2 dla34 culane 0.2 FENet v2 dla34 culane 0.1 FENet v2 dla34 culane 0.05 FENet v2 dla34 llamas 0.4 FENet v2 dla34 llamas 0.2 FENet v2 dla34 llamas 0.1 FENet v2 dla34 llamas 0.05"},{"location":"how-to-guides/others/lane-detection-methods/#multitask-detection-methods","title":"Multitask Detection Methods","text":""},{"location":"how-to-guides/others/lane-detection-methods/#yolopv2","title":"YOLOPv2","text":"

This work proposes an efficient multi-task learning network for autonomous driving, combining traffic object detection, drivable road area segmentation, and lane detection. The YOLOPv2 model achieves new state-of-the-art performance in accuracy and speed on the BDD100K dataset, halving the inference time compared to previous benchmarks.

    • Paper: YOLOPv2: Better, Faster, Stronger for Panoptic Driving Perception
    • Code: GitHub
    Method Campus Video Road Video YOLOPv2"},{"location":"how-to-guides/others/lane-detection-methods/#hybridnets","title":"HybridNets","text":"

    This work introduces HybridNets, an end-to-end perception network for autonomous driving. It optimizes segmentation heads and box/class prediction networks using a weighted bidirectional feature network. HybridNets achieves good performance on BDD100K and Berkeley DeepDrive datasets, outperforming state-of-the-art methods.

    • Paper: HybridNets: End-to-End Perception Network
    • Code: GitHub
    Method Campus Video Road Video HybridNets"},{"location":"how-to-guides/others/lane-detection-methods/#twinlitenet","title":"TwinLiteNet","text":"

    This work introduces TwinLiteNet, a lightweight model designed for driveable area and lane line segmentation in autonomous driving.

    • Paper : TwinLiteNet: An Efficient and Lightweight Model for Driveable Area and Lane Segmentation in Self-Driving Cars
    • Code: GitHub
    Method Campus Video Road Video Twinlitenet"},{"location":"how-to-guides/others/lane-detection-methods/#citation","title":"Citation","text":"
    @article{honda2023clrernet,\ntitle={CLRerNet: Improving Confidence of Lane Detection with LaneIoU},\nauthor={Hiroto Honda and Yusuke Uchida},\njournal={arXiv preprint arXiv:2305.08366},\nyear={2023},\n}\n
    @InProceedings{Zheng_2022_CVPR,\nauthor    = {Zheng, Tu and Huang, Yifei and Liu, Yang and Tang, Wenjian and Yang, Zheng and Cai, Deng and He, Xiaofei},\ntitle     = {CLRNet: Cross Layer Refinement Network for Lane Detection},\nbooktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},\nmonth     = {June},\nyear      = {2022},\npages     = {898-907}\n}\n
    @article{wang&zhong_2024fenet,\ntitle={FENet: Focusing Enhanced Network for Lane Detection},\nauthor={Liman Wang and Hanyang Zhong},\nyear={2024},\neprint={2312.17163},\narchivePrefix={arXiv},\nprimaryClass={cs.CV}\n}\n
    @misc{vu2022hybridnets,\ntitle={HybridNets: End-to-End Perception Network},\nauthor={Dat Vu and Bao Ngo and Hung Phan},\nyear={2022},\neprint={2203.09035},\narchivePrefix={arXiv},\nprimaryClass={cs.CV}\n}\n
    @INPROCEEDINGS{10288646,\nauthor={Che, Quang-Huy and Nguyen, Dinh-Phuc and Pham, Minh-Quan and Lam, Duc-Khai},\nbooktitle={2023 International Conference on Multimedia Analysis and Pattern Recognition (MAPR)},\ntitle={TwinLiteNet: An Efficient and Lightweight Model for Driveable Area and Lane Segmentation in Self-Driving Cars},\nyear={2023},\nvolume={},\nnumber={},\npages={1-6},\ndoi={10.1109/MAPR59823.2023.10288646}\n}\n
    "},{"location":"how-to-guides/others/planning-evaluation-using-scenarios/","title":"Planning Evaluation Using Scenarios","text":""},{"location":"how-to-guides/others/planning-evaluation-using-scenarios/#planning-evaluation-using-scenarios","title":"Planning Evaluation Using Scenarios","text":""},{"location":"how-to-guides/others/planning-evaluation-using-scenarios/#introduction","title":"Introduction","text":"

    The performance of Autoware's planning stack is evaluated through a series of defined scenarios that outline the behaviors of other road users and specify the success and failure conditions for the autonomous vehicle, referred to as \"Ego.\" These scenarios are specified in a machine-readable format and executed using the Scenario Simulator tool. Below, brief definitions of the three primary tools used to create, execute, and analyze these scenarios are provided.

    • Scenario Editor: A web-based GUI tool used to create and edit scenario files easily.
    • Scenario Simulator: A tool designed to simulate these machine-readable scenarios. It configures the environment, dictates the behavior of other road users, and sets both success and failure conditions.
    • Autoware Evaluator: A web-based CI/CD platform that facilitates the uploading, categorization, and execution of scenarios along with their corresponding map files. It allows for parallel scenario executions and provides detailed results.
    "},{"location":"how-to-guides/others/planning-evaluation-using-scenarios/#creating-scenarios","title":"Creating Scenarios","text":"

    Scenarios are created using the Scenario Editor tool, which provides a user-friendly interface to define road users, their behaviors, and the success and failure conditions for the ego vehicle.

    To demonstrate the scenario creation process, we will create a simple scenario.

    "},{"location":"how-to-guides/others/planning-evaluation-using-scenarios/#scenario-definition","title":"Scenario Definition","text":"
    • Initial condition: The ego vehicle is driving in a straight lane, and a bicycle is stopped in the same lane.
    • Action: When the longitudinal distance between the ego vehicle and the bicycle is less than 10 meters, the bicycle starts moving at a speed of 2 m/s.
    • Success condition: The ego vehicle reaches the goal point.
    • Failure condition: The ego vehicle collides with the bicycle.
    "},{"location":"how-to-guides/others/planning-evaluation-using-scenarios/#scenario-creation","title":"Scenario Creation","text":"
    1. Open the Scenario Editor tool.
2. Load the map from the MAP tab. For this tutorial, we will use the LEO-VM-00001 map, which can be downloaded from the Autoware Evaluator's Maps section.
    3. In the Edit tab, add both the ego vehicle and the bicycle from the Add Entity section. After adding them, select the ego vehicle and set its destination from the Edit tab.

    4. After setting the positions of the bicycle, ego vehicle, and ego vehicle's destination, set the initial velocity of the bicycle to 0 m/s as shown below. Then, click the Scenario tab and click on the Add new act button. Using this, define an action that starts the bicycle moving when the distance condition is satisfied.

    5. As shown below, define a start condition for an action named act_start_bicycle. This condition checks the distance between the ego vehicle and the bicycle. If the distance is less than or equal to 10 meters, the action is triggered.

    6. After the start condition, define an event for this action. Since there are no other conditions for this event, set a dummy condition where SimulationTime is greater than 0, as shown below.

    7. Define an action for this event. This action will set the velocity of the actor, Bicycle0, to 2 m/s.

    8. The scenario is now ready to be tested. You can export the scenario from the Scenario tab by clicking the Export button.

    "},{"location":"how-to-guides/others/planning-evaluation-using-scenarios/#important-notes-for-scenario-creation","title":"Important Notes for Scenario Creation","text":"
• The tutorial above is a simple scenario implementation; for more complex definitions and other road user actions, please refer to the Scenario Simulator documentation.
    • Due to the complexity of the scenario format, support for editing in the GUI is not yet complete for all scenario features. Therefore, if there are any items that cannot be supported or are difficult to edit using the GUI, you can edit and adjust the scenario directly using a text editor. In that case, the schema of the scenario format is checked before saving, so you can edit with confidence.
• The best way to understand other complex scenario implementations is to look at existing scenarios. You can check the scenarios in the Autoware Evaluator to see other implementations.
    "},{"location":"how-to-guides/others/planning-evaluation-using-scenarios/#running-scenarios-using-scenario-simulator","title":"Running Scenarios using Scenario Simulator","text":""},{"location":"how-to-guides/others/planning-evaluation-using-scenarios/#creating-the-scenario-simulator-environment","title":"Creating the Scenario Simulator Environment","text":"
    • Clone the Autoware main repository.
    git clone https://github.com/autowarefoundation/autoware.git\n
    • Create source directory.
    cd autoware\nmkdir src\n
    • Import the necessary repositories both for the Autoware and Scenario Simulator.
    vcs import src < autoware.repos\nvcs import src < simulator.repos\n
    • If you are installing Autoware for the first time, you can automatically install the dependencies by using the provided Ansible script. Please refer to the Autoware source installation page for more information.
    ./setup-dev-env.sh\n
    • Install dependent ROS packages.

    Autoware and Scenario Simulator require some ROS 2 packages in addition to the core components. The tool rosdep allows an automatic search and installation of such dependencies. You might need to run rosdep update before rosdep install.

    source /opt/ros/humble/setup.bash\nrosdep install -y --from-paths src --ignore-src --rosdistro $ROS_DISTRO\n
    • Build the workspace.
    colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release\n
    • If you have any issues with the installation, or if you need more detailed instructions, please refer to the Autoware source installation page.
    "},{"location":"how-to-guides/others/planning-evaluation-using-scenarios/#execute-the-scenario-by-using-scenario-simulator","title":"Execute the scenario by using Scenario Simulator","text":"

    We created an example scenario above and exported it into our local machine. Now, we will execute this scenario using Scenario Simulator.

• Firstly, we should define the lanelet2_map.osm and pointcloud_map.pcd files in the scenario file. By default, the configuration is as follows:
RoadNetwork:\nLogicFile:\nfilepath: lanelet2_map.osm\nSceneGraphFile:\nfilepath: pointcloud_map.pcd\n

You should change the filepath fields according to the map files downloaded earlier. You can find both the lanelet2_map.osm and pointcloud_map.pcd files in the Maps section of the Autoware Evaluator.

    Note: Although the pointcloud_map.pcd file is mandatory, it is not used in the simulation. Therefore, you can define a dummy file for this field.

    • After defining the map files, we can execute the scenario by using the Scenario Simulator. The command is below:
    ros2 launch scenario_test_runner scenario_test_runner.launch.py \\\nrecord:=false \\\nscenario:='/path/to/scenario/sample.yaml' \\\nsensor_model:=sample_sensor_kit \\\nvehicle_model:=sample_vehicle\n
    • Now, the scenario will be executed in the Scenario Simulator. You can see the simulation in the RViz window.
• To see the condition details in the scenario while playing it in RViz, you should enable the Condition Group marker; it is not enabled by default.

• By using the Condition Group marker, you can see which conditions in the scenario are satisfied and which are not, which is helpful when investigating the scenario.

    "},{"location":"how-to-guides/others/planning-evaluation-using-scenarios/#understanding-the-autoware-evaluator","title":"Understanding the Autoware Evaluator","text":"

    Autoware Evaluator is a web-based CI/CD platform that facilitates the uploading, categorization, and execution of scenarios along with their corresponding map files. It allows for parallel scenario executions and provides detailed results.

Note: The Autoware Evaluator is currently a private tool, so you must be invited to the AWF project to access it. Work to make the AWF project public is in progress.

    • Firstly, you should create a free TIER IV account to access the Autoware Evaluator. You can create an account by using the TIER IV account creation page.
• After you have created an account, please reach out to Hiroshi IGATA (hiroshi.igata@tier4.jp), the AWF ODD WG leader, for an invitation to the AWF project of the Autoware Evaluator. It is advisable to join an ODD WG meeting once to briefly introduce yourself and your interest. The ODD WG information is here.

    Let's explore the Autoware Evaluator interface page by page.

    "},{"location":"how-to-guides/others/planning-evaluation-using-scenarios/#scenarios","title":"Scenarios","text":"

In the Scenarios tab, users can view, search, and filter all the uploaded scenarios. If you are looking for a specific scenario, you can use this page's search bar to find it. If a scenario is marked with a tick, it has been reviewed and approved by a manager; otherwise, it is waiting for review.

    "},{"location":"how-to-guides/others/planning-evaluation-using-scenarios/#catalogs","title":"Catalogs","text":"

The Catalogs tab displays scenario catalogs, i.e., groups of suites created for a specific use case. In the Evaluator, users can execute a test for all scenarios in a catalog.

    "},{"location":"how-to-guides/others/planning-evaluation-using-scenarios/#suites","title":"Suites","text":"

A suite is a group of scenarios created for a specific testing purpose; it is the smallest unit of scenario grouping. A suite can be assigned to any catalog to be tested for a use case.

    "},{"location":"how-to-guides/others/planning-evaluation-using-scenarios/#maps","title":"Maps","text":"

    The Maps tab displays a list of maps that are used in the scenarios. Developers can find the map that they need to execute the scenario.

    "},{"location":"how-to-guides/others/planning-evaluation-using-scenarios/#reports","title":"Reports","text":"

    The Reports tab displays a list of results from scenario tests. Here, users can view the most recent test executions.

    The screenshot above illustrates the latest test execution for the public road bus catalog. In the source column, the default Autoware repository with the latest commit is displayed. For a deeper analysis, users can click on the title to view comprehensive results.

Clicking on the title of a test execution leads to a page displaying build logs and individual test results for each suite. By selecting links in the ID column, users can access detailed results for each suite.

    After clicking the ID link, users can view detailed test results for the scenarios within the suite. For instance, in our case, 183 out of 198 scenarios succeeded. This page is a useful resource for developers to further investigate scenarios that failed.

    • ID links to a page which shows detailed test results for the scenario.
    • Scenario Name links to a page displaying the scenario's definition.
    • Labels indicate the conditions under which the scenario failed; if successful, this field remains empty.
    • Status shows the count of failed and succeeded cases within a scenario.

To investigate this specific scenario more deeply, click on the ID link.

For this scenario, three of the four cases failed. Each case corresponds to a different value of the ego_speed parameter. To dig deeper into a failed case, click on its ID link.

This page shows us detailed information about the failed case. We can understand why the case failed by looking at the Message. For our case, the message is Simulation failure: CustomCommandAction typed \"exitFailure\" was triggered by the anonymous Condition (OpenSCENARIO.Storyboard.Story[0].Act[\"_EndCondition\"].ManeuverGroup[0].Maneuver[0].Event[1].StartTrigger.ConditionGroup[1].Condition[0]): The state of StoryboardElement \"act_ego_nostop_check\" (= completeState) is given state completeState?

From this message, we can understand that the action used for checking that the vehicle is not stopped is in the complete state, and the scenario author set this condition as a failure condition. Therefore, the scenario failed.

• To test the scenario locally, you can download the scenario and its map using the links marked in the image above. Running a scenario on a local machine was explained in the previous section.
• To compare test executions and analyze when the scenarios were running successfully, you can use the Compare Reports button on the page that shows the results of previous test executions.

• To see how the scenario failed, you can replay it by clicking the Play button, which shows the executed test.

• a) The bar for rewinding or fast-forwarding the replay
• b) Shows whether the success and failure conditions are satisfied
• c) Shows the state of the actions and events and their start conditions
    "},{"location":"how-to-guides/others/planning-evaluation-using-scenarios/#for-further-support","title":"For Further Support","text":"

Creating scenarios and analyzing them using the Autoware Evaluator can be complicated for new users. If you need further help or support, you can reach out to the Autoware community through the Autoware Discord server and ask questions about the scenario creation process or the Autoware Evaluator in the odd channel. You can also attend the weekly ODD Working Group meeting to get support from the community, and join the ODD WG invitation group to receive the meeting invitations.

• ODD WG meetings are held weekly in a single time slot: Monday, 2:00 pm (UTC).
    "},{"location":"how-to-guides/others/planning-evaluation-using-scenarios/#other-useful-pages","title":"Other Useful Pages","text":"
    • https://docs.web.auto/en/user-manuals/
    • https://tier4.github.io/scenario_simulator_v2-docs/
    "},{"location":"how-to-guides/others/reducing-start-delays/","title":"Reducing start delays on real vehicles","text":""},{"location":"how-to-guides/others/reducing-start-delays/#reducing-start-delays-on-real-vehicles","title":"Reducing start delays on real vehicles","text":"

In simulation, the ego vehicle reacts nearly instantly to the control commands generated by Autoware. However, with a real vehicle, some delays occur that may make the ego vehicle feel less responsive.

    This page presents start delays experienced when using Autoware on a real vehicle. We define the start delay as the time between (a) when Autoware decides to make the ego vehicle start and (b) when the vehicle actually starts moving. More precisely:

    • (a) is the time when the speed or acceleration command output by Autoware switches to a non-zero value.
    • (b) is the time when the measured velocity of the ego vehicle switches to a positive value.
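One way to inspect both timestamps on a real vehicle is to record the command and velocity signals during a start and compare when each first becomes positive. The command below is only a sketch; the topic names are assumptions and should be adjusted to your vehicle interface.
ros2 bag record -o start_delay /control/command/control_cmd /vehicle/status/velocity_status\n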
    "},{"location":"how-to-guides/others/reducing-start-delays/#start-delay-with-manual-driving","title":"Start delay with manual driving","text":"

    First, let us look at the start delay when a human is driving.

The following figure shows the start delay when a human driver switches the gear from parked to drive, instantly releases the brake, and then pushes the throttle pedal to make the velocity of the vehicle increase.

    There are multiple things to note from this figure.

    • Brake (red): despite the driver instantly releasing the brake pedal, we see that the measured brake takes around 150ms to go from 100% to 0%.
    • Gear (orange): the driver switches gear before releasing the brake pedal, but the gear is measured to switch after the brake is released.
    • Throttle (green) and velocity (blue): the driver pushes the throttle pedal and the vehicle is measured to start moving around 500ms later.
    "},{"location":"how-to-guides/others/reducing-start-delays/#filter-delay","title":"Filter delay","text":"

    To guarantee passenger comfort, some Autoware modules implement filters on the jerk of the vehicle, preventing sudden changes in acceleration.

For example, the vehicle_cmd_gate filters the acceleration command generated by the controller and previously introduced significant delays when transitioning between a stop command, where the acceleration is negative, and a move command, where the acceleration is positive. Because of the jerk filter, the transition from negative to positive acceleration was not instantaneous and could take several hundred milliseconds.
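As a back-of-the-envelope illustration (the numbers below are assumptions, not actual vehicle_cmd_gate defaults): with a jerk limit of 1.0 m/s³, ramping the commanded acceleration from -1.0 m/s² up to 0 m/s² alone already takes a full second before any positive acceleration can be commanded.
echo "scale=2; (0 - (-1.0)) / 1.0" | bc  # 1.00 second with an assumed 1.0 m/s^3 jerk limit\n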

    "},{"location":"how-to-guides/others/reducing-start-delays/#gear-delay","title":"Gear delay","text":"

    In many vehicles, it is necessary to change gear before first starting to move the vehicle. When performed autonomously, this gear change can take some significant time. Moreover, as seen from the data recorded with manual driving, the measured gear value may be delayed.

    In Autoware, the controller sends a stopping control command until the gear is changed to the drive state. This means that delays in the gear change and its reported value can greatly impact the start delay. Note that this is only an issue when the vehicle is initially in the parked gear.

    The only way to reduce this delay is by tuning the vehicle to increase the gear change speed or to reduce the delay in the gear change report.

    "},{"location":"how-to-guides/others/reducing-start-delays/#brake-delay","title":"Brake delay","text":"

    In vehicles with a brake pedal, the braking system will often be made of several moving parts which cannot move instantly. Thus, when Autoware sends brake commands to a vehicle, some delays should be expected in the actual brake applied to the wheels.

    This lingering brake may prevent or delay the initial motion of the ego vehicle.

    This delay can be reduced by tuning the vehicle.

    "},{"location":"how-to-guides/others/reducing-start-delays/#throttle-response","title":"Throttle response","text":"

For vehicles with throttle control, one of the main causes of start delays is the throttle response of the vehicle. When the throttle pedal is pushed, the wheels of the vehicle do not instantly start rotating. This is partly due to the inertia of the vehicle, but also to the motor, which may take a significant time to start applying torque to the wheels.

It may be possible to tune some vehicle-side parameters to reduce this delay, but this often comes at the cost of reduced energy efficiency.

On the Autoware side, the only way to decrease this delay is to increase the initial throttle, but this can cause uncomfortably high initial accelerations.

    "},{"location":"how-to-guides/others/reducing-start-delays/#initial-acceleration-and-throttle","title":"Initial acceleration and throttle","text":"

    As we just discussed, for vehicles with throttle control, an increased initial throttle value can reduce the start delay.

Since Autoware outputs an acceleration value, the conversion module raw_vehicle_cmd_converter is used to map the acceleration value from Autoware to a throttle value to be sent to the vehicle. Such a mapping is usually calibrated automatically using the accel_brake_map_calibrator module, but it may produce a low initial throttle, which leads to high start delays.

    In order to increase the initial throttle, there are two options: increase the initial acceleration output by Autoware, or modify the acceleration to throttle mapping.

    The initial acceleration output by Autoware can be tuned in the motion_velocity_smoother with parameters engage_velocity and engage_acceleration. However, the vehicle_cmd_gate applies a filter on the control command to prevent too sudden changes in jerk and acceleration, limiting the maximum allowed acceleration while the ego vehicle is stopped.
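To check where these two parameters are currently set before tuning them, a simple search of the workspace can help. This is only a sketch; it assumes the command is run from the root of your Autoware workspace and that the relevant configuration lives in YAML files under src/.
grep -rn "engage_velocity\|engage_acceleration" src/ --include="*.yaml"\n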

Alternatively, the acceleration mapping can be tuned to increase the throttle corresponding to the initial acceleration. If we look at the example acceleration map below, it performs the following conversion: when the ego velocity is 0 (first column), acceleration values between 0.631 (first row) and 0.836 (second row) are converted to a throttle between 0% and 10%. This means that any initial acceleration below 0.631 m/s² will not produce any throttle. Keep in mind that after tuning the acceleration map, it may be necessary to also update the brake map.

| default | 0 | 1.39 | 2.78 | 4.17 | 5.56 | 6.94 | 8.33 | 9.72 | 11.11 | 12.5 | 13.89 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 0.631 | 0.11 | -0.04 | -0.04 | -0.041 | -0.096 | -0.137 | -0.178 | -0.234 | -0.322 | -0.456 |
| 0.1 | 0.836 | 0.57 | 0.379 | 0.17 | 0.08 | 0.07 | 0.068 | 0.027 | -0.03 | -0.117 | -0.251 |
| 0.2 | 1.129 | 0.863 | 0.672 | 0.542 | 0.4 | 0.38 | 0.361 | 0.32 | 0.263 | 0.176 | 0.042 |
| 0.3 | 1.559 | 1.293 | 1.102 | 0.972 | 0.887 | 0.832 | 0.791 | 0.75 | 0.694 | 0.606 | 0.472 |
| 0.4 | 2.176 | 1.909 | 1.718 | 1.588 | 1.503 | 1.448 | 1.408 | 1.367 | 1.31 | 1.222 | 1.089 |
| 0.5 | 3.027 | 2.76 | 2.57 | 2.439 | 2.354 | 2.299 | 2.259 | 2.218 | 2.161 | 2.074 | 1.94 |
"},{"location":"how-to-guides/others/running-autoware-without-cuda/","title":"Running Autoware without CUDA","text":""},{"location":"how-to-guides/others/running-autoware-without-cuda/#running-autoware-without-cuda","title":"Running Autoware without CUDA","text":"

    Although CUDA installation is recommended to achieve better performance for object detection and traffic light recognition in Autoware Universe, it is possible to run these algorithms without CUDA. The following subsections briefly explain how to run each algorithm in such an environment.

    "},{"location":"how-to-guides/others/running-autoware-without-cuda/#running-2d3d-object-detection-without-cuda","title":"Running 2D/3D object detection without CUDA","text":"

    Autoware Universe's object detection can be run using one of five possible configurations:

    • lidar_centerpoint
    • lidar_apollo_instance_segmentation
    • lidar-apollo + tensorrt_yolo
    • lidar-centerpoint + tensorrt_yolo
    • euclidean_cluster

    Of these five configurations, only the last one (euclidean_cluster) can be run without CUDA. For more details, refer to the euclidean_cluster module's README file.

    "},{"location":"how-to-guides/others/running-autoware-without-cuda/#running-traffic-light-detection-without-cuda","title":"Running traffic light detection without CUDA","text":"

    For traffic light recognition (both detection and classification), there are two modules that require CUDA:

    • traffic_light_ssd_fine_detector
    • traffic_light_classifier

    To run traffic light detection without CUDA, set enable_fine_detection to false in the traffic light launch file. Doing so disables the traffic_light_ssd_fine_detector such that traffic light detection is handled by the map_based_traffic_light_detector module instead.

    To run traffic light classification without CUDA, set use_gpu to false in the traffic light classifier launch file. Doing so will force the traffic_light_classifier to use a different classification algorithm that does not require CUDA or a GPU.
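To locate these flags before editing them, a workspace-wide search can be used. This is only a sketch; it assumes the command is run from the root of your Autoware workspace, and the file patterns are only a guess at where the launch arguments and parameters are defined.
grep -rn "enable_fine_detection\|use_gpu" src/ --include="*.launch.xml" --include="*.yaml"\n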

    "},{"location":"how-to-guides/others/using-divided-map/","title":"Using divided pointcloud map","text":""},{"location":"how-to-guides/others/using-divided-map/#using-divided-pointcloud-map","title":"Using divided pointcloud map","text":"

A divided pointcloud map is necessary when handling a large pointcloud map, in which case Autoware may not be capable of sending the whole map via a ROS 2 topic or loading the whole map into memory. By using a pre-divided map, Autoware will dynamically load the pointcloud map according to the vehicle's position.

    "},{"location":"how-to-guides/others/using-divided-map/#tutorial","title":"Tutorial","text":"

Download sample-map-rosbag_split and place the map under $HOME/autoware_map/.

    gdown -O ~/autoware_map/ 'https://docs.google.com/uc?export=download&id=11tLC9T4MS8fnZ9Wo0D8-Ext7hEDl2YJ4'\nunzip -d ~/autoware_map/ ~/autoware_map/sample-rosbag_split.zip\n
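After extracting, you can quickly confirm that the archive actually contains a divided map. The directory layout below is an assumption based on the sample archive, so adjust the path if your map is stored elsewhere.
ls ~/autoware_map/sample-map-rosbag/pointcloud_map/\n# Expect several *.pcd tiles plus a metadata YAML describing their positions\n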

    Then, you may launch logging_simulator with the following command to load the divided map. Note that you need to specify the map_path and pointcloud_map_file arguments.

    source ~/autoware/install/setup.bash\nros2 launch autoware_launch logging_simulator.launch.xml \\\nmap_path:=$HOME/autoware_map/sample-map-rosbag pointcloud_map_file:=pointcloud_map \\\nvehicle_model:=sample_vehicle_split sensor_model:=sample_sensor_kit\n

    For playing rosbag to simulate Autoware, please refer to the instruction in the tutorial for rosbag replay simulation.

    "},{"location":"how-to-guides/others/using-divided-map/#related-links","title":"Related links","text":"
• For the specific format definition of the divided map, please refer to the Map component design page
• The README of map_loader may provide useful, specific instructions for dividing maps
• When dividing your own pointcloud map, you may use pointcloud_divider, which can divide the map as well as generate the compatible metadata
    "},{"location":"how-to-guides/training-machine-learning-models/training-models/","title":"Training and Deploying Models","text":""},{"location":"how-to-guides/training-machine-learning-models/training-models/#training-and-deploying-models","title":"Training and Deploying Models","text":""},{"location":"how-to-guides/training-machine-learning-models/training-models/#overview","title":"Overview","text":"

Autoware offers a comprehensive array of machine learning models, tailored for a wide range of tasks including 2D and 3D object detection, traffic light recognition, and more. These models have been trained using open-mmlab's extensive repositories. By leveraging the provided scripts and following the training steps, you can train these models on your own dataset, tailoring them to your specific needs.

    Furthermore, you will find the essential conversion scripts to deploy your trained models into Autoware using the mmdeploy repository.

    "},{"location":"how-to-guides/training-machine-learning-models/training-models/#training-traffic-light-classifier-model","title":"Training traffic light classifier model","text":"

The traffic light classifier model within Autoware has been trained using the mmlab/pretrained repository. Autoware offers pretrained models based on the EfficientNet-b1 and MobileNet-v2 architectures. To fine-tune these models, a total of 83,400 images were employed, comprising 58,600 for training, 14,800 for evaluation, and 10,000 for testing. These images represent Japanese traffic lights and come from TIER IV's internal dataset.

| Name | Input Size | Test Accuracy |
| --- | --- | --- |
| EfficientNet-b1 | 128 x 128 | 99.76% |
| MobileNet-v2 | 224 x 224 | 99.81% |

Comprehensive training instructions for the traffic light classifier model are detailed in the README file accompanying the traffic_light_classifier package. These instructions will guide you through the process of training the model using your own dataset. To facilitate your training, we have also provided an example dataset containing three distinct classes (green, yellow, red), which you can leverage during the training process.

    Detailed instructions for training the traffic light classifier model can be found here.

    "},{"location":"how-to-guides/training-machine-learning-models/training-models/#training-centerpoint-3d-object-detection-model","title":"Training CenterPoint 3D object detection model","text":"

The CenterPoint 3D object detection model within Autoware has been trained using the autowarefoundation/mmdetection3d repository.

    To train custom CenterPoint models and convert them into ONNX format for deployment in Autoware, please refer to the instructions provided in the README file included with Autoware's lidar_centerpoint package. These instructions will provide a step-by-step guide for training the CenterPoint model.

    In order to assist you with your training process, we have also included an example dataset in the TIER IV dataset format.

    This dataset contains 600 lidar frames and covers 5 classes, including 6905 cars, 3951 pedestrians, 75 cyclists, 162 buses, and 326 trucks.

    You can utilize this example dataset to facilitate your training efforts.

    "},{"location":"installation/","title":"Installation","text":""},{"location":"installation/#installation","title":"Installation","text":""},{"location":"installation/#target-platforms","title":"Target platforms","text":"

Autoware targets the platforms listed below. The list may change in future versions of Autoware.

The Autoware Foundation provides no support for platforms other than those listed below.

    "},{"location":"installation/#architecture","title":"Architecture","text":"
    • amd64
    • arm64
    "},{"location":"installation/#minimum-hardware-requirements","title":"Minimum hardware requirements","text":"

    Info

    Autoware is scalable and can be customized to work with distributed or less powerful hardware. The minimum hardware requirements given below are just a general recommendation. However, performance will be improved with more cores, RAM and a higher-spec graphics card or GPU core.

Although a GPU is not required to run basic functionality, it is mandatory to enable the following neural-network-related functions:
• LiDAR-based object detection
• Camera-based object detection
• Traffic light detection and classification

    • CPU with 8 cores
    • 16GB RAM
    • [Optional] NVIDIA GPU (4GB RAM)

For details of how to enable object detection and traffic light detection/classification without a GPU, refer to Running Autoware without CUDA.

    "},{"location":"installation/#installing-autoware","title":"Installing Autoware","text":"

    There are two ways to set up Autoware. Choose one according to your preference.

    If any issues occur during installation, refer to the Support page.

    "},{"location":"installation/#1-docker-installation","title":"1. Docker installation","text":"

Autoware's Open AD Kit containers enable you to run Autoware easily on your host machine, ensuring the same environment for all deployments without installing any dependencies. Full Guide on Docker Installation Setup.

Open AD Kit is also the first SOAFEE Blueprint for autonomous driving, offering extensible modular containers that make it easier to run Autoware's AD stack on distributed systems. Full Guide on Open AD Kit Setup.

Developer containers are also available, making it easier to build Autoware from source and ensuring the same environment for all developers.

    "},{"location":"installation/#2-source-installation","title":"2. Source installation","text":"

    Source installation is for the cases where more granular control of the installation environment is needed. It is recommended for experienced users or people who want to customize their environment. Note that some problems may occur depending on your local environment.

    For more information, refer to the source installation guide.

    "},{"location":"installation/#installing-related-tools","title":"Installing related tools","text":"

    Some other tools are required depending on the evaluation you want to do. For example, to run an end-to-end simulation you need to install an appropriate simulator.

    For more information, see here.

    "},{"location":"installation/#additional-settings-for-developers","title":"Additional settings for developers","text":"

    There are also tools and settings for developers, such as Shells or IDEs.

    For more information, see here.

    "},{"location":"installation/additional-settings-for-developers/","title":"Additional settings for developers","text":""},{"location":"installation/additional-settings-for-developers/#additional-settings-for-developers","title":"Additional settings for developers","text":"
    • Console settings for ROS 2
    • Network configuration for ROS 2
    "},{"location":"installation/additional-settings-for-developers/console-settings/","title":"Console settings for ROS 2","text":""},{"location":"installation/additional-settings-for-developers/console-settings/#console-settings-for-ros-2","title":"Console settings for ROS 2","text":""},{"location":"installation/additional-settings-for-developers/console-settings/#colorizing-logger-output","title":"Colorizing logger output","text":"

    By default, ROS 2 logger doesn't colorize the output. To colorize it, add the following to your ~/.bashrc:

    export RCUTILS_COLORIZED_OUTPUT=1\n
    "},{"location":"installation/additional-settings-for-developers/console-settings/#customizing-the-format-of-logger-output","title":"Customizing the format of logger output","text":"

    By default, ROS 2 logger doesn't output detailed information such as file name, function name, or line number. To customize it, add the following to your ~/.bashrc:

    export RCUTILS_CONSOLE_OUTPUT_FORMAT=\"[{severity} {time}] [{name}]: {message} ({function_name}() at {file_name}:{line_number})\"\n

    For more options, see here.

    "},{"location":"installation/additional-settings-for-developers/console-settings/#colorized-googletest-output","title":"Colorized GoogleTest output","text":"

    Add export GTEST_COLOR=1 to your ~/.bashrc.
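For example, you can append it with:
echo 'export GTEST_COLOR=1' >> ~/.bashrc\n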

    For more details, refer to Advanced GoogleTest Topics: Colored Terminal Output.

    This is useful when running tests with colcon test.

    "},{"location":"installation/additional-settings-for-developers/network-configuration/","title":"Network settings for ROS 2 and Autoware","text":""},{"location":"installation/additional-settings-for-developers/network-configuration/#network-settings-for-ros-2-and-autoware","title":"Network settings for ROS 2 and Autoware","text":""},{"location":"installation/additional-settings-for-developers/network-configuration/#single-computer-setup","title":"Single computer setup","text":"
    • DDS settings for ROS 2 and Autoware
    "},{"location":"installation/additional-settings-for-developers/network-configuration/#multi-computer-setup","title":"Multi computer setup","text":"
    • Communicating across multiple computers with CycloneDDS
    "},{"location":"installation/additional-settings-for-developers/network-configuration/dds-settings/","title":"DDS settings for ROS 2 and Autoware","text":""},{"location":"installation/additional-settings-for-developers/network-configuration/dds-settings/#dds-settings-for-ros-2-and-autoware","title":"DDS settings for ROS 2 and Autoware","text":""},{"location":"installation/additional-settings-for-developers/network-configuration/dds-settings/#enable-localhost-only-communication","title":"Enable localhost-only communication","text":"
    1. Enable multicast for lo
    2. Make sure export ROS_LOCALHOST_ONLY=1 is NOT present in .bashrc.
      • See About ROS_LOCALHOST_ONLY environment variable for more information.
    "},{"location":"installation/additional-settings-for-developers/network-configuration/dds-settings/#tune-dds-settings","title":"Tune DDS settings","text":"

Autoware uses DDS for inter-node communication. The ROS 2 documentation recommends that users tune DDS to utilize its full capability.

    Note

    CycloneDDS is the recommended and most tested DDS implementation for Autoware.

    Warning

    If you don't tune these settings, Autoware will fail to receive large data like point clouds or images.

    "},{"location":"installation/additional-settings-for-developers/network-configuration/dds-settings/#tune-system-wide-network-settings","title":"Tune system-wide network settings","text":"

    Set the config file path and enlarge the Linux kernel maximum buffer size before launching Autoware.

    # Increase the maximum receive buffer size for network packets\nsudo sysctl -w net.core.rmem_max=2147483647  # 2 GiB, default is 208 KiB\n\n# IP fragmentation settings\nsudo sysctl -w net.ipv4.ipfrag_time=3  # in seconds, default is 30 s\nsudo sysctl -w net.ipv4.ipfrag_high_thresh=134217728  # 128 MiB, default is 256 KiB\n

    To make it permanent,

    sudo nano /etc/sysctl.d/10-cyclone-max.conf\n

    Paste the following into the file:

    # Increase the maximum receive buffer size for network packets\nnet.core.rmem_max=2147483647  # 2 GiB, default is 208 KiB\n\n# IP fragmentation settings\nnet.ipv4.ipfrag_time=3  # in seconds, default is 30 s\nnet.ipv4.ipfrag_high_thresh=134217728  # 128 MiB, default is 256 KiB\n

    Details of each parameter here is explained in the ROS 2 documentation.

    "},{"location":"installation/additional-settings-for-developers/network-configuration/dds-settings/#validate-the-sysctl-settings","title":"Validate the sysctl settings","text":"
    user@pc$ sysctl net.core.rmem_max net.ipv4.ipfrag_time net.ipv4.ipfrag_high_thresh\nnet.core.rmem_max = 2147483647\nnet.ipv4.ipfrag_time = 3\nnet.ipv4.ipfrag_high_thresh = 134217728\n
    "},{"location":"installation/additional-settings-for-developers/network-configuration/dds-settings/#cyclonedds-configuration","title":"CycloneDDS Configuration","text":"

    Save the following file as ~/cyclonedds.xml.

    <?xml version=\"1.0\" encoding=\"UTF-8\" ?>\n<CycloneDDS xmlns=\"https://cdds.io/config\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"https://cdds.io/config https://raw.githubusercontent.com/eclipse-cyclonedds/cyclonedds/master/etc/cyclonedds.xsd\">\n<Domain Id=\"any\">\n<General>\n<Interfaces>\n<NetworkInterface autodetermine=\"false\" name=\"lo\" priority=\"default\" multicast=\"default\" />\n</Interfaces>\n<AllowMulticast>default</AllowMulticast>\n<MaxMessageSize>65500B</MaxMessageSize>\n</General>\n<Internal>\n<SocketReceiveBufferSize min=\"10MB\"/>\n<Watermarks>\n<WhcHigh>500kB</WhcHigh>\n</Watermarks>\n</Internal>\n</Domain>\n</CycloneDDS>\n

    Then add the following lines to your ~/.bashrc file.

    export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp\n\nexport CYCLONEDDS_URI=file:///absolute/path/to/cyclonedds.xml\n# Replace `/absolute/path/to/cyclonedds.xml` with the actual path to the file.\n# Example: export CYCLONEDDS_URI=file:///home/user/cyclonedds.xml\n

    You can refer to Eclipse Cyclone DDS: Run-time configuration documentation for more details.

    Warning

    RMW_IMPLEMENTATION variable might be already set with Ansible/RMW Implementation.

    Check and remove the duplicate line if necessary.

    "},{"location":"installation/additional-settings-for-developers/network-configuration/dds-settings/#additional-information","title":"Additional information","text":""},{"location":"installation/additional-settings-for-developers/network-configuration/dds-settings/#about-ros_localhost_only-environment-variable","title":"About ROS_LOCALHOST_ONLY environment variable","text":"

    Previously, we used to set export ROS_LOCALHOST_ONLY=1 to enable localhost-only communication. But because of an ongoing issue, this method doesn't work.

    Warning

    Do not set export ROS_LOCALHOST_ONLY=1 in ~/.bashrc.

    If you do so, it will cause an error with RMW.

    Remove it from ~/.bashrc if you have set it.

    "},{"location":"installation/additional-settings-for-developers/network-configuration/dds-settings/#about-ros_domain_id-environment-variable","title":"About ROS_DOMAIN_ID environment variable","text":"

We can also set export ROS_DOMAIN_ID=3 (or any number from 1 to 255; the default is 0) to avoid interference with other ROS 2 nodes on the same network.

But since 255 possible values is a very small range, it might interfere with other computers on the same network unless you make sure everyone has a unique domain ID.

    Another problem is that if someone runs a test that uses ROS 2 launch_testing framework, by default it will use a random domain ID to isolate between tests even on the same machine. See this PR for more details.

    "},{"location":"installation/additional-settings-for-developers/network-configuration/enable-multicast-for-lo/","title":"Enable `multicast` on `lo`","text":""},{"location":"installation/additional-settings-for-developers/network-configuration/enable-multicast-for-lo/#enable-multicast-on-lo","title":"Enable multicast on lo","text":""},{"location":"installation/additional-settings-for-developers/network-configuration/enable-multicast-for-lo/#manually-temporary-solution","title":"Manually (temporary solution)","text":"

    You may just call the following command to enable multicast on the loopback interface.

    sudo ip link set lo multicast on\n

    Warning

    This will be reverted once the computer restarts. To make it permanent, follow the steps below.

    Note

    Here, lo is the loopback interface.

    You can check the interfaces with ip link show.

    You may change lo with the interface you want to enable multicast on.

    "},{"location":"installation/additional-settings-for-developers/network-configuration/enable-multicast-for-lo/#on-startup-with-a-service-permanent-solution","title":"On startup with a service (permanent solution)","text":"
    sudo nano /etc/systemd/system/multicast-lo.service\n

    Paste the following into the file:

    [Unit]\nDescription=Enable Multicast on Loopback\n\n[Service]\nType=oneshot\nExecStart=/usr/sbin/ip link set lo multicast on\n\n[Install]\nWantedBy=multi-user.target\n

Press the following keys in order to save the file in nano:

    1. Ctrl+X
    2. Y
    3. Enter
    # Make it recognized\nsudo systemctl daemon-reload\n\n# Make it run on startup\nsudo systemctl enable multicast-lo.service\n\n# Start it now\nsudo systemctl start multicast-lo.service\n
    "},{"location":"installation/additional-settings-for-developers/network-configuration/enable-multicast-for-lo/#validate","title":"Validate","text":"
    you@pc:~$ sudo systemctl status multicast-lo.service\n\u25cb multicast-lo.service - Enable Multicast on Loopback\n     Loaded: loaded (/etc/systemd/system/multicast-lo.service; enabled; vendor preset: enabled)\n     Active: inactive (dead) since Mon 2024-07-08 12:54:17 +03; 4s ago\n    Process: 22588 ExecStart=/usr/bin/ip link set lo multicast on (code=exited, status=0/SUCCESS)\n   Main PID: 22588 (code=exited, status=0/SUCCESS)\n        CPU: 1ms\n\nTem 08 12:54:17 mfc-leo systemd[1]: Starting Enable Multicast on Loopback...\nTem 08 12:54:17 mfc-leo systemd[1]: multicast-lo.service: Deactivated successfully.\nTem 08 12:54:17 mfc-leo systemd[1]: Finished Enable Multicast on Loopback.\n
    you@pc:~$ ip link show lo\n1: lo: <LOOPBACK,MULTICAST,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\n    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00\n
    "},{"location":"installation/additional-settings-for-developers/network-configuration/enable-multicast-for-lo/#uninstalling-the-service","title":"Uninstalling the service","text":"

    If for some reason you want to uninstall the service, you can do so by following these steps:

    # Stop the service\nsudo systemctl stop multicast-lo.service\n\n# Disable the service from running on startup\nsudo systemctl disable multicast-lo.service\n\n# Remove the service file\nsudo rm /etc/systemd/system/multicast-lo.service\n\n# Reload systemd to apply the changes\nsudo systemctl daemon-reload\n
    "},{"location":"installation/additional-settings-for-developers/network-configuration/multiple-computers/","title":"Communicating across multiple computers with CycloneDDS","text":""},{"location":"installation/additional-settings-for-developers/network-configuration/multiple-computers/#communicating-across-multiple-computers-with-cyclonedds","title":"Communicating across multiple computers with CycloneDDS","text":""},{"location":"installation/additional-settings-for-developers/network-configuration/multiple-computers/#configuring-cyclonedds","title":"Configuring CycloneDDS","text":"

Within the ~/cyclonedds.xml file, the Interfaces section can be configured in various ways to communicate across multiple computers within a network.

    "},{"location":"installation/additional-settings-for-developers/network-configuration/multiple-computers/#automatically-determine-the-network-interface-convenient","title":"Automatically determine the network interface (convenient)","text":"

    With this setting, CycloneDDS will automatically determine the most suitable network interface to use.

    <Interfaces>\n<NetworkInterface autodetermine=\"true\" priority=\"default\" multicast=\"default\" />\n</Interfaces>\n
    "},{"location":"installation/additional-settings-for-developers/network-configuration/multiple-computers/#manually-set-the-network-interface-recommended","title":"Manually set the network interface (recommended)","text":"

    With this setting, you can manually set the network interface to use.

    <Interfaces>\n<NetworkInterface autodetermine=\"false\" name=\"enp38s0\" priority=\"default\" multicast=\"default\" />\n</Interfaces>\n

    Warning

    You should replace enp38s0 with the actual network interface name.

    Note

    ifconfig command can be used to find the network interface name.
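If ifconfig is not available, the iproute2 tools installed by default on Ubuntu can list the interface names as well:
ip -brief link show\n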

    "},{"location":"installation/additional-settings-for-developers/network-configuration/multiple-computers/#time-synchronization","title":"Time synchronization","text":"

    To ensure that the nodes on different computers are synchronized, you should synchronize the time between the computers.

You can use chrony to synchronize the time between computers.
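For example, once chrony is installed and running on each computer, you can check the synchronization status with:
chronyc tracking\n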

    Please refer to this post for more information: Multi PC AWSIM + Autoware Tests #3813

    Warning

    Under Construction

    "},{"location":"installation/autoware/docker-installation/","title":"Open AD Kit: containerized workloads for Autoware","text":""},{"location":"installation/autoware/docker-installation/#open-ad-kit-containerized-workloads-for-autoware","title":"Open AD Kit: containerized workloads for Autoware","text":"

    Open AD Kit offers two types of Docker image to let you get started with Autoware quickly: devel and runtime.

    1. The devel image enables you to develop Autoware without setting up the local development environment.
    2. The runtime image contains only runtime executables and enables you to try out Autoware quickly.

    Info

    Before proceeding, confirm and agree with the NVIDIA Deep Learning Container license. By pulling and using the Autoware Open AD Kit images, you accept the terms and conditions of the license.

    "},{"location":"installation/autoware/docker-installation/#prerequisites","title":"Prerequisites","text":"
    • Docker
    • NVIDIA Container Toolkit (preferred)
    • NVIDIA CUDA 12 compatible GPU Driver (preferred)
    1. Clone autowarefoundation/autoware and move to the directory.

      git clone https://github.com/autowarefoundation/autoware.git\ncd autoware\n
    2. The setup script will install all required dependencies with:

      ./setup-dev-env.sh -y docker\n

      To install without NVIDIA GPU support:

      ./setup-dev-env.sh -y --no-nvidia docker\n

    Info

GPU acceleration is required for some features such as object detection and traffic light detection/classification. For details of how to enable these features without a GPU, refer to Running Autoware without CUDA.

    "},{"location":"installation/autoware/docker-installation/#usage","title":"Usage","text":""},{"location":"installation/autoware/docker-installation/#runtime","title":"Runtime","text":"

    You can use run.sh to run the Autoware runtime container with the map data:

    ./docker/run.sh --map-path path_to_map_data\n

For more launch options, you can append a custom launch command instead of using the default launch command, which is ros2 launch autoware_launch autoware.launch.xml.

    Here is an example of running the runtime container with a custom launch command:

    ./docker/run.sh --map-path ~/autoware_map/sample-map-rosbag ros2 launch autoware_launch planning_simulator.launch.xml map_path:=/autoware_map vehicle_model:=sample_vehicle sensor_model:=sample_sensor_kit\n

    Info

You can use --no-nvidia to run without NVIDIA GPU support, and --headless to run without a display, which means no RViz visualization.
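For example, these options can be combined with the map path from the earlier example; this is only a sketch, so adjust the map path to your setup:
./docker/run.sh --headless --no-nvidia --map-path ~/autoware_map/sample-map-rosbag\n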

    "},{"location":"installation/autoware/docker-installation/#run-the-autoware-tutorials","title":"Run the Autoware tutorials","text":"

    Inside the container, you can run the Autoware tutorials by following these links:

    Planning Simulation

    Rosbag Replay Simulation.

    "},{"location":"installation/autoware/docker-installation/#development-environment","title":"Development environment","text":"
    ./docker/run.sh --devel\n

    Info

By default, the workspace mounted in the container is the current directory (pwd); you can change the workspace path with --workspace path_to_workspace. For development environments without NVIDIA GPU support, use --no-nvidia.

    "},{"location":"installation/autoware/docker-installation/#how-to-set-up-a-workspace","title":"How to set up a workspace","text":"
    1. Create the src directory and clone repositories into it.

      mkdir src\nvcs import src < autoware.repos\n

      If you are an active developer, you may also want to pull the nightly repositories, which contain the latest updates:

      vcs import src < autoware-nightly.repos\n

      \u26a0\ufe0f Note: The nightly repositories are unstable and may contain bugs. Use them with caution.

    2. Update dependent ROS packages.

      The dependencies of Autoware may have changed after the Docker image was created. In that case, you need to run the following commands to update the dependencies.

      # Make sure all ros-$ROS_DISTRO-* packages are upgraded to their latest version\nsudo apt update && sudo apt upgrade\nrosdep update\nrosdep install -y --from-paths src --ignore-src --rosdistro $ROS_DISTRO\n
    3. Build the workspace.

      colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release\n

      If there is any build issue, refer to Troubleshooting.

    To Update the Workspace

    cd autoware\ngit pull\nvcs import src < autoware.repos\n\n# If you are using nightly repositories, also run the following command:\nvcs import src < autoware-nightly.repos\n\nvcs pull src\n# Make sure all ros-$ROS_DISTRO-* packages are upgraded to their latest version\nsudo apt update && sudo apt upgrade\nrosdep update\nrosdep install -y --from-paths src --ignore-src --rosdistro $ROS_DISTRO\n

    It might be the case that dependencies imported via vcs import have been moved/removed. VCStool does not currently handle those cases, so if builds fail after vcs import, cleaning and re-importing all dependencies may be necessary:

    rm -rf src/*\nvcs import src < autoware.repos\n# If you are using nightly repositories, import them as well.\nvcs import src < autoware-nightly.repos\n
    "},{"location":"installation/autoware/docker-installation/#using-vs-code-remote-containers-for-development","title":"Using VS Code remote containers for development","text":"

Using Visual Studio Code with the Remote - Containers extension, you can easily develop Autoware in the containerized environment.

Get the Visual Studio Code Remote - Containers extension, then reopen the workspace in the container by selecting Remote-Containers: Reopen in Container from the Command Palette (F1).

    You can choose Autoware or Autoware-cuda image to develop with or without CUDA support.

    "},{"location":"installation/autoware/docker-installation/#building-docker-images-from-scratch","title":"Building Docker images from scratch","text":"

    If you want to build these images locally for development purposes, run the following command:

    cd autoware/\n./docker/build.sh\n

    To build without CUDA, use the --no-cuda option:

    ./docker/build.sh --no-cuda\n

    To build only development image, use the --devel-only option:

    ./docker/build.sh --devel-only\n

    To specify the platform, use the --platform option:

    ./docker/build.sh --platform linux/amd64\n./docker/build.sh --platform linux/arm64\n
    "},{"location":"installation/autoware/docker-installation/#using-docker-images-other-than-latest","title":"Using Docker images other than latest","text":"

    There are also images versioned based on the date or release tag. Use them when you need a fixed version of the image.

    The list of versions can be found here.

    "},{"location":"installation/autoware/source-installation/","title":"Source installation","text":""},{"location":"installation/autoware/source-installation/#source-installation","title":"Source installation","text":""},{"location":"installation/autoware/source-installation/#prerequisites","title":"Prerequisites","text":"
    • OS

      • Ubuntu 22.04
    • ROS

      • ROS 2 Humble

      For ROS 2 system dependencies, refer to REP-2000.

    • Git
      • Registering SSH keys to GitHub is preferable.
    sudo apt-get -y update\nsudo apt-get -y install git\n

Note: If you wish to use ROS 2 Galactic on Ubuntu 20.04, refer to the installation instructions on the galactic branch, but be aware that the Galactic version of Autoware might not have the latest features.

    "},{"location":"installation/autoware/source-installation/#how-to-set-up-a-development-environment","title":"How to set up a development environment","text":"
    1. Clone autowarefoundation/autoware and move to the directory.

      git clone https://github.com/autowarefoundation/autoware.git\ncd autoware\n
    2. If you are installing Autoware for the first time, you can automatically install the dependencies by using the provided Ansible script.

      ./setup-dev-env.sh\n

      If you encounter any build issues, please consult the Troubleshooting section for assistance.

    Info

    Before installing NVIDIA libraries, please ensure that you have reviewed and agreed to the licenses.

    • CUDA
    • cuDNN
    • TensorRT

    Note

The following items will be installed automatically. If the Ansible script doesn't work, or if you already have different versions of the dependent libraries installed, please install the following items manually.

    • Install Ansible
    • Install Build Tools
    • Install Dev Tools
    • Install gdown
    • Install geographiclib
    • Install pacmod
    • Install the RMW Implementation
    • Install ROS 2
    • Install ROS 2 Dev Tools
    • Install Nvidia CUDA
    • Install Nvidia cuDNN and TensorRT
    • Install the Autoware RViz Theme (only affects Autoware RViz)
    • Download the Artifacts (for perception inference)
    "},{"location":"installation/autoware/source-installation/#how-to-set-up-a-workspace","title":"How to set up a workspace","text":"

    Using Autoware Build GUI

    If you prefer a graphical user interface (GUI) over the command line for launching and managing your simulations, refer to the Using Autoware Build GUI section at the end of this document for a step-by-step guide.

    1. Create the src directory and clone repositories into it.

      Autoware uses vcstool to construct workspaces.

      cd autoware\nmkdir src\nvcs import src < autoware.repos\n

      If you are an active developer, you may also want to pull the nightly repositories, which contain the latest updates:

      vcs import src < autoware-nightly.repos\n

      \u26a0\ufe0f Note: The nightly repositories are unstable and may contain bugs. Use them with caution.

    2. Install dependent ROS packages.

      Autoware requires some ROS 2 packages in addition to the core components. The tool rosdep allows an automatic search and installation of such dependencies. You might need to run rosdep update before rosdep install.

      source /opt/ros/humble/setup.bash\n# Make sure all previously installed ros-$ROS_DISTRO-* packages are upgraded to their latest version\nsudo apt update && sudo apt upgrade\nrosdep update\nrosdep install -y --from-paths src --ignore-src --rosdistro $ROS_DISTRO\n
    3. Install and set up ccache to speed up consecutive builds. (optional but highly recommended)

    4. Build the workspace.

      Autoware uses colcon to build workspaces. For more advanced options, refer to the documentation.

      colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release\n

      If there is any build issue, refer to Troubleshooting.

    5. Follow the steps in Network Configuration before running Autoware.

    6. Apply the settings recommended in Console settings for ROS 2 for a better development experience. (optional)

    "},{"location":"installation/autoware/source-installation/#how-to-update-a-workspace","title":"How to update a workspace","text":"
    1. Update the .repos file.

      cd autoware\ngit pull <remote> <your branch>\n

      <remote> is usually git@github.com:autowarefoundation/autoware.git

    2. Update the repositories.

      vcs import src < autoware.repos\n

      \u26a0\ufe0f If you are using nightly repositories, you can also update them.

      vcs import src < autoware-nightly.repos\n
      vcs pull src\n

      For Git users:

      • vcs import is similar to git checkout.
        • Note that it doesn't pull from the remote.
      • vcs pull is similar to git pull.
        • Note that it doesn't switch branches.

      For more information, refer to the official documentation.

      It might be the case that dependencies imported via vcs import have been moved/removed. VCStool does not currently handle those cases, so if builds fail after vcs import, cleaning and re-importing all dependencies may be necessary:

      rm -rf src/*\nvcs import src < autoware.repos\n

      \u26a0\ufe0f If you are using nightly repositories, import them as well.

      vcs import src < autoware-nightly.repos\n
    3. Install dependent ROS packages.

      source /opt/ros/humble/setup.bash\n# Make sure all previously installed ros-$ROS_DISTRO-* packages are upgraded to their latest version\nsudo apt update && sudo apt upgrade\nrosdep update\nrosdep install -y --from-paths src --ignore-src --rosdistro $ROS_DISTRO\n
    4. Build the workspace.

      colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release\n
    "},{"location":"installation/autoware/source-installation/#using-autoware-build-gui","title":"Using Autoware Build GUI","text":"

    In addition to the traditional command-line methods of building Autoware packages, developers and users can leverage the Autoware Build GUI for a more streamlined and user-friendly experience. This GUI application simplifies the process of building and managing Autoware packages.

    "},{"location":"installation/autoware/source-installation/#integration-with-autoware-source-installation","title":"Integration with Autoware source installation","text":"

    When using the Autoware Build GUI in conjunction with the traditional source installation process:

    • Initial Setup: Follow the standard Autoware source installation guide to set up your environment and workspace.
    • Using the GUI: Once the initial setup is complete, you can use the Autoware Build GUI to manage subsequent builds and package updates.

    This integration offers a more accessible approach to building and managing Autoware packages, catering to both new users and experienced developers.

    "},{"location":"installation/autoware/source-installation/#getting-started-with-autoware-build-gui","title":"Getting started with Autoware Build GUI","text":"
    1. Installation: Ensure you have installed the Autoware Build GUI. Installation instructions.
    2. Launching the App: Once installed, launch the Autoware Build GUI.
    3. Setting Up: Set the path to your Autoware folder within the GUI.
    4. Building Packages: Select the Autoware packages you wish to build and manage the build process through the GUI.

      4.1. Build Configuration: Choose from a list of default build configurations, or select the packages you wish to build manually.

      4.2. Build Options: Choose which build type you wish to use, with ability to specify additional build options.

    5. Save and Load: Save your build configurations for future use, or load a previously saved configuration if you don't wish to build all packages or use one of the default configurations provided.

6. Updating Workspace: Update your Autoware workspace's packages to the latest version using the GUI, or add calibration tools to the workspace.
    "},{"location":"installation/related-tools/","title":"Installation of related tools","text":""},{"location":"installation/related-tools/#installation-of-related-tools","title":"Installation of related tools","text":"

    Warning

    Under Construction

    "},{"location":"models/","title":"Machine learning models","text":""},{"location":"models/#machine-learning-models","title":"Machine learning models","text":"

    The Autoware perception stack uses models for inference. These models are automatically downloaded as part of the setup-dev-env.sh script.

    The models are hosted by Web.Auto.

    Default models directory (data_dir) is ~/autoware_data.

    "},{"location":"models/#download-instructions","title":"Download instructions","text":"

Please follow the autoware download instructions to download the latest models.

The models can also be downloaded manually using download tools such as wget or curl. The latest URLs of the weight files and param files for each model can be found in the autoware main.yaml file.

An example of downloading the lidar_centerpoint model:

    # lidar_centerpoint\n\n$ mkdir -p ~/autoware_data/lidar_centerpoint/\n$ wget -P ~/autoware_data/lidar_centerpoint/ \\\nhttps://awf.ml.dev.web.auto/perception/models/centerpoint/v2/pts_voxel_encoder_centerpoint.onnx \\\nhttps://awf.ml.dev.web.auto/perception/models/centerpoint/v2/pts_backbone_neck_head_centerpoint.onnx \\\nhttps://awf.ml.dev.web.auto/perception/models/centerpoint/v2/pts_voxel_encoder_centerpoint_tiny.onnx \\\nhttps://awf.ml.dev.web.auto/perception/models/centerpoint/v2/pts_backbone_neck_head_centerpoint_tiny.onnx \\\nhttps://awf.ml.dev.web.auto/perception/models/centerpoint/v2/centerpoint_ml_package.param.yaml \\\nhttps://awf.ml.dev.web.auto/perception/models/centerpoint/v2/centerpoint_tiny_ml_package.param.yaml \\\nhttps://awf.ml.dev.web.auto/perception/models/centerpoint/v2/centerpoint_sigma_ml_package.param.yaml \\\nhttps://awf.ml.dev.web.auto/perception/models/centerpoint/v2/detection_class_remapper.param.yaml \\\nhttps://awf.ml.dev.web.auto/perception/models/centerpoint/v2/deploy_metadata.yaml\n
    "},{"location":"reference-hw/","title":"Reference HW Design","text":""},{"location":"reference-hw/#reference-hw-design","title":"Reference HW Design","text":"

This document describes and gives additional information about the sensors and systems supported by the Autoware Universe software.

All equipment listed in this document has available ROS 2 drivers and has been tested in the field by one or more community members in autonomous vehicle and robotics applications.

The listed sensors and systems are not sold, developed, or given direct technical support by the Autoware community. That said, any ROS 2 or Autoware related issue regarding hardware usage can be raised following the community guidelines, which can be found here.

The document consists of the sections listed below:

    • AD COMPUTERs

      • ADLINK In-Vehicle Computers
        • NXP In-Vehicle Computers
        • Neousys In-Vehicle Computers
        • Crystal Rugged In-Vehicle Computers
    • LiDARs

      • Velodyne 3D LiDAR Sensors
      • Robosense 3D LiDAR Sensors
      • HESAI 3D LiDAR Sensors
      • Leishen 3D LiDAR Sensors
      • Livox 3D LiDAR Sensors
      • Ouster 3D LiDAR Sensors
    • RADARs

      • Smartmicro Automotive Radars
      • Aptiv Automotive Radars
      • Continental Engineering Radars
    • CAMERAs

      • FLIR Machine Vision Cameras
      • Lucid Vision Cameras
      • Allied Vision Cameras
      • Tier IV Cameras
      • Neousys Technology Cameras
    • Thermal CAMERAs

      • FLIR Thermal Automotive Dev. Kit
    • IMU, AHRS & GNSS/INS

      • NovAtel GNSS/INS Sensors
      • XSens GNSS/INS & IMU Sensors
      • SBG GNSS/INS & IMU Sensors
      • Applanix GNSS/INS Sensors
      • PolyExplore GNSS/INS Sensors
      • Fix Position GNSS/INS Sensors
    • Vehicle Drive By Wire Suppliers

      • Dataspeed DBW Solutions
      • AStuff Pacmod DBW Solutions
      • Schaeffler-Paravan Space Drive DBW Solutions
    • Vehicle Platform Suppliers

      • PIX MOVING Autonomous Vehicle Solutions
      • Autonomoustuff AV Solutions
      • NAVYA AV Solutions
    • Remote Drive

      • FORT ROBOTICS
      • LOGITECH
    • Full Drivers List
    • AD Sensor Kit Suppliers

      • LEO Drive AD Sensor Kit
      • TIER IV AD Kit
      • RoboSense AD Sensor Kit
    "},{"location":"reference-hw/ad-computers/","title":"AD Computers","text":""},{"location":"reference-hw/ad-computers/#ad-computers","title":"AD Computers","text":""},{"location":"reference-hw/ad-computers/#adlink-in-vehicle-computers","title":"ADLINK In-Vehicle Computers","text":"

    ADLINK solutions that are used for autonomous driving and have been tested by one or more community members are listed below:

    Supported Products List CPU GPU RAM, Interfaces Environmental Autoware Tested (Y/N) AVA-3510 Intel\u00ae Xeon\u00ae E-2278GE Dual MXM RTX 5000 64GB RAM,CAN, USB, 10G Ethernet, DIO, Hot-Swap SSD, USIM 9~36 VDC, MIL-STD-810H,ISO 7637-2 Y SOAFEE\u2019s AVA Developer Platform Ampere Altra ARMv8 optional USB, Ethernet, DIO, M.2 NVMe SSDs 110/220 AC Y RQX-58G 8-core Arm Nvidia Jetson AGX Xavier USB, Ethernet, M.2 NVME SSD, CAN, USIM, GMSL2 Camera support 9~36VDC, IEC 60068-2-64: Operating 3Grms, 5-500 Hz, 3 axes Y RQX-59G 8-core Arm Nvidia Jetson AGX Orin USB, Ethernet, M.2 NVME SSD, CAN, USIM, GMSL2 Camera support 9~36VDC, IEC 60068-2-64: Operating 3Grms, 5-500 Hz, 3 axes -

    Link to company website is here.

    "},{"location":"reference-hw/ad-computers/#nxp-in-vehicle-computers","title":"NXP In-Vehicle Computers","text":"

    NXP solutions that are used for autonomous driving and have been tested by one or more community members are listed below:

    Supported Products List CPU GPU RAM, Interfaces Environmental Autoware Tested (Y/N) BLUEBOX 3.0 16 x Arm\u00ae Cortex\u00ae-A72 Dual RTX 8000 or RTX A6000 16 GB RAM CAN, FlexRay, USB, Ethernet, DIO, SSD ASIL-D -

    Link to company website is here.

    "},{"location":"reference-hw/ad-computers/#neousys-in-vehicle-computers","title":"Neousys In-Vehicle Computers","text":"

    Neousys solutions that are used for autonomous driving and have been tested by one or more community members are listed below:

    Supported Products List CPU GPU RAM, Interfaces Environmental Autoware Tested (Y/N) 8805-GC AMD\u00ae EPYC\u2122 7003 NVIDIA\u00ae RTX A6000/ A4500 512GB CAN, USB, Ethernet, Serial, Easy-Swap SSD 8-48 Volt, Vibration: MIL-STD-810G, Method 514.6, Category 4 Y 10208-GC Intel\u00ae 13th/12th-Gen Core\u2122 Dual 350W NVIDIA\u00ae RTX GPU 64GB CAN, USB, Ethernet, Serial, M2 NVMe SSD 8~48 Volt, Vibration: MIL-STD-810H, Method 514.8, Category 4 Y 9160-GC Intel\u00ae 13th/12th-Gen Core\u2122 NVIDIA\u00ae RTX series up to 130W TDP 64GB CAN, USB, Ethernet, PoE, Serial, two 2.5\" SATA HDD/SSD with RAID, M2 NVMe SSD 8~48 Volt, Vibration: MIL-STD-810G, Method 514.6, Category 4 -

    Link to company website is here.

    "},{"location":"reference-hw/ad-computers/#crystal-rugged-in-vehicle-computers","title":"Crystal Rugged In-Vehicle Computers","text":"

    Crystal Rugged solutions that are used for autonomous driving and have been tested by one or more community members are listed below:

    Supported Products List CPU GPU RAM, Interfaces Environmental Autoware Tested (Y/N) AVC 0161-AC Intel\u00ae Xeon\u00ae Scalable Dual GPU RTX Series 2TB RAM, CAN, USB, Ethernet, Serial, Hot-Swap SSD 10-32 Volt, Vibration: 2 G RMS 10-1000 Hz, 3 axes - AVC0403 Intel\u00ae Xeon\u00ae Scalable or AMD EPYC\u2122 Optional (5 GPU) 2TB RAM, CAN, USB, Ethernet, Serial, Hot-Swap SSD 10-32 Volt, Vibration: 2 G RMS 10-1000 Hz, 3 axes - AVC1322 Intel\u00ae Xeon\u00ae D-1718T or Gen 12/13 Core\u2122 i3/i5/i7 NVIDIA\u00ae Jetson AGX Orin 128 GB DDR4 RAM, USB, Ethernet, Serial, SATA 2.5\u201d SSD 10-36 Volt, Vibration: 5.5g, 5-2,000Hz, 60 min/axis, 3 axis - AVC1753 10th Generation Intel\u00ae Core\u2122 and Xeon\u00ae Optional (1 GPU) 128 GB DDR4 RAM, USB, Ethernet, NVMe U.2 SSD/ 3 SATA SSD 8-36 VDC/ 120-240VAC 50/60Hz, Vibration: 5.5g, 5-2,000Hz, 60 min/axis, 3 axis -

    Link to company website is here.

    "},{"location":"reference-hw/ad-computers/#vecow-in-vehicle-computers","title":"Vecow In-Vehicle Computers","text":"

    Vecow solutions that are used for autonomous driving and have been tested by one or more community members are listed below:

    Supported Products List CPU GPU RAM, Interfaces Environmental Autoware Tested (Y/N) ECX-3800 PEG Intel\u00ae 13th/12th-Gen Core\u2122 200W power of NVIDIA\u00ae or AMD graphics 64GB RAM, CAN, USB, Ethernet, PoE, Serial, M.2/SATA SSD, SIM Card 12-50 Volt, Vibration:MIL-STD810G, Procedure I, 20\u00b0C to 45\u00b0C - IVX-1000 Intel\u00ae 13th/12th-Gen Core\u2122 NVIDIA Quadro\u00ae MXM Graphics 64GB RAM, Ethernet, PoE, Serial, M.2/SATA/mSATA SSD, SIM Card 16-160 Volt, Vibration: IEC 61373 : 2010, 40\u00b0C to 85\u00b0C -

    Link to company website is here.

    "},{"location":"reference-hw/ad_sensor_kit_suppliers/","title":"AD Sensor Kit Suppliers","text":""},{"location":"reference-hw/ad_sensor_kit_suppliers/#ad-sensor-kit-suppliers","title":"AD Sensor Kit Suppliers","text":""},{"location":"reference-hw/ad_sensor_kit_suppliers/#leo-drive-ad-sensor-kit","title":"LEO Drive AD Sensor Kit","text":"

    LEO Drive Autonomy Essentials Kit contents are listed below:

    Supported Products List Camera Lidar GNSS/INS ROS 2 Support Autoware Tested (Y/N) Autonomy Essentials Kit 8x Lucid Vision TRI054S 4x Velodyne Puck, 1x Velodyne Alpha Prime, 1x RoboSense Bpearl 1x SBG Ellipse-D Y Y

    Link to company website: https://leodrive.ai/

    "},{"location":"reference-hw/ad_sensor_kit_suppliers/#tier-iv-ad-kit","title":"TIER IV AD Kit","text":"

    TIER IV sensor fusion system contents are listed below:

    Supported Products List Camera Lidar ECU ROS 2 Support Autoware Tested (Y/N) TIER IV ADK TIER IV C1, C2 HESAI (AT-128, XT-32), Velodyne ADLINK (RQX-58G, AVA-3510) Y Y

    Link to company website: https://sensor.tier4.jp/sensor-fusion-system

    "},{"location":"reference-hw/ad_sensor_kit_suppliers/#robosense-ad-sensor-kit","title":"RoboSense AD Sensor Kit","text":"

    RoboSense L4 sensor fusion solution system contents are listed below:

    Supported Products List Camera Lidar ECU ROS 2 Support Autoware Tested (Y/N) P6 - 4x Automotive Grade Solid-state Lidar Optional - -

    Link to company website: https://www.robosense.ai/en/rslidar/RS-Fusion-P6

    "},{"location":"reference-hw/cameras/","title":"CAMERAs","text":""},{"location":"reference-hw/cameras/#cameras","title":"CAMERAs","text":""},{"location":"reference-hw/cameras/#tier-iv-automotive-hdr-cameras","title":"TIER IV Automotive HDR Cameras","text":"

    TIER IV's Automotive HDR cameras, which have a ROS 2 driver and have been tested by TIER IV, are listed below:

    Supported Products List MP FPS Interface HDR LFM Trigger /Synchronization Ingress Protection ROS 2 Driver Autoware Tested (Y/N) C1 2.5 30 GMSL2 / USB3 Y (120dB) Y Y IP69K Y Y C2 5.4 30 GMSL2 / USB3 Y (120dB) Y Y IP69K Y Y C3 (to be released in 2024) 8.3 30 GMSL2 / TBD Y (120dB) Y Y IP69K Y Y

    Link to ROS 2 driver: https://github.com/tier4/ros2_v4l2_camera

    Link to product support site: TIER IV Edge.Auto documentation

    Link to product web site: TIER IV Automotive Camera Solution

    "},{"location":"reference-hw/cameras/#flir-machine-vision-cameras","title":"FLIR Machine Vision Cameras","text":"

    FLIR Machine Vision cameras that have a ROS 2 driver and have been tested by one or more community members are listed below:

    Supported Products List MP FPS Interface HDR LFM Trigger /Synchronization Ingress Protection ROS 2 Driver Autoware Tested (Y/N) Blackfly S 2.0 5.0 22 95 USB-GigE N/A N/A Y N/A Y - Grasshopper3 2.3 5.0 26 90 USB-GigE N/A N/A Y N/A Y -

    Link to ROS 2 driver: https://github.com/berndpfrommer/flir_spinnaker_ros2

    Link to company website: https://www.flir.eu/iis/machine-vision/

    "},{"location":"reference-hw/cameras/#lucid-vision-cameras","title":"Lucid Vision Cameras","text":"

    Lucid Vision cameras that have a ROS 2 driver and have been tested by one or more community members are listed below:

    Supported Products List MP FPS Interface HDR LFM Trigger /Synchronization Ingress Protection ROS 2 Driver Autoware Tested (Y/N) TRITON 054S 5.4 22 GigE Y Y Y up to IP67 Y Y TRITON 032S 3.2 35.4 GigE N/A N/A Y up to IP67 Y Y

    Link to ROS 2 driver: https://gitlab.com/leo-drive/Drivers/arena_camera

    Link to company website: https://thinklucid.com/triton-gige-machine-vision/

    "},{"location":"reference-hw/cameras/#allied-vision-cameras","title":"Allied Vision Cameras","text":"

    Allied Vision cameras that have a ROS 2 driver and have been tested by one or more community members are listed below:

    Supported Products List MP FPS Interface HDR LFM Trigger /Synchronization Ingress Protection ROS 2 Driver Autoware Tested (Y/N) Mako G319 3.2 37.6 GigE N/A N/A Y N/A Y -

    Link to ROS 2 driver: https://github.com/neil-rti/avt_vimba_camera

    Link to company website: https://www.alliedvision.com/en/products/camera-series/mako-g

    "},{"location":"reference-hw/cameras/#neousys-technology-camera","title":"Neousys Technology Camera","text":"

    Neousys Technology cameras that have a ROS 2 driver and have been tested by one or more community members are listed below:

    Supported Products List MP FPS Interface Sensor Format Lens ROS 2 Driver Autoware Tested (Y/N) AC-IMX390 2.0 30 GMSL2 (over PCIe-GL26 Grabber Card) 1/2.7\u201d 5-axis active adjustment with adhesive dispense Y Y

    Link to ROS 2 driver: https://github.com/ros-drivers/gscam

    Link to company website: https://www.neousys-tech.com/en/

    "},{"location":"reference-hw/full_drivers_list/","title":"Drivers List","text":""},{"location":"reference-hw/full_drivers_list/#drivers-list","title":"Drivers List","text":"

    The table below gathers all of the drivers listed above for easy access, along with additional information:

    Type Maker Driver links License Maintainer Lidar Velodyne, Hesai Link Apache 2 david.wong@tier4.jp, abraham.monrroy@map4.jp Lidar Velodyne Link BSD jwhitley@autonomoustuff.com Lidar Robosense Link BSD zdxiao@robosense.cn Lidar Hesai Link Apache 2 wuxiaozhou@hesaitech.com Lidar Leishen Link - - Lidar Livox Link MIT dev@livoxtech.com Lidar Ouster Link Apache 2 stevenmacenski@gmail.com, tom@boxrobotics.ai Radar smartmicro Link Apache 2 opensource@smartmicro.de Radar Continental Engineering Link Apache 2 abraham.monrroy@tier4.jp, satoshi.tanaka@tier4.jp Camera Flir Link Apache 2 bernd.pfrommer@gmail.com Camera Lucid Vision Link - kcolak@leodrive.ai Camera Allied Vision Link Apache 2 at@email.com Camera Tier IV Link GPL - Camera Neousys Technology Link BSD jbo@jhu.edu GNSS NovAtel Link BSD preed@swri.org GNSS SBG Systems Link MIT support@sbg-systems.com GNSS PolyExplore Link - support@polyexplore.com"},{"location":"reference-hw/imu_ahrs_gnss_ins/","title":"IMU, AHRS & GNSS/INS","text":""},{"location":"reference-hw/imu_ahrs_gnss_ins/#imu-ahrs-gnssins","title":"IMU, AHRS & GNSS/INS","text":""},{"location":"reference-hw/imu_ahrs_gnss_ins/#novatel-gnssins-sensors","title":"NovAtel GNSS/INS Sensors","text":"

    NovAtel GNSS/INS sensors that have a ROS 2 driver and have been tested by one or more community members are listed below:

    Supported Products List INS Rate Roll, Pitch, Yaw Acc. GNSS ROS 2 Driver Autoware Tested (Y/N) PwrPak7D-E2 200 Hz R (0.013\u00b0), P (0.013\u00b0), Y (0.070\u00b0) 20 Hz, L1 / L2 / L5, 555 Channels Y - Span CPT7 200 Hz R (0.01\u00b0), P (0.01\u00b0), Y (0.03\u00b0) 20 Hz, L1 / L2 / L5, 555 Channels Y -

    Link to ROS 2 driver: https://github.com/swri-robotics/novatel_gps_driver/tree/dashing-devel

    Link to company website: https://hexagonpositioning.com/

    "},{"location":"reference-hw/imu_ahrs_gnss_ins/#xsens-gnssins-imu-sensors","title":"XSens GNSS/INS & IMU Sensors","text":"

    XSens GNSS/INS sensors that have a ROS 2 driver and have been tested by one or more community members are listed below:

    Supported Products List INS/IMU Rate Roll, Pitch, Yaw Acc. GNSS ROS 2 Driver Autoware Tested (Y/N) MTi-680G 2 kHz R (0.2\u00b0), P (0.2\u00b0), Y (0.5\u00b0) 5 Hz, L1 / L2, 184 Channels Y - MTi-300 AHRS 2 kHz R (0.2\u00b0), P (0.2\u00b0), Y (1\u00b0) Not Applicable Y -

    Link to ROS 2 driver: http://wiki.ros.org/xsens_mti_driver

    Link to company website: https://www.xsens.com/

    "},{"location":"reference-hw/imu_ahrs_gnss_ins/#sbg-gnssins-imu-sensors","title":"SBG GNSS/INS & IMU Sensors","text":"

    SBG GNSS/INS sensors that have a ROS 2 driver and have been tested by one or more community members are listed below:

    Supported Products List INS/IMU Rate Roll, Pitch, Yaw Acc. GNSS ROS 2 Driver Autoware Tested (Y/N) Ellipse-D 200 Hz, 1 kHz (IMU) R (0.1\u00b0), P (0.1\u00b0), Y (0.05\u00b0) 5 Hz, L1 / L2, 184 Channels Y Y Ellipse-A (AHRS) 200 Hz, 1 kHz (IMU) R (0.1\u00b0), P (0.1\u00b0), Y (0.8\u00b0) Not Applicable Y -

    Link to ROS 2 driver: https://github.com/SBG-Systems/sbg_ros2

    Link to company website: https://www.sbg-systems.com/products/ellipse-series/

    "},{"location":"reference-hw/imu_ahrs_gnss_ins/#applanix-gnssins-sensors","title":"Applanix GNSS/INS Sensors","text":"

    Applanix GNSS/INS sensors that have a ROS 2 driver and have been tested by one or more community members are listed below:

    Supported Products List INS/IMU Rate Roll, Pitch, Yaw Acc. GNSS ROS 2 Driver Autoware Tested (Y/N) POSLVX 200 Hz R (0.03\u00b0), P (0.03\u00b0), Y (0.09\u00b0) L1 / L2 / L5, 336 Channels Y Y POSLV220 200 Hz R (0.02\u00b0), P (0.02\u00b0), Y (0.05\u00b0) L1 / L2 / L5, 336 Channels Y Y

    Link to ROS 2 driver: http://wiki.ros.org/applanix_driver

    Link to company website: https://www.applanix.com/products/poslv.htm

    "},{"location":"reference-hw/imu_ahrs_gnss_ins/#polyexplore-gnssins-sensors","title":"PolyExplore GNSS/INS Sensors","text":"

    PolyExplore GNSS/INS sensors that have a ROS 2 driver and have been tested by one or more community members are listed below:

    Supported Products List INS/IMU Rate Roll, Pitch, Yaw Acc. GNSS ROS 2 Driver Autoware Tested (Y/N) POLYNAV 2000P 100 Hz R (0.01\u00b0), P (0.01\u00b0), Y (0.1\u00b0) L1 / L2240 Channels Y - POLYNAV 2000S 100 Hz R (0.015\u00b0), P (0.015\u00b0), Y (0.08\u00b0) L1 / L240 Channels Y -

    Link to ROS 2 driver: https://github.com/polyexplore/ROS2_Driver

    Link to company website: https://www.polyexplore.com/

    "},{"location":"reference-hw/imu_ahrs_gnss_ins/#fixposition-visual-gnssins-sensors","title":"Fixposition Visual GNSS/INS Sensors","text":"Supported Products List INS/IMU Rate Roll, Pitch, Yaw Acc. GNSS ROS 2 Driver\u00a0 Autoware Tested (Y/N) Vision-RTK 2 200Hz - 5 HzL1 / L2 Y -

    Link to ROS 2 driver: https://github.com/fixposition/fixposition_driver

    Link to company website: https://www.fixposition.com/

    Additional utilities:

    • Fixposition GNSS transformation lib: https://github.com/fixposition/fixposition_gnss_tf
    • Miscellaneous utilities (logging, software update, ...): https://github.com/fixposition/fixposition_utility
    "},{"location":"reference-hw/lidars/","title":"LIDARs","text":""},{"location":"reference-hw/lidars/#lidars","title":"LIDARs","text":""},{"location":"reference-hw/lidars/#velodyne-3d-lidar-sensors","title":"Velodyne 3D LIDAR Sensors","text":"

    Velodyne Lidars that have a ROS 2 driver and have been tested by one or more community members are listed below:

    Supported Products List Range FOV (V), (H) ROS 2 Driver Autoware Tested (Y/N) Alpha Prime 245m (+15\u00b0)/(-25\u00b0), (360\u00b0) Y Y Ultra Puck 200m (+15\u00b0)/(-25\u00b0), (360\u00b0) Y Y Puck 100m (+15\u00b0)/(-15\u00b0), (360\u00b0) Y Y Puck Hi-res 100m (+10\u00b0)/(-10\u00b0), (360\u00b0) Y Y

    Link to ROS 2 drivers:

    • https://github.com/tier4/nebula
    • https://github.com/ros-drivers/velodyne/tree/ros2/velodyne_pointcloud
    • https://gitlab.com/autowarefoundation/autoware.auto/AutowareAuto/-/tree/master/src/drivers/velodyne_nodes
    • https://github.com/autowarefoundation/awf_velodyne/tree/tier4/universe

    Link to company website: https://velodynelidar.com/

    "},{"location":"reference-hw/lidars/#robosense-3d-lidar-sensors","title":"RoboSense 3D LIDAR Sensors","text":"

    RoboSense Lidars that have a ROS 2 driver and have been tested by one or more community members are listed below:

    Supported Products List Range FOV (V), (H) ROS 2 Driver Autoware Tested (Y/N) M1 200m 25\u00b0/120\u00b0 - - E1 30m 90\u00b0/120\u00b0 - - Bpearl 100m 90\u00b0/360\u00b0 Y Y Ruby Plus 250m 40\u00b0/360\u00b0 Y ? Helios 32 150m 70\u00b0/360\u00b0 31\u00b0/360\u00b0 26\u00b0/360\u00b0 Y Y Helios 16 150m 30\u00b0/360\u00b0 Y ?

    Link to ROS 2 driver: https://github.com/RoboSense-LiDAR/rslidar_sdk

    Link to company website: https://www.robosense.ai/

    "},{"location":"reference-hw/lidars/#hesai-3d-lidar-sensors","title":"HESAI 3D LIDAR Sensors","text":"

    Hesai Lidars that have a ROS 2 driver and have been tested by one or more community members are listed below:

    Supported Products List Range FOV (V), (H) ROS 2 Driver Autoware Tested (Y/N) Pandar 128 200m (+15\u00b0)/(-25\u00b0), (360\u00b0) Y - Pandar 64 200m (+15\u00b0)/(-25\u00b0), (360\u00b0) Y Y Pandar 40P 200m (+15\u00b0)/(-25\u00b0), (360\u00b0) Y Y QT 128 50m (-52.6\u00b0/+52.6\u00b0), (360\u00b0) Y Y QT 64 20m (-52.1\u00b0/+52.1\u00b0), (360\u00b0) Y Y AT128 200m (25.4\u00b0), (120\u00b0) Y Y XT32 120m (-16\u00b0/+15\u00b0), (360\u00b0) Y Y XT16 120m (-15\u00b0/+15\u00b0), (360\u00b0) Y - FT120 100m (75\u00b0), (100\u00b0) - - ET25 250m (25\u00b0), (120\u00b0) - -

    Link to ROS 2 drivers:

    • https://github.com/tier4/nebula
    • https://github.com/HesaiTechnology/HesaiLidar_General_ROS

    Link to company website: https://www.hesaitech.com/en/

    "},{"location":"reference-hw/lidars/#leishen-3d-lidar-sensors","title":"Leishen 3D LIDAR Sensors","text":"

    Leishen Lidars that have a ROS 2 driver and have been tested by one or more community members are listed below:

    Supported Products List Range FOV (V), (H) ROS 2 Driver Autoware Tested (Y/N) LS C16 150m (+15\u00b0/-15\u00b0), (360\u00b0) Y - LS C32\u00a0 150m (+15\u00b0/-15\u00b0), (360\u00b0) Y - CH 32 120m (+3.7\u00b0/-6.7\u00b0),(120\u00b0) Y - CH 128 20m (+14\u00b0/-17\u00b0)/(150\u00b0) Y - C32W 160m (+15\u00b0/-55\u00b0), (360\u00b0) Y -

    Link to ROS 2 driver: https://github.com/leishen-lidar

    Link to company website: http://www.lslidar.com/

    "},{"location":"reference-hw/lidars/#livox-3d-lidar-sensors","title":"Livox 3D LIDAR Sensors","text":"

    Livox Lidars that have a ROS 2 driver and have been tested by one or more community members are listed below:

    Supported Products List Range FOV (V), (H) ROS 2 Driver Autoware Tested (Y/N) Horizon 260m (81.7\u00b0), (25.1\u00b0) Y Y Mid-40 260m (38.4\u00b0), Circular Y - Mid-70 90m (70.4\u00b0), (77.2\u00b0) Y - Mid-100 260m (38.4\u00b0), (98.4\u00b0) Y - Mid-360 70m (+52\u00b0/-7\u00b0), (360\u00b0) Y - Avia 190m (70.4\u00b0), Circular Y - HAP 150m (25\u00b0), (120\u00b0) - - Tele-15 320m (16.2\u00b0), (14.5\u00b0) - -

    Link to ROS 2 driver: https://github.com/Livox-SDK/livox_ros2_driver

    Link to company website: https://www.livoxtech.com/

    "},{"location":"reference-hw/lidars/#ouster-3d-lidar-sensors","title":"Ouster 3D LIDAR Sensors","text":"

    Ouster Lidars that have a ROS 2 driver and have been tested by one or more community members are listed below:

    Supported Products List Range FOV (V), (H) ROS 2 Driver Autoware Tested (Y/N) OSDome 45m (180\u00b0), (360\u00b0) Y - OS0 100m (90\u00b0), (360\u00b0) Y - OS1 200m (45\u00b0), (360\u00b0) Y - OS2 400m (22.5\u00b0), (360\u00b0) Y Y

    Link to ROS 2 driver: https://github.com/ros-drivers/ros2_ouster_drivers

    Link to company website: https://ouster.com/

    "},{"location":"reference-hw/radars/","title":"RADARs","text":""},{"location":"reference-hw/radars/#radars","title":"RADARs","text":""},{"location":"reference-hw/radars/#smartmicro-automotive-radars","title":"Smartmicro Automotive Radars","text":"

    Smartmicro Radars that have a ROS 2 driver and have been tested by one or more community members are listed below:

    Supported Products List Range FOV (Azimuth), (Elevation) ROS 2 Driver Autoware Tested (Y/N) DRVEGRD 152 (Dual Mode Medium, Long) M: 0.33...66 m L: 0.9\u2026180 m (100\u00b0), (20\u00b0) Y - DRVEGRD 169 (Ultra-Short, Short, Medium, Long) US: 0.1\u20269.5 m S: 0.2\u202619 m M: 0.6...56 m L: 1.3...130 m US: (140\u00b0), (28\u00b0) S/M/L: (130\u00b0), (15\u00b0) Y - DRVEGRD 171 (Triple Mode Short, Medium Long) S: 0.2...40 m M: 0.5...100 m L: 1.2...240 m (100\u00b0), (20\u00b0) Y -

    Link to ROS 2 driver: https://github.com/smartmicro/smartmicro_ros2_radars

    Link to company website: https://www.smartmicro.com/automotive-radar

    "},{"location":"reference-hw/radars/#aptiv-automotive-radars","title":"Aptiv Automotive Radars","text":"

    Aptiv Radars that have a ROS 2 driver and have been tested by one or more community members are listed below:

    Supported Products List Range FOV (Azimuth), (Elevation) ROS 2 Driver Autoware Tested (Y/N) Aptiv MMR (Dual Mode Short, Long) S: 1...40 m L: 3...160 m Short.: (90\u00b0), (90\u00b0) Long: (90\u00b0), (90\u00b0) Y - Aptiv ESR 2.5 (Dual Mode (Medium, Long)) M: 1...60 m L: 1...175 m Med.: (90\u00b0), (4.4\u00b0) Long: (20\u00b0), (4.4\u00b0) Y -

    Link to company website: https://autonomoustuff.com/products

    "},{"location":"reference-hw/radars/#continental-engineering-radars","title":"Continental Engineering Radars","text":"

    Continental Engineering Radars that have a ROS 2 driver and have been tested by one or more community members are listed below:

    Supported Products List Range FOV (Azimuth), (Elevation) ROS 2 Driver Autoware Tested (Y/N) ARS404 Near: 70m Far: 170m Near: (90\u00b0), (18\u00b0) Far: (18\u00b0), (18\u00b0) - - ARS408 Near: 20m Far: 250m Near: (120\u00b0), (20\u00b0) Far: (18\u00b0), (14\u00b0) - -

    Link to ROS 2 driver: https://github.com/tier4/ars408_driver

    Link to company website: https://conti-engineering.com/components/ars430/

    "},{"location":"reference-hw/remote_drive/","title":"Remote Drive","text":""},{"location":"reference-hw/remote_drive/#remote-drive","title":"Remote Drive","text":""},{"location":"reference-hw/remote_drive/#fort-robotics","title":"FORT ROBOTICS","text":"

    Fort Robotics remote control & E-stop devices that are used for autonomous driving and have been tested by one or more community members are listed below:

    Supported Products Op. Frequency Controller ROS 2 Support Autoware Tested (Y/N) Vehicle Safety Controller with E-stop 900 MHz radio: up to 2 km LOS, 2.4 GHz radio: up to 500 m LOS IP 66 enclosure, built-in emergency stop safety control, (2) 2-axis joysticks, (2) 1-axis finger sticks, (8) buttons - -

    Link to company website: https://fortrobotics.com/vehicle-safety-controller/

    "},{"location":"reference-hw/remote_drive/#logitech","title":"LOGITECH","text":"

    Logitech joysticks that are used for autonomous driving and have been tested by one or more community members are listed below:

    Supported Products Op.Frequency Controller ROS 2 Support Autoware Tested (Y/N) Logitech F-710 2.4 GHz Wireless, 10m range (2) 2-axis joysticks (18) buttons Y Y

    Link to ROS driver: http://wiki.ros.org/joy

    Link to company website: https://www.logitechg.com/en-us/products/gamepads/f710-wireless-gamepad.html

    "},{"location":"reference-hw/thermal_cameras/","title":"Thermal CAMERAs","text":""},{"location":"reference-hw/thermal_cameras/#thermal-cameras","title":"Thermal CAMERAs","text":""},{"location":"reference-hw/thermal_cameras/#flir-thermal-automotive-dev-kit","title":"FLIR Thermal Automotive Dev. Kit","text":"

    FLIR ADK Thermal Vision cameras that have a ROS 2 driver and have been tested by one or more community members are listed below:

    Supported Products List MP FPS Interface Spectral Band FOV ROS 2 Driver Autoware Tested (Y/N) FLIR ADK 640x512 30 USB-GMSL,Ethernet 8-14 um (LWIR) 75\u02da, 50\u02da, 32\u02da, and 24\u02da - -"},{"location":"reference-hw/vehicle_drive_by_wire_suppliers/","title":"Vehicle Drive By Wire Suppliers","text":""},{"location":"reference-hw/vehicle_drive_by_wire_suppliers/#vehicle-drive-by-wire-suppliers","title":"Vehicle Drive By Wire Suppliers","text":""},{"location":"reference-hw/vehicle_drive_by_wire_suppliers/#dataspeed-dbw-solutions","title":"Dataspeed DBW Solutions","text":"

    Dataspeed DBW Controllers that are used for autonomous driving and have been tested by one or more community members are listed below:

    Supported Vehicles Power Remote Control ROS 2 Support Autoware Tested (Y/N) Lincoln MKZ, Nautilus; Ford Fusion, F150, Transit Connect, Ranger; Chrysler Pacifica; Jeep Cherokee; Polaris GEM, RZR; Lincoln Aviator; Jeep Grand Cherokee 12 Channel PDS, 15 A each at 12 V Optional, Available Y -

    Link to company website: https://www.dataspeedinc.com/

    "},{"location":"reference-hw/vehicle_drive_by_wire_suppliers/#astuff-pacmod-dbw-solutions","title":"AStuff Pacmod DBW Solutions","text":"

    Autonomous Stuff Pacmod DBW Controllers that are used for autonomous driving and have been tested by one or more community members are listed below:

    Supported Vehicles Power Remote Control ROS 2 Support Autoware Tested (Y/N) Polaris GEM Series; Polaris eLXD MY 2016+; Polaris Ranger X900; International ProStar; Lexus RX-450h MY; Ford Ranger; Toyota Minivan; Ford Transit; Honda CR-V Power distribution panel Optional, Available Y Y

    Link to company website: https://autonomoustuff.com/platform/pacmod

    "},{"location":"reference-hw/vehicle_drive_by_wire_suppliers/#schaeffler-paravan-space-drive-dbw-solutions","title":"Schaeffler-Paravan Space Drive DBW Solutions","text":"

    Schaeffler-Paravan Space Drive DBW Controllers that are used for autonomous driving and have been tested by one or more community members are listed below:

    Supported Vehicles Power Remote Control ROS 2 Support Autoware Tested (Y/N) Custom Integration with Actuators - Optional, Available Y Y

    Link to company website: https://www.schaeffler-paravan.de/en/products/space-drive-system/

    "},{"location":"reference-hw/vehicle_platform_suppliers/","title":"Vehicle Platform Suppliers","text":""},{"location":"reference-hw/vehicle_platform_suppliers/#vehicle-platform-suppliers","title":"Vehicle Platform Suppliers","text":""},{"location":"reference-hw/vehicle_platform_suppliers/#pix-moving-autonomous-vehicle-solutions","title":"PIX MOVING Autonomous Vehicle Solutions","text":"

    PIX Moving AV solutions that are used for autonomous development and have been tested by one or more community members are listed below:

    Vehicle Types Sensors Integrated Autoware Installed ROS 2 Support Autoware Tested (Y/N) Electric DBW Chassis and Platforms Y Y Y -

    Link to company website: https://www.pixmoving.com/pixkit

    Different sizes of platforms

    "},{"location":"reference-hw/vehicle_platform_suppliers/#autonomoustuff-av-solutions","title":"Autonomoustuff AV Solutions","text":"

    Autonomoustuff platform solutions that are used for autonomous development and have been tested by one or more community members are listed below:

    Vehicle Types Sensors Integrated Autoware Installed ROS 2 Support Autoware Tested (Y/N) Road Vehicles, Golf Carts & Trucks Y Y Y -

    Link to company website: https://autonomoustuff.com/platform

    "},{"location":"support/","title":"Support","text":""},{"location":"support/#support","title":"Support","text":"

    This page explains several support resources.

    • Support guidelines pages explain the support mechanisms and guidelines.
    • Troubleshooting pages explain solutions for common issues.
    • Docs guide pages explain related documentation sites.
    "},{"location":"support/docs-guide/","title":"Docs guide","text":""},{"location":"support/docs-guide/#docs-guide","title":"Docs guide","text":"

    This page explains several documentation sites that are useful for Autoware and ROS development.

    • The Autoware Foundation is the official site of the Autoware Foundation. You can learn about the Autoware community here.
    • Autoware Documentation (this site) is the central documentation site for Autoware maintained by the Autoware community. General software-related information of Autoware is aggregated here.
    • Autoware Universe Documentation has READMEs and design documents of software components.
    • ROS Docs Guide explains the ROS 1 and ROS 2 documentation infrastructure.
    "},{"location":"support/support-guidelines/","title":"Support guidelines","text":""},{"location":"support/support-guidelines/#support-guidelines","title":"Support guidelines","text":"

    This page explains the support mechanisms we provide.

    Warning

    Before asking for help, search and read this documentation site carefully. Also, follow the discussion guidelines for discussions.

    Choose appropriate resources depending on what kind of help you need and read the detailed description in the sections below.

    • Documentation sites
      • Gathering information
    • GitHub Discussions
      • Questions or unconfirmed bugs -> Q&A
      • Feature requests
      • Design discussions
    • GitHub Issues
      • Confirmed bugs
      • Confirmed tasks
    • Discord
      • Instant messaging between contributors
    • ROS Discourse
      • General topics that should be widely announced
    "},{"location":"support/support-guidelines/#guidelines-for-autoware-community-support","title":"Guidelines for Autoware community support","text":"

    If you encounter a problem with Autoware, please follow these steps to seek help:

    "},{"location":"support/support-guidelines/#1-search-for-existing-issues-and-questions","title":"1. Search for existing Issues and Questions","text":"

    Before creating a new issue or question, check if someone else has already reported or asked about the problem. Use the following resources:

    • Issues

      Note that Autoware has multiple repositories listed in autoware.repos. It is recommended to search across all repositories.

    • Questions
    "},{"location":"support/support-guidelines/#2-create-a-new-question-thread","title":"2. Create a new question thread","text":"

    If you don't find an existing issue or question that addresses your problem, create a new question thread:

    • Ask a Question

      If your question is not answered within a week, mention @autoware-maintainers in a post to remind them.

    "},{"location":"support/support-guidelines/#3-participate-in-other-discussions","title":"3. Participate in other discussions","text":"

    You are also welcome to open or join discussions in other categories:

    • Feature requests
    • Design discussions
    "},{"location":"support/support-guidelines/#additional-resources","title":"Additional resources","text":"

    If you are unsure how to create a discussion, refer to the GitHub Docs on creating a new discussion.

    "},{"location":"support/support-guidelines/#documentation-sites","title":"Documentation sites","text":"

    Docs guide shows the list of useful documentation sites. Visit them and see if there is any information related to your problem.

    Note that the documentation sites aren't always up-to-date and perfect. If you find out that some information is wrong, unclear, or missing in Autoware docs, feel free to submit a pull request following the contribution guidelines.

    "},{"location":"support/support-guidelines/#github-discussions","title":"GitHub Discussions","text":"

    GitHub discussions page is the primary place for asking questions and discussing topics related to Autoware.

    Category Description Announcements Official updates and news from the Autoware maintainers Design Discussions on Autoware system and software design Feature requests Suggestions for new features and improvements General General discussions about Autoware Ideas Brainstorming and sharing innovative ideas Polls Community polls and surveys Q&A Questions and answers from the community and developers Show and tell Showcase of projects and achievements TSC meetings Minutes and discussions from TSC(Technical Steering Committee) meetings Working group activities Updates on working group activities Working group meetings Minutes and discussions from working group meetings

    Warning

    GitHub Discussions is not the right place to track tasks or bugs. Use GitHub Issues for that purpose.

    "},{"location":"support/support-guidelines/#github-issues","title":"GitHub Issues","text":"

    GitHub Issues is the designated platform for tracking confirmed bugs, tasks, and enhancements within Autoware's various repositories.

    Follow these guidelines to ensure efficient issue tracking and resolution:

    "},{"location":"support/support-guidelines/#reporting-bugs","title":"Reporting bugs","text":"

    If you encounter a confirmed bug, please report it by creating an issue in the appropriate Autoware repository. Include detailed information such as steps to reproduce, expected outcomes, and actual results to assist maintainers in addressing the issue promptly.

    "},{"location":"support/support-guidelines/#tracking-tasks","title":"Tracking tasks","text":"

    GitHub Issues is also the place for managing tasks including:

    • Refactoring: Propose refactoring existing code to improve efficiency, readability, or maintainability. Clearly describe what and why you propose to refactor.
    • New Features: If you have confirmed the need for a new feature through discussions, use Issues to track its development. Outline the feature's purpose, potential designs, and its intended impact.
    • Documentation: Propose changes to documentation to fix inaccuracies, update outdated content, or add new sections. Specify what changes are needed and why they are important.
    "},{"location":"support/support-guidelines/#creating-an-issue","title":"Creating an issue","text":"

    When creating a new issue, use the following guidelines:

    1. Choose the Correct Repository: If unsure which repository is appropriate, start a discussion in the Q&A category to seek guidance from maintainers.
    2. Use Clear, Concise Titles: Clearly summarize the issue or task in the title for quick identification.
    3. Provide Detailed Descriptions: Include all necessary details to understand the context and scope of the issue. Attach screenshots, error logs, and code snippets where applicable.
    4. Tag Relevant Contributors: Mention contributors or teams that might be impacted by or interested in the issue.
    "},{"location":"support/support-guidelines/#linking-issues-and-pull-requests","title":"Linking issues and pull requests","text":"

    When you start working on an issue, link the related pull request to the issue by mentioning the issue number. This helps maintain a clear and traceable development history.

    For more details, see the Pull Request Guidelines page.

    Warning

    GitHub Issues is not for questions or unconfirmed bugs. If an issue is created for such purposes, it will likely be transferred to GitHub Discussions for further clarification.

    "},{"location":"support/support-guidelines/#discord","title":"Discord","text":"

    Autoware has a Discord server for casual communication between contributors.

    The Autoware Discord server is a good place for the following activities:

    • Introduce yourself to the community.
    • Chat with contributors.
    • Take a quick straw poll.

    Note that it is not the right place to get help for your issues.

    "},{"location":"support/support-guidelines/#ros-discourse","title":"ROS Discourse","text":"

    If you want to widely discuss a topic with the general Autoware and ROS community or ask a question not related to Autoware's bugs, post to the Autoware category on ROS Discourse.

    Warning

    Do not post questions about bugs to ROS Discourse!

    "},{"location":"support/troubleshooting/","title":"Troubleshooting","text":""},{"location":"support/troubleshooting/#troubleshooting","title":"Troubleshooting","text":""},{"location":"support/troubleshooting/#setup-issues","title":"Setup issues","text":""},{"location":"support/troubleshooting/#cuda-related-errors","title":"CUDA-related errors","text":"

    When installing CUDA, errors may occur because of version conflicts. To resolve these types of errors, try one of the following methods:

    • Unhold all CUDA-related libraries and rerun the setup script.

      sudo apt-mark unhold  \\\n\"cuda*\"             \\\n\"libcudnn*\"         \\\n\"libnvinfer*\"       \\\n\"libnvonnxparsers*\" \\\n\"libnvparsers*\"     \\\n\"tensorrt*\"         \\\n\"nvidia*\"\n\n./setup-dev-env.sh\n
    • Uninstall all CUDA-related libraries and rerun the setup script.

      sudo apt purge        \\\n\"cuda*\"             \\\n\"libcudnn*\"         \\\n\"libnvinfer*\"       \\\n\"libnvonnxparsers*\" \\\n\"libnvparsers*\"     \\\n\"tensorrt*\"         \\\n\"nvidia*\"\n\nsudo apt autoremove\n\n./setup-dev-env.sh\n

    Warning

    Note that this may break your system; run it carefully.

    • Run the setup script without installing CUDA-related libraries.

      ./setup-dev-env.sh --no-nvidia\n

    Warning

    Note that some components in Autoware Universe require CUDA, and only the CUDA version in the env file is supported at this time. Autoware may work with other CUDA versions, but those versions are not supported and functionality is not guaranteed.

    "},{"location":"support/troubleshooting/#build-issues","title":"Build issues","text":""},{"location":"support/troubleshooting/#insufficient-memory","title":"Insufficient memory","text":"

    Building Autoware requires a lot of memory, and your machine can freeze or crash if memory runs out during a build. To avoid this problem, 16-32GB of swap should be configured.

    # Optional: Check the current swapfile\nfree -h\n\n# Remove the current swapfile\nsudo swapoff /swapfile\nsudo rm /swapfile\n\n# Create a new swapfile\nsudo fallocate -l 32G /swapfile\nsudo chmod 600 /swapfile\nsudo mkswap /swapfile\nsudo swapon /swapfile\n\n# Optional: Check if the change is reflected\nfree -h\n

    For more detailed configuration steps, along with an explanation of swap, refer to Digital Ocean's \"How To Add Swap Space on Ubuntu 20.04\" tutorial

    If your machine has a large number of CPU cores (more than 64), it might require more memory. A workaround is to limit the number of jobs used while building.

    MAKEFLAGS=\"-j4\" colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release\n

    You can adjust -j4 to any number based on your system. For more details, see the manual page of GNU make.

    By reducing the number of packages built in parallel, you can also reduce the amount of memory used. In the following example, the number of packages built in parallel is set to 1, and the number of jobs used by make is limited to 1.

    MAKEFLAGS=\"-j1\" colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release --parallel-workers 1\n

    Note

    By lowering both the number of packages built in parallel and the number of jobs used by make, you can reduce the memory usage. However, this also means that the build process takes longer.

    "},{"location":"support/troubleshooting/#errors-when-using-the-latest-version-of-autoware","title":"Errors when using the latest version of Autoware","text":"

    If you are working with the latest version of Autoware, issues can occur due to out-of-date software or old build files.

    To resolve these types of problems, first try cleaning your build artifacts and rebuilding:

    rm -rf build/ install/ log/\ncolcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release\n

    If the error is not resolved, remove src/ and update your workspace according to your installation type (Docker / source).

    Warning

    Before removing src/, confirm that there are no modifications in your local environment that you want to keep!

    If errors still persist after trying the steps above, delete the entire workspace, clone the repository once again and restart the installation process.

    rm -rf autoware/\ngit clone https://github.com/autowarefoundation/autoware.git\n
    "},{"location":"support/troubleshooting/#errors-when-using-a-fixed-version-of-autoware","title":"Errors when using a fixed version of Autoware","text":"

    In principle, errors should not occur when using a fixed version. That said, possible causes include:

    • ROS 2 has been updated with breaking changes.
      • For confirmation, check the Packaging and Release Management tag on ROS Discourse.
    • Your local environment is broken.
      • Confirm your .bashrc file, environment variables, and library versions.

    In addition to the causes listed above, there are two common misunderstandings around the use of fixed versions.

    1. You used a fixed version for autowarefoundation/autoware only. All of the repository versions in the .repos file must be specified in order to use a completely fixed version.

    2. You didn't update the workspace after changing the branch of autowarefoundation/autoware. Changing the branch of autowarefoundation/autoware does not affect the files under src/. You have to run the vcs import command to update them.
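    As a reference, the sketch below shows a typical way to refresh the workspace after switching branches; it assumes the workspace root is ~/autoware, that the repository list is autoware.repos, and that ROS 2 Humble is installed (adjust paths and distro to your setup):

    cd ~/autoware\n# Check out / update the repositories to the versions listed in autoware.repos\nvcs import src < autoware.repos\nvcs pull src\n# Install any dependencies that the updated sources require\nsource /opt/ros/humble/setup.bash\nrosdep update\nrosdep install -y --from-paths src --ignore-src --rosdistro $ROS_DISTRO\n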

    "},{"location":"support/troubleshooting/#error-when-building-python-package","title":"Error when building python package","text":"

    During the build, the following issue can occur:

    pkg_resources.extern.packaging.version.InvalidVersion: Invalid version: '0.23ubuntu1'\n

    The error occurs because setuptools versions between 66.0.0 and 67.5.0 enforce that Python packages be PEP 440 conformant. Since version 67.5.1, setuptools has a fallback that makes it possible to work with old packages again.

    The solution is to update setuptools to the newest version with the following command:

    pip install --upgrade setuptools\n
    "},{"location":"support/troubleshooting/#dockerrocker-issues","title":"Docker/rocker issues","text":"

    If any errors occur when running Autoware with Docker or rocker, first confirm that your Docker installation is working correctly by running the following commands:

    docker run --rm -it hello-world\ndocker run --rm -it ubuntu:latest\n

    Next, confirm that you are able to access the base Autoware image that is stored on the GitHub Packages website:

    docker run --rm -it ghcr.io/autowarefoundation/autoware-universe:latest\n
    "},{"location":"support/troubleshooting/#runtime-issues","title":"Runtime issues","text":""},{"location":"support/troubleshooting/#performance-related-issues","title":"Performance related issues","text":"

    Symptoms:

    • Autoware is running slower than expected
    • Messages show up late in RViz2
    • Point clouds are lagging
    • Camera images are lagging behind
    • Point clouds or markers flicker on RViz2
    • When multiple subscribers use the same publishers, the message rate drops

    If you have any of these symptoms, please visit the Performance Troubleshooting page.

    "},{"location":"support/troubleshooting/#map-does-not-display-when-running-the-planning-simulator","title":"Map does not display when running the Planning Simulator","text":"

    When running the Planning Simulator, the most common reason for the map not being displayed in RViz is that the map path has not been specified correctly in the launch command. You can confirm whether this is the case by searching for Could not find lanelet map under {path-to-map-dir}/lanelet2_map.osm errors in the log.
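    For example, you can search recent logs for this message with a command like the one below (a sketch; ~/.ros/log is the default log directory and may differ if ROS_LOG_DIR or ROS_HOME is set):

    grep -r \"Could not find lanelet map\" ~/.ros/log/ | tail -n 5\n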

    Another possible reason is that map loading is taking a long time due to poor DDS performance. For this, please visit the Performance Troubleshooting page.

    "},{"location":"support/troubleshooting/#died-process-issues","title":"Died process issues","text":"

    Some modules may not be launched properly at runtime, and you may see \"process has died\" in your terminal. You can use the gdb tool to locate where the problem is occurring.

    Build the module you wish to analyze with debug flags in your Autoware workspace:

    colcon build --cmake-args -DCMAKE_BUILD_TYPE=Debug --packages-up-to <the modules you wish to analyze> --catkin-skip-building-tests --symlink-install\n

    In this state, when a process dies while Autoware is running, a core file will be created. Remember to remove the size limit of the core file:

    ulimit -c unlimited\n

    Configure the kernel to name core files core.<PID>:

    echo core | sudo tee /proc/sys/kernel/core_pattern\necho -n 1 | sudo tee /proc/sys/kernel/core_uses_pid\n

    Launch Autoware again. When a process dies, a core file will be created. ll -ht helps you check whether it was created.

    Invoke the gdb tool:

    gdb <executable file> <core file>\n#You can find the `<executable file>` in the error message.\n

    bt prints a backtrace of the call stack at the point where the process died. f <frame number> shows the details of a frame, and l shows the corresponding source code.
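    For example, a typical inspection inside gdb might look like the following (a sketch; the frame number 3 is only an illustration):

    (gdb) bt            # print the backtrace of the thread that crashed\n(gdb) f 3           # select frame 3 from the backtrace\n(gdb) l             # list the source code around that frame\n(gdb) info locals   # optionally inspect local variables in the selected frame\n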

    "},{"location":"support/troubleshooting/performance-troubleshooting/","title":"Performance Troubleshooting","text":""},{"location":"support/troubleshooting/performance-troubleshooting/#performance-troubleshooting","title":"Performance Troubleshooting","text":"

    Overall symptoms:

    • Autoware is running slower than expected
    • Messages show up late in RViz2
    • Point clouds are lagging
    • Camera images are lagging behind
    • Point clouds or markers flicker on RViz2
    • When multiple subscribers use the same publishers, the message rate drops
    "},{"location":"support/troubleshooting/performance-troubleshooting/#diagnostic-steps","title":"Diagnostic Steps","text":""},{"location":"support/troubleshooting/performance-troubleshooting/#check-if-multicast-is-enabled","title":"Check if multicast is enabled","text":""},{"location":"support/troubleshooting/performance-troubleshooting/#target-symptoms","title":"Target symptoms","text":"
    • When multiple subscribers use the same publishers, the message rate drops
    "},{"location":"support/troubleshooting/performance-troubleshooting/#diagnosis","title":"Diagnosis","text":"

    Make sure that the multicast is enabled for your interface.

    For example when you run following:

    source /opt/ros/humble/setup.bash\nros2 run demo_nodes_cpp talker\n

    If you get the error message selected interface \"{your-interface-name}\" is not multicast-capable: disabling multicast, this needs to be fixed.

    "},{"location":"support/troubleshooting/performance-troubleshooting/#solution","title":"Solution","text":"

    Follow DDS settings for ROS 2 and Autoware

    Especially the Enable multicast on lo section.
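    As a quick reference, the multicast flag on the loopback interface can usually be enabled with the command below (a sketch based on standard Linux tooling; see the linked DDS settings page for the authoritative steps and for making the change persistent across reboots):

    sudo ip link set lo multicast on\n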

    "},{"location":"support/troubleshooting/performance-troubleshooting/#check-the-compilation-flags","title":"Check the compilation flags","text":""},{"location":"support/troubleshooting/performance-troubleshooting/#target-symptoms_1","title":"Target symptoms","text":"
    • Autoware is running slower than expected
    • Point clouds are lagging
    • When multiple subscribers use the same publishers, the message rate drops even further
    "},{"location":"support/troubleshooting/performance-troubleshooting/#diagnosis_1","title":"Diagnosis","text":"

    Check the ~/.bash_history file to see if any colcon build commands were run without the -DCMAKE_BUILD_TYPE=Release or -DCMAKE_BUILD_TYPE=RelWithDebInfo flags.
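    For example, the following is one way to scan your history for such builds (a sketch; adjust the path if your shell stores its history elsewhere):

    grep \"colcon build\" ~/.bash_history | grep -v \"CMAKE_BUILD_TYPE\"\n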

    Even if a build initially used these flags, compiling the same workspace again without them will still result in a slow build.

    In addition, the nodes will run slowly in general, especially the pointcloud_preprocessor nodes.

    Example issue: issue2597

    "},{"location":"support/troubleshooting/performance-troubleshooting/#solution_1","title":"Solution","text":"
    • Remove the build, install and optionally log folders in the main autoware folder.
    • Compile the Autoware with either Release or RelWithDebInfo tags:

      colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release\n# Or build with debug flags too (comparable performance but you can debug too)\ncolcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=RelWithDebInfo\n
    "},{"location":"support/troubleshooting/performance-troubleshooting/#check-the-dds-settings","title":"Check the DDS settings","text":""},{"location":"support/troubleshooting/performance-troubleshooting/#target-symptoms_2","title":"Target symptoms","text":"
    • Autoware is running slower than expected
    • Messages show up late in RViz2
    • Point clouds are lagging
    • Camera images are lagging behind
    • When multiple subscribers use the same publishers, the message rate drops
    "},{"location":"support/troubleshooting/performance-troubleshooting/#check-the-rmw-ros-middleware-implementation","title":"Check the RMW (ROS Middleware) implementation","text":""},{"location":"support/troubleshooting/performance-troubleshooting/#diagnosis_2","title":"Diagnosis","text":"

    Run following to check the middleware used:

    echo $RMW_IMPLEMENTATION\n

    The return line should be rmw_cyclonedds_cpp. If not, apply the solution.

    If you are using a different DDS middleware, we might not have official support for it just yet.

    "},{"location":"support/troubleshooting/performance-troubleshooting/#solution_2","title":"Solution","text":"

    Add export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp as a separate line in your ~/.bashrc file.

    More details in: CycloneDDS Configuration
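    For example (a sketch; open a new terminal or re-source ~/.bashrc afterwards so the change takes effect):

    echo 'export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp' >> ~/.bashrc\nsource ~/.bashrc\necho $RMW_IMPLEMENTATION  # should now print rmw_cyclonedds_cpp\n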

    "},{"location":"support/troubleshooting/performance-troubleshooting/#check-if-the-cyclonedds-is-configured-correctly","title":"Check if the CycloneDDS is configured correctly","text":""},{"location":"support/troubleshooting/performance-troubleshooting/#diagnosis_3","title":"Diagnosis","text":"

    Run following to check the configuration .xml file of the CycloneDDS:

    echo $CYCLONEDDS_URI\n

    The return line should be a valid path pointing to an .xml file with CycloneDDS configuration.

    Also check if the file is configured correctly:

    cat ${CYCLONEDDS_URI#file://}\n

    This should print the .xml file on the terminal.

    "},{"location":"support/troubleshooting/performance-troubleshooting/#solution_3","title":"Solution","text":"

    Follow CycloneDDS Configuration and make sure:

    • you have export CYCLONEDDS_URI=file:///absolute_path_to_your/cyclonedds.xml as a line in your ~/.bashrc file.
    • you have the cyclonedds.xml with the configuration provided in the documentation.
    "},{"location":"support/troubleshooting/performance-troubleshooting/#check-the-linux-kernel-maximum-buffer-size","title":"Check the Linux kernel maximum buffer size","text":""},{"location":"support/troubleshooting/performance-troubleshooting/#diagnosis_4","title":"Diagnosis","text":"

    Validate the sysctl settings

    "},{"location":"support/troubleshooting/performance-troubleshooting/#solution_4","title":"Solution","text":"

    Tune system-wide network settings
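    As an illustration, the relevant kernel parameter can be inspected and raised as follows (a sketch; the exact values and the persistent sysctl configuration are described on the linked page):

    # Check the current maximum receive buffer size\nsysctl net.core.rmem_max\n# Increase it for the current session (example value only)\nsudo sysctl -w net.core.rmem_max=2147483647\n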

    "},{"location":"support/troubleshooting/performance-troubleshooting/#check-if-localhost-only-communication-for-dds-is-enabled","title":"Check if localhost only communication for DDS is enabled","text":"
    • If you are using a multi-computer setup, please skip this check.
    • Enabling localhost only communication for DDS can help improve the performance of ROS by reducing network traffic and avoiding potential conflicts with other devices on the network.
    "},{"location":"support/troubleshooting/performance-troubleshooting/#target-symptoms_3","title":"Target symptoms","text":"
    • You see topics that shouldn't exist
    • You see point clouds that don't belong to your machine
      • They might be from another computer running ROS 2 on your network
    • Point clouds or markers flicker on RViz2
      • Another publisher (on another machine) may be publishing on the same topic as your node does, causing the flickering.
    "},{"location":"support/troubleshooting/performance-troubleshooting/#diagnosis_5","title":"Diagnosis","text":"

    Run:

    cat ${CYCLONEDDS_URI#file://}\n

    It should print a configuration matching the file provided in DDS settings for ROS 2 and Autoware: CycloneDDS Configuration.

    "},{"location":"support/troubleshooting/performance-troubleshooting/#solution_5","title":"Solution","text":"

    Follow DDS settings for ROS 2 and Autoware: Enable localhost-only communication.

    Also make sure the following returns an empty line:

    echo $ROS_LOCALHOST_ONLY\n
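    If it does not return an empty line, unset the variable in the current shell and remove any export of it from your ~/.bashrc (a sketch):

    unset ROS_LOCALHOST_ONLY\ngrep -n \"ROS_LOCALHOST_ONLY\" ~/.bashrc  # locate and delete the offending line\n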
    "},{"location":"tutorials/","title":"Tutorials","text":""},{"location":"tutorials/#tutorials","title":"Tutorials","text":""},{"location":"tutorials/#simulation-tutorials","title":"Simulation tutorials","text":"

    Simulations provide a way of verifying Autoware's functionality before field testing with an actual vehicle. There are three main types of simulation that can be run ad hoc or via a scenario runner.

    "},{"location":"tutorials/#simulation-methods","title":"Simulation methods","text":""},{"location":"tutorials/#ad-hoc-simulation","title":"Ad hoc simulation","text":"

    Ad hoc simulation is a flexible method for running basic simulations on your local machine, and is the recommended method for anyone new to Autoware.

    "},{"location":"tutorials/#scenario-simulation","title":"Scenario simulation","text":"

    Scenario simulation uses a scenario runner to run more complex simulations based on predefined scenarios. It is often run automatically for continuous integration purposes, but can also be run on a local machine.

    "},{"location":"tutorials/#simulation-types","title":"Simulation types","text":""},{"location":"tutorials/#planning-simulation","title":"Planning simulation","text":"

    Planning simulation uses simple dummy data to test the Planning and Control components - specifically path generation, path following and obstacle avoidance. It verifies that a vehicle can reach a goal destination while avoiding pedestrians and surrounding cars, and is another method for verifying the validity of Lanelet2 maps. It also allows for testing of traffic light handling.

    "},{"location":"tutorials/#how-does-planning-simulation-work","title":"How does planning simulation work?","text":"
    1. Generate a path to the goal destination
    2. Control the car along the generated path
    3. Detect and avoid any humans or other vehicles on the way to the goal destination
    "},{"location":"tutorials/#rosbag-replay-simulation","title":"Rosbag replay simulation","text":"

    Rosbag replay simulation uses prerecorded rosbag data to test the following aspects of the Localization and Perception components:

    • Localization: Estimation of the vehicle's location on the map by matching sensor and vehicle feedback data to the map.
    • Perception: Using sensor data to detect, track and predict dynamic objects such as surrounding cars, pedestrians, and other objects

    By repeatedly playing back the data, this simulation type can also be used for endurance testing.

    "},{"location":"tutorials/#digital-twin-simulation","title":"Digital twin simulation","text":"

    Digital twin simulation is a simulation type that is able to produce realistic data and simulate almost the entire system. It is also commonly referred to as end-to-end simulation.

    "},{"location":"tutorials/#evaluation-tutorials","title":"Evaluation Tutorials","text":""},{"location":"tutorials/#components-evaluation","title":"Components Evaluation","text":"

    Components evaluation tutorials provide a way to evaluate the performance of Autoware's components in a controlled environment.

    "},{"location":"tutorials/#localization-evaluation","title":"Localization Evaluation","text":"

    The Localization Evaluation tutorial provides a way to evaluate the performance of the Localization component.

    "},{"location":"tutorials/#urban-environment-evaluation","title":"Urban Environment Evaluation","text":"

    The Urban Environment Evaluation tutorial provides a way to evaluate the performance of the Localization component in an urban environment. It uses the Istanbul Open Dataset for testing. The test data, map, and test steps are included, and the test results are also shared.

    "},{"location":"tutorials/ad-hoc-simulation/","title":"Ad hoc simulation","text":""},{"location":"tutorials/ad-hoc-simulation/#ad-hoc-simulation","title":"Ad hoc simulation","text":"

    Warning

    Under Construction

    "},{"location":"tutorials/ad-hoc-simulation/planning-simulation/","title":"Planning simulation","text":""},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#planning-simulation","title":"Planning simulation","text":""},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#preparation","title":"Preparation","text":"

    Download and unpack a sample map.

    • You can also download the map manually.
    gdown -O ~/autoware_map/ 'https://docs.google.com/uc?export=download&id=1499_nsbUbIeturZaDj7jhUownh5fvXHd'\nunzip -d ~/autoware_map ~/autoware_map/sample-map-planning.zip\n

    Note

    Sample map: Copyright 2020 TIER IV, Inc.

    Check if you have ~/autoware_data folder and files in it.

    $ cd ~/autoware_data\n$ ls -C -w 30\nimage_projection_based_fusion\nlidar_apollo_instance_segmentation\nlidar_centerpoint\ntensorrt_yolo\ntensorrt_yolox\ntraffic_light_classifier\ntraffic_light_fine_detector\ntraffic_light_ssd_fine_detector\nyabloc_pose_initializer\n

    If not, please, follow Manual downloading of artifacts.

    Change the maximum velocity, which is 15 km/h by default.

    "},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#basic-simulations","title":"Basic simulations","text":"

    Using Autoware Launch GUI

    If you prefer a graphical user interface (GUI) over the command line for launching and managing your simulations, refer to the Using Autoware Launch GUI section at the end of this document for a step-by-step guide.

    "},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#lane-driving-scenario","title":"Lane driving scenario","text":""},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#1-launch-autoware","title":"1. Launch Autoware","text":"
    source ~/autoware/install/setup.bash\nros2 launch autoware_launch planning_simulator.launch.xml map_path:=$HOME/autoware_map/sample-map-planning vehicle_model:=sample_vehicle sensor_model:=sample_sensor_kit\n

    Warning

    Note that you cannot use ~ instead of $HOME here.

    If ~ is used, the map will fail to load.

    "},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#2-set-an-initial-pose-for-the-ego-vehicle","title":"2. Set an initial pose for the ego vehicle","text":"

    a) Click the 2D Pose estimate button in the toolbar, or hit the P key.

    b) In the 3D View pane, click and hold the left-mouse button, and then drag to set the direction for the initial pose. An image representing the vehicle should now be displayed.

    Warning

    Remember to set the initial pose of the car in the same direction as the lane.

    To confirm the direction of the lane, check the arrowheads displayed on the map.

    "},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#3-set-a-goal-pose-for-the-ego-vehicle","title":"3. Set a goal pose for the ego vehicle","text":"

    a) Click the 2D Goal Pose button in the toolbar, or hit the G key.

    b) In the 3D View pane, click and hold the left-mouse button, and then drag to set the direction for the goal pose. If done correctly, you will see a planned path from initial pose to goal pose.

    "},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#4-start-the-ego-vehicle","title":"4. Start the ego vehicle","text":"

    Now you can start the ego vehicle driving by clicking the AUTO button on OperationMode in AutowareStatePanel. Alternatively, you can manually start the vehicle by running the following command:

    source ~/autoware/install/setup.bash\nros2 service call /api/operation_mode/change_to_autonomous autoware_adapi_v1_msgs/srv/ChangeOperationMode {}\n

    After that, the OperationMode display shows AUTONOMOUS and the AUTO button is grayed out.
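
    To confirm the transition from a terminal, or to return the vehicle to STOP mode, a hedged option is to use the AD API; the topic and service names below follow the same /api/operation_mode namespace as the command above but are otherwise assumptions and may differ in your Autoware version.

    # assumed AD API topic: prints the current operation mode once\nros2 topic echo /api/operation_mode/state --once\n# assumed AD API service: returns the vehicle to STOP mode\nros2 service call /api/operation_mode/change_to_stop autoware_adapi_v1_msgs/srv/ChangeOperationMode {}\n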

    "},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#parking-scenario","title":"Parking scenario","text":"
    1. Set an initial pose and a goal pose, and engage the ego vehicle.

    2. When the vehicle approaches the goal, it will switch from lane driving mode to parking mode.

    3. After that, the vehicle will reverse into the destination parking spot.

    "},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#pull-out-and-pull-over-scenario","title":"Pull out and pull over scenario","text":"
    1. In a pull out scenario, set the ego vehicle at the road shoulder.

    2. Set a goal and then engage the ego vehicle.

    3. In a pull over scenario, similarly set the ego vehicle in a lane and set a goal on the road shoulder.

    "},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#lane-change-scenario","title":"Lane change scenario","text":"
    1. Download and unpack the Nishishinjuku map.

      gdown -O ~/autoware_map/ 'https://github.com/tier4/AWSIM/releases/download/v1.1.0/nishishinjuku_autoware_map.zip'\nunzip -d ~/autoware_map ~/autoware_map/nishishinjuku_autoware_map.zip\n
    2. Launch Autoware with the Nishishinjuku map using the following command:

      source ~/autoware/install/setup.bash\nros2 launch autoware_launch planning_simulator.launch.xml map_path:=$HOME/autoware_map/nishishinjuku_autoware_map vehicle_model:=sample_vehicle sensor_model:=sample_sensor_kit\n

    3. Set an initial pose and a goal pose in adjacent lanes.

    4. Engage the ego vehicle. It will make a lane change along the planned path.

    "},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#avoidance-scenario","title":"Avoidance scenario","text":"
    1. Set an initial pose and a goal pose in the same lane. A path will be planned.

    2. Set a \"2D Dummy Bus\" on the roadside. A new path will be planned.

    3. Engage the ego vehicle. It will avoid the obstacle along the newly planned path.

    "},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#advanced-simulations","title":"Advanced Simulations","text":""},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#placing-dummy-objects","title":"Placing dummy objects","text":"
    1. Click the 2D Dummy Car or 2D Dummy Pedestrian button in the toolbar.
    2. Set the pose of the dummy object by clicking and dragging on the map.
    3. Set the velocity of the object in Tool Properties -> 2D Dummy Car/Pedestrian panel.

      Note

      Changes to the velocity parameter will only affect objects placed after the parameter is changed.

    4. Delete any dummy objects placed in the view by clicking the Delete All Objects button in the toolbar.

    5. Click the Interactive button in the toolbar to make the dummy object interactive.

    6. To add an interactive dummy object, press SHIFT and right-click.

    7. To delete an interactive dummy object, press ALT and right-click.
    8. To move an interactive dummy object, hold down the right mouse button and drag the object.

    "},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#traffic-light-recognition-simulation","title":"Traffic light recognition simulation","text":"

    By default, traffic lights on the map are all treated as if they are set to green. As a result, when a path is created that passes through an intersection with a traffic light, the ego vehicle will drive through the intersection without stopping.

    The following steps explain how to set and reset traffic lights in order to test how the Planning component will respond.

    "},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#set-traffic-light","title":"Set traffic light","text":"
    1. Go to Panels -> Add new panel, select TrafficLightPublishPanel, and then press OK.

    2. In TrafficLightPublishPanel, set the ID and color of the traffic light.

    3. Click the SET button.

    4. Finally, click the PUBLISH button to send the traffic light status to the simulator. Any planned path that goes past the selected traffic light will then change accordingly.
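
    If you prefer to check the published state from a terminal rather than in RViz, a hedged approach is to list the traffic-light-related topics and echo the one used by TrafficLightPublishPanel in your setup; the exact topic name varies between Autoware versions, so the command below only narrows down the candidates.

    # narrow down the candidate topics, then echo the one the panel publishes in your version\nros2 topic list | grep traffic_light\n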

    By default, RViz should display the ID of each traffic light on the map. You can have a closer look at the IDs by zooming in on the region or by changing the View type.

    In case the IDs are not displayed, try the following troubleshooting steps:

    a) In the Displays panel, find the traffic_light_id topic by toggling the triangle icons next to Map > Lanelet2VectorMap > Namespaces.

    b) Check the traffic_light_id checkbox.

    c) Reload the topic by clicking the Map checkbox twice.

    "},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#updatereset-traffic-light","title":"Update/Reset traffic light","text":"

    You can update the color of the traffic light by selecting the next color (in the image it is GREEN) and clicking the SET button. In the image, the traffic light in front of the ego vehicle changed from RED to GREEN and the vehicle restarted.

    To remove a traffic light from TrafficLightPublishPanel, click the RESET button.

    Reference video tutorials

    "},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#using-autoware-launch-gui","title":"Using Autoware Launch GUI","text":"

    This section provides a step-by-step guide on using the Autoware Launch GUI for planning simulations, offering an alternative to the command-line instructions provided in the Basic simulations section.

    "},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#getting-started-with-autoware-launch-gui","title":"Getting Started with Autoware Launch GUI","text":"
    1. Installation: Ensure you have installed the Autoware Launch GUI. Installation instructions.

    2. Launching the GUI: Open the Autoware Launch GUI from your applications menu.

    "},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#launching-a-planning-simulation","title":"Launching a Planning Simulation","text":""},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#lane-driving-scenario_1","title":"Lane Driving Scenario","text":"
    1. Set Autoware Path: In the GUI, set the path to your Autoware installation.

    2. Select Launch File: Choose planning_simulator.launch.xml for the lane driving scenario.

    3. Customize Parameters: Adjust parameters such as map_path, vehicle_model, and sensor_model as needed.

    4. Start Simulation: Click the launch button to start the simulation.

    5. Any Scenario: From here, you can follow the instructions in the relevant scenario section:

    • Lane driving scenario: Lane Driving Scenario
    • Parking scenario: Parking scenario
    • Lane change scenario: Lane change scenario
    • Avoidance scenario: Avoidance scenario
    • Advanced Simulations: Advanced Simulations
    "},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#monitoring-and-managing-the-simulation","title":"Monitoring and Managing the Simulation","text":"
    • Real-Time Monitoring: Use the GUI to monitor CPU/Memory usage and Autoware logs in real-time.
    • Profile Management: Save your simulation profiles for quick access in future simulations.
    • Adjusting Parameters: Easily modify simulation parameters on-the-fly through the GUI.
    "},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#want-to-try-autoware-with-your-custom-map","title":"Want to Try Autoware with Your Custom Map?","text":"

    The above content describes the process for conducting some operations in the planning simulator using a sample map. If you are interested in running Autoware with maps of your own environment, please visit the How to Create Vector Map section for guidance.

    "},{"location":"tutorials/ad-hoc-simulation/rosbag-replay-simulation/","title":"Rosbag replay simulation","text":""},{"location":"tutorials/ad-hoc-simulation/rosbag-replay-simulation/#rosbag-replay-simulation","title":"Rosbag replay simulation","text":""},{"location":"tutorials/ad-hoc-simulation/rosbag-replay-simulation/#steps","title":"Steps","text":"
    1. Download and unpack a sample map.

      • You can also download the map manually.
      gdown -O ~/autoware_map/ 'https://docs.google.com/uc?export=download&id=1A-8BvYRX3DhSzkAnOcGWFw5T30xTlwZI'\nunzip -d ~/autoware_map/ ~/autoware_map/sample-map-rosbag.zip\n
    2. Download the sample rosbag files.

      • You can also download the rosbag files manually.
      gdown -O ~/autoware_map/ 'https://docs.google.com/uc?export=download&id=1sU5wbxlXAfHIksuHjP3PyI2UVED8lZkP'\nunzip -d ~/autoware_map/ ~/autoware_map/sample-rosbag.zip\n
    3. Check that you have the ~/autoware_data folder and the files in it.

      $ cd ~/autoware_data\n$ ls -C -w 30\nimage_projection_based_fusion\nlidar_apollo_instance_segmentation\nlidar_centerpoint\ntensorrt_yolo\ntensorrt_yolox\ntraffic_light_classifier\ntraffic_light_fine_detector\ntraffic_light_ssd_fine_detector\nyabloc_pose_initializer\n

      If not, please follow Manual downloading of artifacts.

    "},{"location":"tutorials/ad-hoc-simulation/rosbag-replay-simulation/#note","title":"Note","text":"
    • Sample map and rosbag: Copyright 2020 TIER IV, Inc.
    • Due to privacy concerns, the rosbag does not contain image data (you can confirm the recorded topics with the check shown after this list), which means:
      • Traffic light recognition functionality cannot be tested with this sample rosbag.
      • Object detection accuracy is decreased.
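
    A quick, hedged way to confirm which topics the sample bag actually contains (and that no camera topics are present) is the standard ros2 bag inspection command; the path assumes the bag was unpacked as in step 2 above.

    # lists the recorded topics, message types and message counts\nros2 bag info ~/autoware_map/sample-rosbag/\n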
    "},{"location":"tutorials/ad-hoc-simulation/rosbag-replay-simulation/#how-to-run-a-rosbag-replay-simulation","title":"How to run a rosbag replay simulation","text":"

    Using Autoware Launch GUI

    If you prefer a graphical user interface (GUI) over the command line for launching and managing your simulations, refer to the Using Autoware Launch GUI section at the end of this document for a step-by-step guide.

    1. Launch Autoware.

      source ~/autoware/install/setup.bash\nros2 launch autoware_launch logging_simulator.launch.xml map_path:=$HOME/autoware_map/sample-map-rosbag vehicle_model:=sample_vehicle sensor_model:=sample_sensor_kit\n

      Note that you cannot use ~ instead of $HOME here.

      \u26a0\ufe0f You might encounter error and warning messages in the terminal before playing the rosbag. This is normal behavior; these messages should cease once the rosbag is played and proper initialization takes place.

    2. Play the sample rosbag file.

      source ~/autoware/install/setup.bash\nros2 bag play ~/autoware_map/sample-rosbag/ -r 0.2 -s sqlite3\n

      \u26a0\ufe0f Due to the discrepancy between the timestamp in the rosbag and the current system timestamp, Autoware may generate warning messages in the terminal alerting to this mismatch. This is normal behavior.

    3. To focus the view on the ego vehicle, change the Target Frame in the RViz Views panel from viewer to base_link.

    4. To switch the view to Third Person Follower etc, change the Type in the RViz Views panel.

    Reference video tutorials

    "},{"location":"tutorials/ad-hoc-simulation/rosbag-replay-simulation/#using-autoware-launch-gui","title":"Using Autoware Launch GUI","text":"

    This section provides a step-by-step guide for using the Autoware Launch GUI to launch and manage your rosbag replay simulation, offering an alternative to the command-line instructions provided in the previous section.

    "},{"location":"tutorials/ad-hoc-simulation/rosbag-replay-simulation/#getting-started-with-autoware-launch-gui","title":"Getting Started with Autoware Launch GUI","text":"
    1. Installation: Ensure you have installed the Autoware Launch GUI. Installation instructions.

    2. Launching the GUI: Open the Autoware Launch GUI from your applications menu.

    "},{"location":"tutorials/ad-hoc-simulation/rosbag-replay-simulation/#launching-a-logging-simulation","title":"Launching a Logging Simulation","text":"
    1. Set Autoware Path: In the GUI, set the path to your Autoware installation.
    2. Select Launch File: Choose logging_simulator.launch.xml for the lane driving scenario.
    3. Customize Parameters: Adjust parameters such as map_path, vehicle_model, and sensor_model as needed.

    4. Start Simulation: Click the launch button to start the simulation and have access to all the logs.

    5. Play Rosbag: Move to the Rosbag tab and select the rosbag file you wish to play.

    6. Adjust Playback Speed: Adjust the playback speed as needed and any other parameters you wish to customize.

    7. Start Playback: Click the play button to start the rosbag playback and have access to settings such as pause/play, stop, and the speed slider.

    8. View Simulation: Move to the RViz window to view the simulation.

    9. To focus the view on the ego vehicle, change the Target Frame in the RViz Views panel from viewer to base_link.

    10. To switch the view to Third Person Follower etc, change the Type in the RViz Views panel.

    "},{"location":"tutorials/ad-hoc-simulation/digital-twin-simulation/MORAI_Sim-tutorial/","title":"MORAI Sim: Drive","text":""},{"location":"tutorials/ad-hoc-simulation/digital-twin-simulation/MORAI_Sim-tutorial/#morai-sim-drive","title":"MORAI Sim: Drive","text":"

    Note

    Any kind of for-profit activity with the trial version of the MORAI SIM:Drive is strictly prohibited.

    "},{"location":"tutorials/ad-hoc-simulation/digital-twin-simulation/MORAI_Sim-tutorial/#hardware-requirements","title":"Hardware requirements","text":"Minimum PC Specs OS Windows 10, Ubuntu 20.04, Ubuntu 18.04, Ubuntu 16.04 CPU Intel i5-9600KF or AMD Ryzen 5 3500X RAM DDR4 16GB GPU RTX2060 Super Required PC Specs OS Windows 10, Ubuntu 20.04, Ubuntu 18.04, Ubuntu 16.04 CPU Intel i9-9900K or AMD Ryzen 7 3700X (or higher) RAM DDR4 64GB (or higher) GPU RTX2080Ti or higher"},{"location":"tutorials/ad-hoc-simulation/digital-twin-simulation/MORAI_Sim-tutorial/#application-and-download","title":"Application and Download","text":"

    For AWF developers only, a 3-month trial license can be issued. Download the application form and send it to Hyeongseok Jeon.

    After the trial license is issued, you can log in to MORAI Sim:Drive via the Launchers (Windows/Ubuntu).

    CAUTION: Do not use the Launchers described in the following manuals.

    "},{"location":"tutorials/ad-hoc-simulation/digital-twin-simulation/MORAI_Sim-tutorial/#technical-documents","title":"Technical Documents","text":"

    As of Oct. 2022, our simulation version is ver.22.R3, but the English manual is still under construction.

    Be aware that the following manuals are for ver.22.R2

    • MORAI Sim:Drive Manual
    • ITRI BUS Odd tutorial
    • Tutorial for rosbag replay with Tacoma Airport
    "},{"location":"tutorials/ad-hoc-simulation/digital-twin-simulation/MORAI_Sim-tutorial/#technical-support","title":"Technical Support","text":"

    Hyeongseok Jeon will give full technical support

    • hsjeon@morai.ai
    • Hyeongseok Jeon#2355 in Discord
    "},{"location":"tutorials/ad-hoc-simulation/digital-twin-simulation/awsim-tutorial/","title":"AWSIM simulator","text":""},{"location":"tutorials/ad-hoc-simulation/digital-twin-simulation/awsim-tutorial/#awsim-simulator","title":"AWSIM simulator","text":"

    AWSIM is a simulator for Autoware development and testing, initially developed by TIER IV and still actively maintained.

    AWSIM Labs is a fork of AWSIM, developed under the Autoware Foundation, providing additional features and lighter resource usage.

    "},{"location":"tutorials/ad-hoc-simulation/digital-twin-simulation/awsim-tutorial/#feature-differences-from-the-awsim-and-awsim-labs","title":"Feature differences from the AWSIM and AWSIM Labs","text":""},{"location":"tutorials/ad-hoc-simulation/digital-twin-simulation/awsim-tutorial/#simulator-features","title":"Simulator Features","text":"Simulator Features AWSIM 1.2.3 AWSIM Labs 1.4.2 Rendering Pipeline HDRP URP Resource usage Heavy \ud83d\udc22 Light \ud83d\udc07 Can toggle vehicle keyboard control from GUI \u2705 \u2705 Radar sensor support \u2705 \u2705 Lidar sensor multiple returns support \u2705 \u274c Raw radar output \u2705 \u274c Lidar snow energy loss feature \u2705 \u274c Physically based vehicle dynamics simulation (VPP integration) \u274c \u2705 Can reset vehicle position on runtime \u274c \u2705 Select maps and vehicles at startup \u274c \u2705 Scenario simulator integrated into the same binary \u274c \u2705 Multi-lidars are enabled by default \u274c \u2705 Set vehicle pose and spawn objects from RViz2 \u274c \u2705 Visualize multiple cameras and move them dynamically \u274c \u2705 Turn sensors on/off during runtime \u274c \u2705 Graphics quality settings (Low/Medium/Ultra) \u274c \u2705 Bird\u2019s eye view camera option \u274c \u2705 Works with the latest Autoware main branch \u274c \u2705"},{"location":"tutorials/ad-hoc-simulation/digital-twin-simulation/awsim-tutorial/#development-features","title":"Development Features","text":"Development Features AWSIM AWSIM Labs Unity Version Unity 2021.1.7f1 Unity LTS 2022.3.36f1 CI for build \u2705 \u274c (disabled temporarily) Various regression unit tests \u2705 \u274c CI for documentation generation within PR \u274c \u2705 Main branch is protected with linear history \u274c \u2705 Pre-commit for code formatting \u274c \u2705 Documentation page shows PR branches before merge \u274c \u2705"},{"location":"tutorials/ad-hoc-simulation/digital-twin-simulation/awsim-tutorial/#awsim-labs","title":"AWSIM Labs","text":"

    AWSIM Labs supports Unity LTS 2022.3.36f1 and uses the Universal Render Pipeline (URP), optimized for lighter resource usage. It introduces several enhancements such as the ability to reset vehicle positions at runtime, support for multiple scenes and vehicle setups on runtime, and multi-lidars enabled by default.

    To get started with AWSIM Labs, please follow the instructions.

    "},{"location":"tutorials/ad-hoc-simulation/digital-twin-simulation/awsim-tutorial/#awsim","title":"AWSIM","text":"

    AWSIM runs on Unity 2021.1.7f1 using the High Definition Render Pipeline (HDRP), which requires more system resources.

    To get started with AWSIM, please follow the instructions provided by TIER IV.

    "},{"location":"tutorials/ad-hoc-simulation/digital-twin-simulation/carla-tutorial/","title":"CARLA simulator","text":""},{"location":"tutorials/ad-hoc-simulation/digital-twin-simulation/carla-tutorial/#carla-simulator","title":"CARLA simulator","text":"

    CARLA is a well-known open-source simulator for autonomous driving research. There is currently no official support for Autoware Universe, but several community projects provide it. This document lists those projects for anyone who wants to run Autoware with CARLA. You can report issues to each project if there is any problem.

    "},{"location":"tutorials/ad-hoc-simulation/digital-twin-simulation/carla-tutorial/#project-lists-in-alphabetical-order","title":"Project lists (In alphabetical order)","text":""},{"location":"tutorials/ad-hoc-simulation/digital-twin-simulation/carla-tutorial/#autoware_carla_interface","title":"autoware_carla_interface","text":"

    An Autoware ROS package that enables communication between Autoware and the CARLA simulator for autonomous driving simulation. It is integrated into autoware.universe and actively maintained to stay compatible with the latest Autoware updates.

    • Package Link and Tutorial: autoware_carla_interface.
    "},{"location":"tutorials/ad-hoc-simulation/digital-twin-simulation/carla-tutorial/#carla_autoware_bridge","title":"carla_autoware_bridge","text":"

    An additional package to carla_ros_bridge that connects the CARLA simulator to the Autoware Universe software.

    • Project Link: carla_autoware_bridge
    • Tutorial: https://github.com/Robotics010/carla_autoware_bridge/blob/master/getting-started.md
    "},{"location":"tutorials/ad-hoc-simulation/digital-twin-simulation/carla-tutorial/#open_planner","title":"open_planner","text":"

    An integrated open-source planner and related tools for autonomous navigation of autonomous vehicles and mobile robots.

    • Project Link: open_planner
    • Tutorial: https://github.com/ZATiTech/open_planner/blob/humble/op_carla_bridge/README.md
    "},{"location":"tutorials/ad-hoc-simulation/digital-twin-simulation/carla-tutorial/#zenoh_carla_bridge","title":"zenoh_carla_bridge","text":"

    The project is mainly for controlling multiple vehicles in CARLA. It uses Zenoh to bridge Autoware and CARLA and is able to distinguish messages for different vehicles. Feel free to ask questions and report issues to autoware_carla_launch.

    • Project Link:
      • autoware_carla_launch: The integrated environment to run the bridge and Autoware easily.
      • zenoh_carla_bridge: The bridge implementation.
    • Tutorial:
      • The documentation of autoware_carla_launch: The official documentation, including installation and several usage scenarios.
      • Running Multiple Autoware-Powered Vehicles in Carla using Zenoh: The introduction on Autoware Tech Blog.
    "},{"location":"tutorials/components_evaluation/","title":"Localization Evaluation","text":""},{"location":"tutorials/components_evaluation/#localization-evaluation","title":"Localization Evaluation","text":"

    Warning

    Under Construction

    "},{"location":"tutorials/components_evaluation/localization_evaluation/urban-environment-evaluation/","title":"Urban environment evaluation","text":""},{"location":"tutorials/components_evaluation/localization_evaluation/urban-environment-evaluation/#introduction","title":"Introduction","text":""},{"location":"tutorials/components_evaluation/localization_evaluation/urban-environment-evaluation/#related-links","title":"Related Links","text":"

    Data Collection Documentation --> https://autowarefoundation.github.io/autoware-documentation/main/datasets/#istanbul-open-dataset

    "},{"location":"tutorials/components_evaluation/localization_evaluation/urban-environment-evaluation/#purpose","title":"Purpose","text":"

    The purpose of this test is to assess the performance of the current NDT-based default Autoware localization system in an urban environment and evaluate its results. The expectation is to see the deficiencies of the localization and understand which situations need to be improved.

    "},{"location":"tutorials/components_evaluation/localization_evaluation/urban-environment-evaluation/#test-environment","title":"Test Environment","text":"

    The test data was collected in Istanbul. It also includes some scenarios that may be challenging for localization, such as tunnels and bridges. You can find the entire route marked in the image below.

    "},{"location":"tutorials/components_evaluation/localization_evaluation/urban-environment-evaluation/#test-dataset-map","title":"Test Dataset & Map","text":"

    The test and mapping datasets are the same; the dataset was collected with a portable mapping kit used for general mapping purposes.

    The dataset contains data from the following sensors:

    • 1 x Applanix POS LVX GNSS/INS System
    • 1 x Hesai Pandar XT32 LiDAR

    You can find the data collected for testing and mapping in this Documentation.

    NOTE! Since there was no velocity source coming from the vehicle during these tests, the twist message from the GNSS/INS was given to ekf_localizer as the linear and angular velocity source. To understand whether this increases the error in cases where the GNSS/INS error grows inside the tunnel, and how it affects the system, localization in the tunnel was also tested by giving only the pose from NDT, without feeding this velocity to ekf_localizer. The video of this test is here. As seen in the video, when velocity is not given, localization in the tunnel deteriorates more quickly. It is also expected that if the IMU twist message combined with the linear velocity from the vehicle (/localization/twist_estimator/twist_with_covariance) were given instead of the GNSS/INS twist message, the performance in the tunnel would improve. However, this test cannot be done with the current data.
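
    If you reproduce the test and want to check which twist source is actually reaching ekf_localizer, a minimal sketch is to echo the topic quoted above while the bag is playing; the exact remapping may differ in your launch configuration.

    # print one sample of the linear/angular velocity input used by ekf_localizer\nros2 topic echo /localization/twist_estimator/twist_with_covariance --once\n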

    "},{"location":"tutorials/components_evaluation/localization_evaluation/urban-environment-evaluation/#expected-tests","title":"Expected Tests","text":"

    1.) First, the aim is to detect the points where localization breaks down completely and to roughly check the localization quality.

    2.) Second, by extracting metrics, the aim is to see concretely how much the localization error increases and what level of performance is achieved.

    "},{"location":"tutorials/components_evaluation/localization_evaluation/urban-environment-evaluation/#how-to-reproduce-tests","title":"How to Reproduce Tests","text":""},{"location":"tutorials/components_evaluation/localization_evaluation/urban-environment-evaluation/#test-with-raw-data","title":"Test With Raw Data","text":"

    If you test with raw data, you need to follow these Test With Raw Data instructions. Since this test repeats all preprocessing operations on the input data, you need to switch to the test branches in the sensor kit and individual params repositories. You can perform these steps by following the instructions below.

    "},{"location":"tutorials/components_evaluation/localization_evaluation/urban-environment-evaluation/#installation","title":"Installation","text":"

    1.) Download and unpack the test map files.

    • You can also download the map manually.
    mkdir ~/autoware_ista_map\ngdown --id 1WPWmFCjV7eQee4kyBpmGNlX7awerCPxc -O ~/autoware_ista_map/\n

    NOTE! You also need to add a lanelet2_map.osm file to the autoware_ista_map folder. Since no lanelet file has been created for this map yet, you can use any lanelet2_map.osm file by placing it in this folder.
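
    As a hedged example, if you downloaded the sample planning map earlier in these tutorials you can reuse its lanelet file; any other lanelet2_map.osm works the same way, and the source path below is an assumption.

    # assumes sample-map-planning was unpacked earlier; any lanelet2_map.osm can be used instead\ncp ~/autoware_map/sample-map-planning/lanelet2_map.osm ~/autoware_ista_map/\n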

    2.) Download the test rosbag files.

    • You can also download the rosbag file manually.
    mkdir ~/autoware_ista_data\ngdown --id 1uta5Xr_ftV4jERxPNVqooDvWerK0dn89 -O ~/autoware_ista_data/\n
    "},{"location":"tutorials/components_evaluation/localization_evaluation/urban-environment-evaluation/#prepare-autoware-to-test","title":"Prepare Autoware to Test","text":"

    1.) Checkout autoware_launch:

    cd ~/autoware/src/launcher/autoware_launch/\ngit remote add autoware_launch https://github.com/meliketanrikulu/autoware_launch.git\ngit remote update\ngit checkout evaluate_localization_issue_7652\n

    2.) Checkout individual_params:

    cd ~/autoware/src/param/autoware_individual_params/\ngit remote add autoware_individual_params https://github.com/meliketanrikulu/autoware_individual_params.git\ngit remote update\ngit checkout evaluate_localization_issue_7652\n

    3.) Checkout sample_sensor_kit_launch:

    cd ~/autoware/src/sensor_kit/sample_sensor_kit_launch/\ngit remote add sample_sensor_kit_launch https://github.com/meliketanrikulu/sample_sensor_kit_launch.git\ngit remote update\ngit checkout evaluate_localization_issue_7652\n

    4.) Compile updated packages:

    cd ~/autoware\ncolcon build --symlink-install --packages-select sample_sensor_kit_launch autoware_individual_params autoware_launch common_sensor_launch\n
    "},{"location":"tutorials/components_evaluation/localization_evaluation/urban-environment-evaluation/#launch-autoware","title":"Launch Autoware","text":"
    source ~/autoware/install/setup.bash\nros2 launch autoware_launch logging_simulator.launch.xml map_path:=~/autoware_ista_map/ vehicle_model:=sample_vehicle sensor_model:=sample_sensor_kit\n
    "},{"location":"tutorials/components_evaluation/localization_evaluation/urban-environment-evaluation/#run-rosbag","title":"Run Rosbag","text":"
    source ~/autoware/install/setup.bash\nros2 bag play ~/autoware_ista_data/rosbag2_2024_09_11-17_53_54_0.db3\n
    "},{"location":"tutorials/components_evaluation/localization_evaluation/urban-environment-evaluation/#localization-test-only","title":"Localization Test Only","text":"

    If you only want to see the localization performance, follow the Localization Test Only instructions. A second test bag file and a separate launch file have been created for this purpose, and you can perform the test by following the instructions below.

    "},{"location":"tutorials/components_evaluation/localization_evaluation/urban-environment-evaluation/#installation_1","title":"Installation","text":"

    1.) Download and unpack the test map files.

    • You can also download the map manually.
    mkdir ~/autoware_ista_map\ngdown --id 1WPWmFCjV7eQee4kyBpmGNlX7awerCPxc -O ~/autoware_ista_map/\n

    NOTE! You also need to add a lanelet2_map.osm file to the autoware_ista_map folder. Since no lanelet file has been created for this map yet, you can use any lanelet2_map.osm file by placing it in this folder.

    2.) Download the test rosbag files.

    • You can also download the localization rosbag file manually.
    mkdir ~/autoware_ista_data\ngdown --id 1yEB5j74gPLLbkkf87cuCxUgHXTkgSZbn -O ~/autoware_ista_data/\n
    "},{"location":"tutorials/components_evaluation/localization_evaluation/urban-environment-evaluation/#prepare-autoware-to-test_1","title":"Prepare Autoware to Test","text":"

    1.) Checkout autoware_launch:

    cd ~/autoware/src/launcher/autoware_launch/\ngit remote add autoware_launch https://github.com/meliketanrikulu/autoware_launch.git\ngit remote update\ngit checkout evaluate_localization_issue_7652\n

    2.) Compile updated packages:

    cd ~/autoware\ncolcon build --symlink-install --packages-select autoware_launch\n
    "},{"location":"tutorials/components_evaluation/localization_evaluation/urban-environment-evaluation/#launch-autoware_1","title":"Launch Autoware","text":"
    source ~/autoware/install/setup.bash\nros2 launch autoware_launch urban_environment_localization_test.launch.xml map_path:=~/autoware_ista_map/\n
    "},{"location":"tutorials/components_evaluation/localization_evaluation/urban-environment-evaluation/#run-rosbag_1","title":"Run Rosbag","text":"
    source ~/autoware/install/setup.bash\nros2 bag play ~/autoware_ista_data/rosbag2_2024_09_12-14_59_58_0.db3\n
    "},{"location":"tutorials/components_evaluation/localization_evaluation/urban-environment-evaluation/#test-results","title":"Test Results","text":""},{"location":"tutorials/components_evaluation/localization_evaluation/urban-environment-evaluation/#test-1-simple-test-of-localization","title":"Test 1: Simple Test of Localization","text":"

    Simply put, we tested how localization works in an urban environment and recorded the results on video. Here is the test video.

    We can see from the video that there is a localization error in the longitudinal axis along the Eurasia Tunnel. As expected, NDT-based localization does not work properly here. However, since the NDT score cannot detect the distortion, localization is not completely broken until the end of the tunnel. Localization breaks down completely at the exit of the tunnel, so I re-initialized it after the vehicle exited the tunnel.

    From this point on, we move on to the bridge scenario. This is one of the bridges crossing the Bosphorus and was the longest bridge on our route. Here too, we thought that NDT-based localization might be disrupted, but I did not observe any disruption.

    After this part there is another tunnel (Kagithane - Bomonti), and localization behaves similarly to the Eurasia Tunnel. However, at the exit of this tunnel, it recovers localization on its own without the need for re-initialization.

    "},{"location":"tutorials/components_evaluation/localization_evaluation/urban-environment-evaluation/#test-1-summary","title":"Test 1 Summary","text":"

    In summary, the places on our test route where there was visible deterioration were the tunnels, which was already expected. Apart from this, we anticipated that we could have problems on the bridges, but I did not observe any deterioration in localization on them. Of course, it is worth remembering that these tests were conducted with a single dataset.

    "},{"location":"tutorials/components_evaluation/localization_evaluation/urban-environment-evaluation/#test-2-comparing-results-with-ground-truth","title":"Test 2: Comparing Results with Ground Truth","text":"

    1.) How did the NDT score change?

    Along the route, the NDT score remained below the expected values only in a small section of the longest tunnel, the Eurasia Tunnel. However, there is a visible deterioration in localization in both tunnels on the route. As a result of this test, we saw that these problems cannot be detected using the NDT score alone. You can see how the NDT score changes along the route in the image below. While looking at the visual, keep in mind that the NDT score threshold is accepted as 2.3.

    NDT Score (Nearest Voxel Transformation Likelihood) Threshold = 2.3
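
    If you reproduce the test and want to watch this score yourself during playback, a hedged option is to echo the likelihood topic published by the NDT scan matcher; the topic name below is an assumption based on the default ndt_scan_matcher remapping and may differ in your configuration.

    # assumed topic: prints the Nearest Voxel Transformation Likelihood while the bag plays\nros2 topic echo /localization/pose_estimator/nearest_voxel_transformation_likelihood\n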

    2.) Compare with Ground Truth

    Ground Truth: In these tests, the post-processed GNSS/INS data was used as ground truth. Since the accuracy of this ground truth data also degrades in the tunnel environment, it is necessary to evaluate these regions by taking the ground truth error into account.

    During these tests, I compared the NDT and EKF poses with the ground truth and presented the results. I am sharing the test results below as PNG images. However, if you want to examine this data in more detail, I have created an executable file for you to visualize it and take a closer look. You can access this executable file from here. Currently there is only a version that works on Ubuntu at this link, but I plan to add one for Windows as well. You need to follow these steps:

    cd /your/path/show_evaluation_ubuntu\n./pose_main\n

    You can also update the configuration by changing /configs/evaluation_pose.yaml

    The shared result plots cover: 2D trajectory, 2D error, 3D trajectory, 3D error, lateral error, longitudinal error, roll-pitch-yaw, roll-pitch-yaw error, X-Y-Z, and X-Y-Z error.

    "},{"location":"tutorials/scenario-simulation/","title":"Scenario simulation","text":""},{"location":"tutorials/scenario-simulation/#scenario-simulation","title":"Scenario simulation","text":"

    Warning

    Under Construction

    "},{"location":"tutorials/scenario-simulation/planning-simulation/installation/","title":"Installation","text":""},{"location":"tutorials/scenario-simulation/planning-simulation/installation/#installation","title":"Installation","text":"

    This document contains step-by-step instructions on how to build AWF Autoware Core/Universe with scenario_simulator_v2.

    "},{"location":"tutorials/scenario-simulation/planning-simulation/installation/#prerequisites","title":"Prerequisites","text":"
    1. Autoware has been built and installed
    "},{"location":"tutorials/scenario-simulation/planning-simulation/installation/#how-to-build","title":"How to build","text":"
    1. Navigate to the Autoware workspace:

      cd autoware\n
    2. Import Simulator dependencies:

      vcs import src < simulator.repos\n
    3. Install dependent ROS packages:

      source /opt/ros/humble/setup.bash\nrosdep install -y --from-paths src --ignore-src --rosdistro $ROS_DISTRO\n
    4. Build the workspace:

      colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release\n
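
    After the build finishes, a quick hedged check is to confirm that the simulator packages used by the following tutorials are visible in the workspace; the package names are taken from the launch commands in those tutorials.

    source install/setup.bash\n# both packages are used by the random/scenario test tutorials that follow\nros2 pkg list | grep -E "random_test_runner|scenario_test_runner"\n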
    "},{"location":"tutorials/scenario-simulation/planning-simulation/random-test-simulation/","title":"Random test simulation","text":""},{"location":"tutorials/scenario-simulation/planning-simulation/random-test-simulation/#random-test-simulation","title":"Random test simulation","text":"

    Note

    Running the Scenario Simulator requires some additional steps on top of building and installing Autoware, so make sure that the Scenario Simulator installation has been completed before proceeding.

    "},{"location":"tutorials/scenario-simulation/planning-simulation/random-test-simulation/#running-steps","title":"Running steps","text":"
    1. Move to the workspace directory where Autoware and the Scenario Simulator have been built.

    2. Source the workspace setup script:

      source install/setup.bash\n
    3. Run the simulation:

      ros2 launch random_test_runner random_test.launch.py \\\narchitecture_type:=awf/universe/20240605 \\\nsensor_model:=sample_sensor_kit \\\nvehicle_model:=sample_vehicle\n

    For more information about supported parameters, refer to the random_test_runner documentation.

    "},{"location":"tutorials/scenario-simulation/planning-simulation/scenario-test-simulation/","title":"Scenario test simulation","text":""},{"location":"tutorials/scenario-simulation/planning-simulation/scenario-test-simulation/#scenario-test-simulation","title":"Scenario test simulation","text":"

    Note

    Running the Scenario Simulator requires some additional steps on top of building and installing Autoware, so make sure that the Scenario Simulator installation has been completed before proceeding.

    "},{"location":"tutorials/scenario-simulation/planning-simulation/scenario-test-simulation/#running-steps","title":"Running steps","text":"
    1. Move to the workspace directory where Autoware and the Scenario Simulator have been built.

    2. Source the workspace setup script:

      source install/setup.bash\n
    3. Run the simulation:

      ros2 launch scenario_test_runner scenario_test_runner.launch.py \\\narchitecture_type:=awf/universe/20240605 \\\nrecord:=false \\\nscenario:='$(find-pkg-share scenario_test_runner)/scenario/sample.yaml' \\\nsensor_model:=sample_sensor_kit \\\nvehicle_model:=sample_vehicle\n

    Reference video tutorials

    "},{"location":"tutorials/scenario-simulation/rosbag-replay-simulation/driving-log-replayer/","title":"Driving Log Replayer","text":""},{"location":"tutorials/scenario-simulation/rosbag-replay-simulation/driving-log-replayer/#driving-log-replayer","title":"Driving Log Replayer","text":"

    Driving Log Replayer is an evaluation tool for Autoware. To get started, follow the official instruction provided by TIER IV.

    "}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Introduction","text":""},{"location":"#autoware-documentation","title":"Autoware Documentation","text":"

    Welcome to the Autoware Documentation! Here, you'll find comprehensive information about Autoware, which is at the forefront of open-source autonomous driving software.

    "},{"location":"#about-autoware","title":"About Autoware","text":"

    Autoware is the world\u2019s leading open-source project dedicated to autonomous driving technology. Built on the Robot Operating System (ROS 2), Autoware facilitates the commercial deployment of autonomous vehicles across various platforms and applications. Discover more about our innovative project at the Autoware Foundation website.

    "},{"location":"#getting-started","title":"Getting started","text":""},{"location":"#step-into-the-world-of-autoware","title":"Step into the World of Autoware","text":"

    Discover the Concept: Start your Autoware journey by exploring the Concept pages. Here, you'll delve into the core ideas behind Autoware, understand its unique features and how it stands out in the field of autonomous driving technology. Learn about the principles guiding its development and the innovative solutions it offers.

    "},{"location":"#set-up-your-environment","title":"Set Up Your Environment","text":"

    Install Autoware: Once you have a grasp of the Autoware philosophy, it's time to bring it into your world. Visit our Installation section for detailed, step-by-step instructions to seamlessly install Autoware and related tools in your environment. Whether you're a beginner or an expert, these pages are designed to guide you through a smooth setup process.

    "},{"location":"#dive-deeper","title":"Dive Deeper","text":"

    Hands-on Experience: After installation, head over to the Tutorials pages. These tutorials are designed to provide you with hands-on experience with some simulations, allowing you to apply what you've learned and gain practical skills. From basic operations to advanced applications, these tutorials cover a wide range of topics to enhance your understanding of Autoware.

    Advanced Topics: As you become more familiar with Autoware, the How-to Guides will be your next destination. These pages delve into more advanced topics, offering deeper insights and more complex scenarios. They're perfect for those looking to expand their knowledge and expertise in specific areas of Autoware. Additionally, this section also covers methods for integrating Autoware with real vehicles, providing practical guidance for those seeking to apply Autoware in real-world settings.

    "},{"location":"#understand-the-design","title":"Understand the Design","text":"

    Design Concepts: For those interested in the architecture and design philosophy of Autoware, the Design pages are a treasure trove. Learn about the architectural decisions, design patterns, and the rationale behind the development of Autoware's various components.

    "},{"location":"#more-information","title":"More Information","text":"

    Datasets: The Datasets pages are a crucial resource for anyone working with Autoware. Here, you'll find a collection of datasets that are compatible with Autoware, providing real-world scenarios and data to test and refine your autonomous driving solutions.

    Support: As you dive deeper into Autoware, you might encounter challenges or have questions. The Support section is designed to assist you in these moments. This area includes FAQs, community forums, and contact information for getting help with Autoware. It's also a great place to connect with other Autoware users and contributors, sharing experiences and solutions.

    Competitions: The Competitions pages are where excitement meets innovation. Autoware regularly hosts or participates in various competitions and challenges, providing a platform for users to test their skills, showcase their work, and contribute to the community. These competitions range from local to global scale, offering opportunities for both beginners and experts to engage and excel. Stay updated with upcoming events and take part in the advancement of autonomous driving technologies.

    "},{"location":"#contribute-and-collaborate","title":"Contribute and Collaborate","text":"

    Become a Part of the Autoware Story: Your journey with Autoware isn't just about learning and using; it's also about contributing and shaping the future of autonomous driving. The Contributing section is more than just a guide; it's an invitation to be part of something bigger. Whether you're a coder, a writer, or an enthusiast, your contributions can propel Autoware forward. Join us, and let's build the future together.

    "},{"location":"#related-documentations","title":"Related Documentations","text":"

    In addition to this page, there are several related documentations to further your knowledge and understanding of Autoware:

    • Autoware Universe Documentation contains technical documentations of each component/function such as localization, planning, etc.
    • Autoware Tools Documentation contains technical documentations of each tools for autonomous driving such as performance analysis, calibration, etc.
    "},{"location":"autoware-competitions/","title":"Autoware Competitions","text":""},{"location":"autoware-competitions/#autoware-competitions","title":"Autoware Competitions","text":"

    This page is a collection of the links to the competitions that are related to the Autoware Foundation.

    Title Status Description Ongoing Autoware / TIER IV Challenge 2023 Date: May 15, 2023 - Nov. 1st, 2023 As one of the main contributors of Autoware, TIER IV has been facing many difficult challenges through development, and TIER IV would like to sponsor a challenge to solve such engineering challenges. Any researchers, students, individuals or organizations are welcome to participate and submit their solution to any of the challenges we propose. Finished Japan Automotive AI Challenge 2023 (Integration) Registration: June 5, 2023 - July 14, 2023 Qualifiers: July 3, 2023 - Aug. 31, 2023 Finals: Nov. 12, 2023 In this competition, we focus on challenging tasks posed by autonomous driving in factory environments and aim to develop Autoware-based AD software that can overcome them. The qualifiers use the digital twin autonomous driving simulator AWSIM to complete specific tasks within a virtual environment. Teams that make it to the finals have the opportunity to run their software on actual vehicles in a test course in Japan. Ongoing Japan Automotive AI Challenge 2023 (Simulation) Registration: Nov 6, 2023 - Dec 28, 2023 Date: Dec 4, 2023 - Jan. 31, 2024 This contest is a software development contest with a motorsports theme. Participants will develop autonomous driving software based on Autoware.Universe, and integrate it into a racing car that runs in the End to End simulation space (AWSIM). The goal is simple, win the race while driving safely!"},{"location":"autoware-competitions/#proposing-new-competition","title":"Proposing New Competition","text":"

    If you want to add a new competition to this page, please propose it in a TSC meeting and get confirmation from the AWF.

    "},{"location":"contributing/","title":"Contributing","text":""},{"location":"contributing/#contributing","title":"Contributing","text":"

    Thank you for your interest in contributing! Autoware is supported by people like you, and all types and sizes of contribution are welcome.

    As a contributor, here are the guidelines that we would like you to follow for Autoware and its associated repositories.

    • Code of Conduct
    • What should I know before I get started?
      • Autoware concepts
      • Contributing to open source projects
    • How can I get help?
    • How can I contribute?
      • Participate in discussions
      • Join a working group
      • Report bugs
      • Make a pull request

    Like Autoware itself, these guidelines are being actively developed and suggestions for improvement are always welcome! Guideline changes can be proposed by creating a discussion in the Ideas category.

    "},{"location":"contributing/#code-of-conduct","title":"Code of Conduct","text":"

    To ensure the Autoware community stays open and inclusive, please follow the Code of Conduct.

    If you believe that someone in the community has violated the Code of Conduct, please make a report by emailing conduct@autoware.org.

    "},{"location":"contributing/#what-should-i-know-before-i-get-started","title":"What should I know before I get started?","text":""},{"location":"contributing/#autoware-concepts","title":"Autoware concepts","text":"

    To gain a high-level understanding of Autoware's architecture and design, the following pages provide a brief overview:

    • Autoware architecture
    • Autoware concepts

    For experienced developers, the Autoware interfaces and individual component pages should also be reviewed to understand the inputs and outputs for each component or module at a more detailed level.

    "},{"location":"contributing/#contributing-to-open-source-projects","title":"Contributing to open source projects","text":"

    If you are new to open source projects, we recommend reading GitHub's How to Contribute to Open Source guide for an overview of why people contribute to open source projects, what it means to contribute and much more besides.

    "},{"location":"contributing/#how-can-i-get-help","title":"How can I get help?","text":"

    Do not open issues for general support questions as we want to keep GitHub issues for confirmed bug reports. Instead, open a discussion in the Q&A category. For more details on the support mechanisms for Autoware, refer to the Support guidelines.

    Note

    Issues created for questions or unconfirmed bugs will be moved to GitHub discussions by the maintainers.

    "},{"location":"contributing/#how-can-i-contribute","title":"How can I contribute?","text":""},{"location":"contributing/#discussions","title":"Discussions","text":"

    You can contribute to Autoware by facilitating and participating in discussions, such as:

    • Proposing a new feature to enhance Autoware
    • Joining an existing discussion and expressing your opinion
    • Organizing discussions for other contributors
    • Answering questions and supporting other contributors
    "},{"location":"contributing/#working-groups","title":"Working groups","text":"

    The various working groups within the Autoware Foundation are responsible for accomplishing goals set by the Technical Steering Committee. These working groups are open to everyone, and joining a particular working group will allow you to gain an understanding of current projects, see how those projects are managed within each group and to contribute to issues that will help progress a particular project.

    To see the schedule for upcoming working group meetings, refer to the Autoware Foundation events calendar.

    "},{"location":"contributing/#bug-reports","title":"Bug reports","text":"

    Before you report a bug, please search the issue tracker for the appropriate repository. It is possible that someone has already reported the same issue and that workarounds exist. If you can't determine the appropriate repository, ask the maintainers for help by creating a new discussion in the Q&A category.

    When reporting a bug, you should provide a minimal set of instructions to reproduce the issue. Doing so allows us to quickly confirm and focus on the right problem.

    If you want to fix the bug yourself, that will be appreciated, but you should discuss possible approaches with the maintainers in the issue before submitting a pull request.

    Creating an issue is straightforward, but if you happen to experience any problems then create a Q&A discussion to ask for help.

    "},{"location":"contributing/#pull-requests","title":"Pull requests","text":"

    You can submit pull requests for small changes such as:

    • Minor documentation updates
    • Fixing spelling mistakes
    • Fixing CI failures
    • Fixing warnings detected by compilers or analysis tools
    • Making small changes to a single package

    If your pull request is a large change, the following process should be followed:

    1. Create a GitHub Discussion to propose the change. Doing so allows you to get feedback from other members and the Autoware maintainers and to ensure that the proposed change is in line with Autoware's design philosophy and current development plans. If you're not sure where to have that conversation, then create a new Q&A discussion.

    2. Create an issue following consensus in the discussions

    3. Create a pull request to implement the changes that references the Issue created in step 2

    4. Create documentation for the new addition (if relevant)

    Examples of large changes include:

    • Adding a new feature to Autoware
    • Adding a new documentation page or section

    For more information on how to submit a good pull request, have a read of the pull request guidelines and don't forget to review the required license notations!

    "},{"location":"contributing/license/","title":"License","text":""},{"location":"contributing/license/#license","title":"License","text":"

    Autoware is licensed under Apache License 2.0. Thus all contributions will be licensed as such as per clause 5 of the Apache License 2.0:

    5. Submission of Contributions. Unless You explicitly state otherwise,\n   any Contribution intentionally submitted for inclusion in the Work\n   by You to the Licensor shall be under the terms and conditions of\n   this License, without any additional terms or conditions.\n   Notwithstanding the above, nothing herein shall supersede or modify\n   the terms of any separate license agreement you may have executed\n   with Licensor regarding such Contributions.\n

    Here is an example copyright header to add to the top of a new file:

    Copyright [first year of contribution] The Autoware Contributors\nSPDX-License-Identifier: Apache-2.0\n

    We don't write copyright notations of each contributor here. Instead, we place them in the NOTICE file like the following.

    This product includes code developed by [company name].\nCopyright [first year of contribution] [company name]\n

    Let us know if your legal department has a special request for the copyright notation.

    Currently, the old formats explained here are also acceptable. Those old formats can be replaced by this new format if the original authors agree. Note that we won't write their copyrights to the NOTICE file unless they agree with the new format.

    References:

    • https://opensource.google/docs/copyright/#the-year
    • https://www.linuxfoundation.org/blog/copyright-notices-in-open-source-software-projects/
    • https://www.apache.org/licenses/LICENSE-2.0
    • https://www.apache.org/legal/src-headers.html
    • https://www.apache.org/foundation/license-faq.html
    • https://infra.apache.org/licensing-howto.html
    "},{"location":"contributing/coding-guidelines/","title":"Coding guidelines","text":""},{"location":"contributing/coding-guidelines/#coding-guidelines","title":"Coding guidelines","text":"

    Warning

    Under Construction

    "},{"location":"contributing/coding-guidelines/#common-guidelines","title":"Common guidelines","text":"

    Refer to the following links for now:

    • https://docs.ros.org/en/humble/Contributing/Developer-Guide.html

    Also, keep in mind the following concepts.

    • Keep things consistent.
    • Automate where possible, using simple checks for formatting, syntax, etc.
    • Write comments and documentation in English.
    • Functions that are too complex (low cohesion) should be appropriately split into smaller functions (e.g. fewer than 40 lines per function is recommended in the Google style guide).
    • Try to minimize the use of member variables or global variables that have a large scope.
    • Whenever possible, break large pull requests into smaller, manageable PRs (less than 200 lines of change is recommended in some research e.g. here).
    • When it comes to code reviews, don't spend too much time on trivial disagreements. For details see:
      • https://en.wikipedia.org/wiki/Law_of_triviality
      • https://steemit.com/programming/@emrebeyler/code-reviews-and-parkinson-s-law-of-triviality
    • Please follow the guidelines for each language.
      • C++
      • Python
      • Shell script
    "},{"location":"contributing/coding-guidelines/#autoware-style-guide","title":"Autoware Style Guide","text":"

    For Autoware-specific styles, refer to the following:

    • Use the autoware_ prefix for package names.
      • cf. Directory structure guideline, Package name
      • cf. Prefix packages with autoware_
    • Add implementations within the autoware namespace.
      • cf. Class design guideline, Namespaces
      • cf. Prefix packages with autoware_, Option 3:
    • The header files to be exported must be placed in the PACKAGE_NAME/include/autoware/ directory.
      • cf. Directory structure guideline, Exporting headers
    • In CMakeLists.txt, use autoware_package().
      • cf. autoware_cmake README
    "},{"location":"contributing/coding-guidelines/languages/cmake/","title":"CMake","text":""},{"location":"contributing/coding-guidelines/languages/cmake/#cmake","title":"CMake","text":"

    Warning

    Under Construction

    Refer to the following links for now:

    • https://docs.ros.org/en/humble/Contributing/Code-Style-Language-Versions.html#cmake
    "},{"location":"contributing/coding-guidelines/languages/cmake/#use-the-autoware_package-macro","title":"Use the autoware_package macro","text":"

    To reduce duplications in CMakeLists.txt, there is the autoware_package() macro. See the README and use it in your package.

    "},{"location":"contributing/coding-guidelines/languages/cpp/","title":"C++","text":""},{"location":"contributing/coding-guidelines/languages/cpp/#c","title":"C++","text":"

    Warning

    Under Construction

    "},{"location":"contributing/coding-guidelines/languages/cpp/#references","title":"References","text":"

    Follow the guidelines below if a rule is not defined on this page.

    1. https://docs.ros.org/en/humble/Contributing/Code-Style-Language-Versions.html
    2. https://www.autosar.org/fileadmin/standards/adaptive/22-11/AUTOSAR_RS_CPP14Guidelines.pdf
    3. https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines

    Also, it is encouraged to apply Clang-Tidy to each file. For the usage, see Applying Clang-Tidy to ROS packages.

    Note that not all rules are covered by Clang-Tidy.

    "},{"location":"contributing/coding-guidelines/languages/cpp/#style-rules","title":"Style rules","text":""},{"location":"contributing/coding-guidelines/languages/cpp/#include-header-files-in-the-defined-order-required-partially-automated","title":"Include header files in the defined order (required, partially automated)","text":""},{"location":"contributing/coding-guidelines/languages/cpp/#rationale","title":"Rationale","text":"
• Due to indirect dependencies, the C++ include system can behave differently depending on the header order.
    • To reduce unintended bugs, local header files should come first.
    "},{"location":"contributing/coding-guidelines/languages/cpp/#reference","title":"Reference","text":"
    • https://llvm.org/docs/CodingStandards.html#include-style
    "},{"location":"contributing/coding-guidelines/languages/cpp/#example","title":"Example","text":"

    Include the headers in the following order:

    • Main module header
    • Local package headers
    • Other package headers
    • Message headers
    • Boost headers
    • C system headers
    • C++ system headers
    // Compliant\n#include \"my_header.hpp\"\n\n#include \"my_package/foo.hpp\"\n\n#include <package1/foo.hpp>\n#include <package2/bar.hpp>\n\n#include <std_msgs/msg/header.hpp>\n\n#include <iostream>\n#include <vector>\n

    If you use \"\" and <> properly, ClangFormat in pre-commit sorts headers automatically.

    Do not define macros between #include lines because it prevents automatic sorting.

    // Non-compliant\n#include <package1/foo.hpp>\n#include <package2/bar.hpp>\n\n#define EIGEN_MPL2_ONLY\n#include \"my_header.hpp\"\n#include \"my_package/foo.hpp\"\n\n#include <Eigen/Core>\n\n#include <std_msgs/msg/header.hpp>\n\n#include <iostream>\n#include <vector>\n

    Instead, define macros before #include lines.

    // Compliant\n#define EIGEN_MPL2_ONLY\n\n#include \"my_header.hpp\"\n\n#include \"my_package/foo.hpp\"\n\n#include <Eigen/Core>\n#include <package1/foo.hpp>\n#include <package2/bar.hpp>\n\n#include <std_msgs/msg/header.hpp>\n\n#include <iostream>\n#include <vector>\n

    If there are any reasons for defining macros at a specific position, write a comment before the macro.

    // Compliant\n#include \"my_header.hpp\"\n\n#include \"my_package/foo.hpp\"\n\n#include <package1/foo.hpp>\n#include <package2/bar.hpp>\n\n#include <std_msgs/msg/header.hpp>\n\n#include <iostream>\n#include <vector>\n\n// For the foo bar reason, the FOO_MACRO must be defined here.\n#define FOO_MACRO\n#include <foo/bar.hpp>\n
    "},{"location":"contributing/coding-guidelines/languages/cpp/#use-lower-snake-case-for-function-names-required-partially-automated","title":"Use lower snake case for function names (required, partially automated)","text":""},{"location":"contributing/coding-guidelines/languages/cpp/#rationale_1","title":"Rationale","text":"
    • It is consistent with the C++ standard library.
    • It is consistent with other programming languages such as Python and Rust.
    "},{"location":"contributing/coding-guidelines/languages/cpp/#exception","title":"Exception","text":"
    • For member functions of classes inherited from external project classes such as Qt, follow that naming convention.
    "},{"location":"contributing/coding-guidelines/languages/cpp/#reference_1","title":"Reference","text":"
    • https://docs.ros.org/en/humble/The-ROS2-Project/Contributing/Code-Style-Language-Versions.html#function-and-method-naming
    "},{"location":"contributing/coding-guidelines/languages/cpp/#example_1","title":"Example","text":"
    void function_name()\n{\n}\n
    "},{"location":"contributing/coding-guidelines/languages/cpp/#use-upper-camel-case-for-enum-names-required-partially-automated","title":"Use upper camel case for enum names (required, partially automated)","text":""},{"location":"contributing/coding-guidelines/languages/cpp/#rationale_2","title":"Rationale","text":"
    • It is consistent with ROS 2 core packages.
    "},{"location":"contributing/coding-guidelines/languages/cpp/#exception_1","title":"Exception","text":"
    • Enums defined in the rosidl file can use other naming conventions.
    "},{"location":"contributing/coding-guidelines/languages/cpp/#reference_2","title":"Reference","text":"
    • http://wiki.ros.org/CppStyleGuide (Refer to \"15. Enumerations\")
    "},{"location":"contributing/coding-guidelines/languages/cpp/#example_2","title":"Example","text":"
    enum class Color\n{\nRed, Green, Blue\n}\n
    "},{"location":"contributing/coding-guidelines/languages/cpp/#use-lower-snake-case-for-constant-names-required-partially-automated","title":"Use lower snake case for constant names (required, partially automated)","text":""},{"location":"contributing/coding-guidelines/languages/cpp/#rationale_3","title":"Rationale","text":"
    • It is consistent with ROS 2 core packages.
    • It is consistent with std::numbers.
    "},{"location":"contributing/coding-guidelines/languages/cpp/#exception_2","title":"Exception","text":"
    • Constants defined in the rosidl file can use other naming conventions.
    "},{"location":"contributing/coding-guidelines/languages/cpp/#reference_3","title":"Reference","text":"
    • https://en.cppreference.com/w/cpp/numeric/constants
    "},{"location":"contributing/coding-guidelines/languages/cpp/#example_3","title":"Example","text":"
    constexpr double gravity = 9.80665;\n
    "},{"location":"contributing/coding-guidelines/languages/cpp/#count-acronyms-and-contractions-of-compound-words-as-one-word-required-partially-automated","title":"Count acronyms and contractions of compound words as one word (required, partially automated)","text":""},{"location":"contributing/coding-guidelines/languages/cpp/#rationale_4","title":"Rationale","text":"
    • To clarify the boundaries of words when acronyms are consecutive.
    "},{"location":"contributing/coding-guidelines/languages/cpp/#reference_4","title":"Reference","text":"
    • https://rust-lang.github.io/api-guidelines/naming.html#casing-conforms-to-rfc-430-c-case
    "},{"location":"contributing/coding-guidelines/languages/cpp/#example_4","title":"Example","text":"
    class RosApi;\nRosApi ros_api;\n
    "},{"location":"contributing/coding-guidelines/languages/cpp/#do-not-use-the-auto-keywords-with-eigens-expressions-required","title":"Do not use the auto keywords with Eigen's expressions (required)","text":""},{"location":"contributing/coding-guidelines/languages/cpp/#rationale_5","title":"Rationale","text":"
• Avoid using auto with Eigen::Matrix or Eigen::Vector variables, as it can lead to bugs; a short example follows below.
    "},{"location":"contributing/coding-guidelines/languages/cpp/#reference_5","title":"Reference","text":"
    • Eigen, C++11 and the auto keyword
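
The following minimal sketch (variable names are illustrative) shows why this matters: auto deduces an Eigen expression template rather than a concrete matrix, so the result may be re-evaluated later against modified data.

#include <Eigen/Core>\n\nEigen::Matrix3d A = Eigen::Matrix3d::Random();\n\n// Compliant: the product is evaluated immediately into a concrete matrix\nconst Eigen::Matrix3d good = A.transpose() * A;\n\n// Non-compliant: bad is an expression template that still references A\nauto bad = A.transpose() * A;\nA.setZero();                     // also changes what bad will evaluate to\nconst Eigen::Matrix3d m = bad;   // evaluates with the modified A, usually not intended\n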
    "},{"location":"contributing/coding-guidelines/languages/cpp/#use-rclcpp_-eg-rclcpp_info-macros-instead-of-printf-or-stdcout-for-logging-required","title":"Use RCLCPP_* (e.g. RCLCPP_INFO) macros instead of printf or std::cout for logging (required)","text":""},{"location":"contributing/coding-guidelines/languages/cpp/#rationale_6","title":"Rationale","text":"
    • Reasons include the following:
  • It allows for consistent log level management. For instance, with RCLCPP_* macros, you can simply set --log-level to adjust the log level uniformly across the application.
      • You can standardize the format using RCUTILS_CONSOLE_OUTPUT_FORMAT.
      • With RCLCPP_* macros, logs are automatically recorded to /rosout. These logs can be saved to a rosbag, which can then be replayed to review the log data.
    "},{"location":"contributing/coding-guidelines/languages/cpp/#reference_6","title":"Reference","text":"
    • Autoware Documentation for Console logging in ROS Node
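
A minimal illustration of this rule (the message text is just an example):

// Compliant: the log level, format, and /rosout recording are handled by ROS 2 logging\nRCLCPP_INFO(get_logger(), \"Initialization completed\");\n\n// Non-compliant: bypasses log level management and is not recorded to /rosout\nstd::cout << \"Initialization completed\" << std::endl;\n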
    "},{"location":"contributing/coding-guidelines/languages/docker/","title":"Docker","text":""},{"location":"contributing/coding-guidelines/languages/docker/#docker","title":"Docker","text":"

    Warning

    Under Construction

    Refer to the following links for now:

    • https://github.com/hadolint/hadolint
    "},{"location":"contributing/coding-guidelines/languages/github-actions/","title":"GitHub Actions","text":""},{"location":"contributing/coding-guidelines/languages/github-actions/#github-actions","title":"GitHub Actions","text":"

    Warning

    Under Construction

    Refer to the following links for now:

    • https://docs.github.com/en/actions/guides
    "},{"location":"contributing/coding-guidelines/languages/markdown/","title":"Markdown","text":""},{"location":"contributing/coding-guidelines/languages/markdown/#markdown","title":"Markdown","text":"

    Warning

    Under Construction

    Refer to the following links for now:

    • https://docs.ros.org/en/foxy/Contributing/Code-Style-Language-Versions.html#markdown-restructured-text-docblocks
    • https://github.com/DavidAnson/markdownlint
    "},{"location":"contributing/coding-guidelines/languages/package-xml/","title":"package.xml","text":""},{"location":"contributing/coding-guidelines/languages/package-xml/#packagexml","title":"package.xml","text":"

    Warning

    Under Construction

    Refer to the following links for now:

    • https://ros.org/reps/rep-0149.html
    • https://github.com/tier4/pre-commit-hooks-ros#prettier-package-xml
    "},{"location":"contributing/coding-guidelines/languages/python/","title":"Python","text":""},{"location":"contributing/coding-guidelines/languages/python/#python","title":"Python","text":"

    Warning

    Under Construction

    Refer to the following links for now:

    • https://docs.ros.org/en/foxy/Contributing/Code-Style-Language-Versions.html#python
    • https://github.com/psf/black
    • https://github.com/PyCQA/isort
    "},{"location":"contributing/coding-guidelines/languages/shell-scripts/","title":"Shell scripts","text":""},{"location":"contributing/coding-guidelines/languages/shell-scripts/#shell-scripts","title":"Shell scripts","text":"

    Warning

    Under Construction

    Refer to the following links for now:

    • https://google.github.io/styleguide/shellguide.html
    • https://github.com/koalaman/shellcheck
    • https://github.com/mvdan/sh
    "},{"location":"contributing/coding-guidelines/ros-nodes/class-design/","title":"Class design","text":""},{"location":"contributing/coding-guidelines/ros-nodes/class-design/#class-design","title":"Class design","text":"

    We'll use the autoware_gnss_poser package as an example.

    "},{"location":"contributing/coding-guidelines/ros-nodes/class-design/#namespaces","title":"Namespaces","text":"
    namespace autoware::gnss_poser\n{\n...\n} // namespace autoware::gnss_poser\n
    • Everything should be under autoware::gnss_poser namespace.
    • Closing braces must contain a comment with the namespace name. (Automated with cpplint)
    "},{"location":"contributing/coding-guidelines/ros-nodes/class-design/#classes","title":"Classes","text":""},{"location":"contributing/coding-guidelines/ros-nodes/class-design/#nodes","title":"Nodes","text":""},{"location":"contributing/coding-guidelines/ros-nodes/class-design/#gnss_poser_nodehpp","title":"gnss_poser_node.hpp","text":"
    class GNSSPoser : public rclcpp::Node\n{\npublic:\nexplicit GNSSPoser(const rclcpp::NodeOptions & node_options);\n...\n}\n
    "},{"location":"contributing/coding-guidelines/ros-nodes/class-design/#gnss_poser_nodecpp","title":"gnss_poser_node.cpp","text":"
    GNSSPoser::GNSSPoser(const rclcpp::NodeOptions & node_options)\n: Node(\"gnss_poser\", node_options)\n{\n...\n}\n
    • The class name should be in CamelCase.
    • Node classes should inherit from rclcpp::Node.
• The constructor must be marked explicit.
    • The constructor must take rclcpp::NodeOptions as an argument.
    • Default node name:
      • should not have autoware_ prefix.
      • should NOT have _node suffix.
    • Rationale: Node names are used at runtime, and the output of ros2 node list shows only nodes anyway, so a _node suffix is redundant.
      • Example: gnss_poser.
    "},{"location":"contributing/coding-guidelines/ros-nodes/class-design/#component-registration","title":"Component registration","text":"
    ...\n} // namespace autoware::gnss_poser\n\n#include <rclcpp_components/register_node_macro.hpp>\nRCLCPP_COMPONENTS_REGISTER_NODE(autoware::gnss_poser::GNSSPoser)\n
    • The component should be registered at the end of the gnss_poser_node.cpp file, outside the namespaces.
    "},{"location":"contributing/coding-guidelines/ros-nodes/class-design/#libraries","title":"Libraries","text":"

    Warning

    Under Construction

    "},{"location":"contributing/coding-guidelines/ros-nodes/console-logging/","title":"Console logging","text":""},{"location":"contributing/coding-guidelines/ros-nodes/console-logging/#console-logging","title":"Console logging","text":"

    ROS 2 logging is a powerful tool for understanding and debugging ROS nodes.

    This page focuses on how to design console logging in Autoware and shows several practical examples. To comprehensively understand how ROS 2 logging works, refer to the logging documentation.

    "},{"location":"contributing/coding-guidelines/ros-nodes/console-logging/#logging-use-cases-in-autoware","title":"Logging use cases in Autoware","text":"
    • Developers debug code by seeing the console logs.
    • Vehicle operators take appropriate risk-avoiding actions depending on the console logs.
    • Log analysts analyze the console logs that are recorded in rosbag files.

    To efficiently support these use cases, clean and highly visible logs are required. For that, several rules are defined below.

    "},{"location":"contributing/coding-guidelines/ros-nodes/console-logging/#rules","title":"Rules","text":""},{"location":"contributing/coding-guidelines/ros-nodes/console-logging/#choose-appropriate-severity-levels-required-non-automated","title":"Choose appropriate severity levels (required, non-automated)","text":""},{"location":"contributing/coding-guidelines/ros-nodes/console-logging/#rationale","title":"Rationale","text":"

    It's confusing if severity levels are inappropriate as follows:

    • Useless messages are marked as FATAL.
    • Very important error messages are marked as INFO.
    "},{"location":"contributing/coding-guidelines/ros-nodes/console-logging/#example","title":"Example","text":"

Use the following criteria as a reference; a short example follows the list:

• DEBUG: Use this level to show debug information for developers. Note that logs with this level are hidden by default.
    • INFO: Use this level to notify events (cyclic notifications during initialization, state changes, service responses, etc.) to operators.
    • WARN: Use this level when a node can continue working correctly, but unintended behaviors might happen.
      • For example, \"path optimization failed but the previous data can be used\", \"the localization score is low\", etc.
    • ERROR: Use this level when a node can't continue working correctly, and unintended behaviors would happen.
      • For example, \"path optimization failed and the path is empty\", \"the vehicle will trigger an emergency stop\", etc.
    • FATAL: Use this level when the entire system can't continue working correctly, and the system must be stopped.
      • For example, \"the vehicle control ECU doesn't respond\", \"the system storage crashed\", etc.
    "},{"location":"contributing/coding-guidelines/ros-nodes/console-logging/#filter-out-unnecessary-logs-by-setting-logging-options-required-non-automated","title":"Filter out unnecessary logs by setting logging options (required, non-automated)","text":""},{"location":"contributing/coding-guidelines/ros-nodes/console-logging/#rationale_1","title":"Rationale","text":"

Some third-party nodes, such as drivers, may not follow Autoware's guidelines. If the logs are noisy, unnecessary logs should be filtered out.

    "},{"location":"contributing/coding-guidelines/ros-nodes/console-logging/#example_1","title":"Example","text":"

    Use the --log-level {level} option to change the minimum level of logs to be displayed:

    <launch>\n<!-- This outputs only FATAL level logs. -->\n<node pkg=\"demo_nodes_cpp\" exec=\"talker\" ros_args=\"--log-level fatal\" />\n</launch>\n

    If you want to disable only specific output targets, use the --disable-stdout-logs, --disable-rosout-logs, and/or --disable-external-lib-logs options:

    <launch>\n<!-- This outputs to rosout and disk. -->\n<node pkg=\"demo_nodes_cpp\" exec=\"talker\" ros_args=\"--disable-stdout-logs\" />\n</launch>\n
    <launch>\n<!-- This outputs to stdout. -->\n<node pkg=\"demo_nodes_cpp\" exec=\"talker\" ros_args=\"--disable-rosout-logs --disable-external-lib-logs\" />\n</launch>\n
    "},{"location":"contributing/coding-guidelines/ros-nodes/console-logging/#use-throttled-logging-when-the-log-is-unnecessarily-shown-repeatedly-required-non-automated","title":"Use throttled logging when the log is unnecessarily shown repeatedly (required, non-automated)","text":""},{"location":"contributing/coding-guidelines/ros-nodes/console-logging/#rationale_2","title":"Rationale","text":"

If a large number of logs are shown on the console, people miss important messages.

    "},{"location":"contributing/coding-guidelines/ros-nodes/console-logging/#example_2","title":"Example","text":"

While waiting for some messages, throttled logs are usually enough. In such cases, use a throttle interval of about 5 seconds as a reference value.

    // Compliant\nvoid FooNode::on_timer() {\nif (!current_pose_) {\nRCLCPP_ERROR_THROTTLE(get_logger(), *get_clock(), 5000, \"Waiting for current_pose_.\");\nreturn;\n}\n}\n\n// Non-compliant\nvoid FooNode::on_timer() {\nif (!current_pose_) {\nRCLCPP_ERROR(get_logger(), \"Waiting for current_pose_.\");\nreturn;\n}\n}\n
    "},{"location":"contributing/coding-guidelines/ros-nodes/console-logging/#exception","title":"Exception","text":"

The following cases are acceptable even if the log is not throttled.

    • The message is really worth displaying every time.
    • The message level is DEBUG.
    "},{"location":"contributing/coding-guidelines/ros-nodes/console-logging/#do-not-depend-on-rclcppnode-in-core-library-classes-but-depend-only-on-rclcpplogginghpp-advisory-non-automated","title":"Do not depend on rclcpp::Node in core library classes but depend only on rclcpp/logging.hpp (advisory, non-automated)","text":""},{"location":"contributing/coding-guidelines/ros-nodes/console-logging/#rationale_3","title":"Rationale","text":"

    Core library classes, which contain reusable algorithms, may also be used for non-ROS platforms. When porting libraries to other platforms, fewer dependencies are preferred.

    "},{"location":"contributing/coding-guidelines/ros-nodes/console-logging/#example_3","title":"Example","text":"
    // Compliant\n#include <rclcpp/logging.hpp>\n\nclass FooCore {\npublic:\nexplicit FooCore(const rclcpp::Logger & logger) : logger_(logger) {}\n\nvoid process() {\nRCLCPP_INFO(logger_, \"message\");\n}\n\nprivate:\nrclcpp::Logger logger_;\n};\n\n// Compliant\n// Note that logs aren't published to `/rosout` if the logger name is different from the node name.\n#include <rclcpp/logging.hpp>\n\nclass FooCore {\nvoid process() {\nRCLCPP_INFO(rclcpp::get_logger(\"foo_core_logger\"), \"message\");\n}\n};\n\n\n// Non-compliant\n#include <rclcpp/node.hpp>\n\nclass FooCore {\npublic:\nexplicit FooCore(const rclcpp::NodeOptions & node_options) : node_(\"foo_core_node\", node_options) {}\n\nvoid process() {\nRCLCPP_INFO(node_.get_logger(), \"message\");\n}\n\nprivate:\nrclcpp::Node node_;\n};\n
    "},{"location":"contributing/coding-guidelines/ros-nodes/console-logging/#tips","title":"Tips","text":""},{"location":"contributing/coding-guidelines/ros-nodes/console-logging/#use-rqt_console-to-filter-logs","title":"Use rqt_console to filter logs","text":"

    To filter logs, using rqt_console is useful:

    ros2 run rqt_console rqt_console\n

    For more details, refer to ROS 2 Documentation.

    "},{"location":"contributing/coding-guidelines/ros-nodes/console-logging/#useful-marco-expressions","title":"Useful marco expressions","text":"

To debug a program, sometimes you need to see which functions and lines of code are executed. In that case, you can use the __FILE__, __LINE__ and __FUNCTION__ macros:

void FooNode::on_timer() {\nRCLCPP_DEBUG(get_logger(), \"file: %s, line: %d, function: %s\", __FILE__, __LINE__, __FUNCTION__);\n}\n

    The example output is as follows:

    [DEBUG] [1671720414.395456931] [foo]: file: /path/to/file.cpp, line: 100, function: on_timer

    "},{"location":"contributing/coding-guidelines/ros-nodes/coordinate-system/","title":"Coordinate system","text":""},{"location":"contributing/coding-guidelines/ros-nodes/coordinate-system/#coordinate-system","title":"Coordinate system","text":""},{"location":"contributing/coding-guidelines/ros-nodes/coordinate-system/#overview","title":"Overview","text":"

    The commonly used coordinate systems include the world coordinate system, the vehicle coordinate system, and the sensor coordinate system.

    • The world coordinate system is a fixed coordinate system that defines the physical space in the environment where the vehicle is located.
    • The vehicle coordinate system is the vehicle's own coordinate system, which defines the vehicle's position and orientation in the world coordinate system.
    • The sensor coordinate system is the sensor's own coordinate system, which is used to define the sensor's position and orientation in the vehicle coordinate system.
    "},{"location":"contributing/coding-guidelines/ros-nodes/coordinate-system/#how-coordinates-are-used-in-autoware","title":"How coordinates are used in Autoware","text":"

In Autoware, coordinate systems are typically used to represent the position and movement of vehicles and obstacles in space. Coordinate systems are commonly used for path planning, perception, and control, and they help the vehicle decide how to avoid obstacles and plan a safe and efficient path of travel.

    1. Transformation of sensor data

  In Autoware, each sensor has its own coordinate system, and its data is expressed in those coordinates. In order to correlate the independent data from different sensors, we need to find the positional relationship between each sensor and the vehicle body. Once the mounting position of a sensor on the vehicle body is determined, it remains fixed while driving, so offline calibration can be used to determine the precise position of each sensor relative to the vehicle body.

    2. ROS TF2

  The TF2 system maintains a tree of coordinate transformations to represent the relationships between different coordinate systems. Each coordinate system is given a unique name, and the systems are connected by coordinate transformations. For how to use TF2, refer to the TF2 tutorial; a minimal lookup sketch is also shown below.
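
As a minimal sketch (the frame names are illustrative), a node can query the TF2 tree for the transform between two frames as follows:

#include <geometry_msgs/msg/transform_stamped.hpp>\n#include <tf2_ros/buffer.h>\n#include <tf2_ros/transform_listener.h>\n\n// Members of a node class\ntf2_ros::Buffer tf_buffer_{get_clock()};\ntf2_ros::TransformListener tf_listener_{tf_buffer_};\n\n// Latest available transform from the lidar frame to the vehicle frame\nconst geometry_msgs::msg::TransformStamped t =\n  tf_buffer_.lookupTransform(\"base_link\", \"lidar\", tf2::TimePointZero);\n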

    "},{"location":"contributing/coding-guidelines/ros-nodes/coordinate-system/#tf-tree","title":"TF tree","text":"

    In Autoware, a common coordinate system structure is shown below:

    graph TD\n    /earth --> /map\n    /map --> /base_link\n    /base_link --> /imu\n    /base_link --> /lidar\n    /base_link --> /gnss\n    /base_link --> /radar\n    /base_link --> /camera_link\n    /camera_link --> /camera_optical_link
• earth: the earth coordinate system describes the position of any point on the earth in terms of geodetic longitude, latitude, and altitude. In Autoware, the earth frame is only used in the GnssInsPositionStamped message.
• map: the map coordinate system is used to represent the location of points on a local map. Geographic coordinates are mapped into a plane rectangular coordinate system using UTM or MGRS. The map frame's axes point to the East, North, and Up directions, as explained in Coordinate Axes Conventions.
    • base_link: vehicle coordinate system, the origin of the coordinate system is the center of the rear axle of the vehicle.
• imu, lidar, gnss, radar: these are sensor frames, related to the vehicle coordinate system through their mounting positions.
• camera_link: camera_link is the ROS standard camera coordinate system.
• camera_optical_link: camera_optical_link is the image standard (optical) camera coordinate system.
    "},{"location":"contributing/coding-guidelines/ros-nodes/coordinate-system/#estimating-the-base_link-frame-by-using-the-other-sensors","title":"Estimating the base_link frame by using the other sensors","text":"

Generally, the localization sensors are not physically located at the base_link frame, so each sensor localizes with respect to its own frame; let's call it the sensor frame.

    We introduce a new frame naming convention: x_by_y:

    x: estimated frame name\ny: localization method/source\n

We cannot directly get the sensor frame because we would need the EKF module to estimate the base_link frame first.

    Without the EKF module the best we can do is to estimate Map[map] --> sensor_by_sensor --> base_link_by_sensor using this sensor.

    "},{"location":"contributing/coding-guidelines/ros-nodes/coordinate-system/#example-by-the-gnssins-sensor","title":"Example by the GNSS/INS sensor","text":"

    For the integrated GNSS/INS we use the following frames:

    flowchart LR\n    earth --> Map[map] --> gnss_ins_by_gnss_ins --> base_link_by_gnss_ins

    The gnss_ins_by_gnss_ins frame is obtained by the coordinates from GNSS/INS sensor. The coordinates are converted to map frame using the gnss_poser node.

Finally, the gnss_ins_by_gnss_ins frame represents the position of the gnss_ins frame, as estimated by the gnss_ins sensor, in the map frame.

Then, by using the static transformation between gnss_ins and the base_link frame, we can obtain the base_link_by_gnss_ins frame, which represents the base_link estimated by the gnss_ins sensor.

    References:

    • https://www.ros.org/reps/rep-0105.html#earth
    "},{"location":"contributing/coding-guidelines/ros-nodes/coordinate-system/#coordinate-axes-conventions","title":"Coordinate Axes Conventions","text":"

    We are using East, North, Up (ENU) coordinate axes convention by default throughout the stack.

    X+: East\nY+: North\nZ+: Up\n

    The position, orientation, velocity, acceleration are all defined in the same axis convention.

    Position by the GNSS/INS sensor is expected to be in earth frame.

Orientation, velocity, and acceleration from the GNSS/INS sensor are expected to be in the sensor frame, with axes parallel to the map frame.

If roll, pitch, and yaw are provided, they correspond to rotations around the X, Y, and Z axes, respectively.

    Rotation around:\nX+: roll\nY+: pitch\nZ+: yaw\n

    References:

    • https://www.ros.org/reps/rep-0103.html#axis-orientation
    "},{"location":"contributing/coding-guidelines/ros-nodes/coordinate-system/#how-they-can-be-created","title":"How they can be created","text":"
    1. Calibration of sensor

  The transformation between each sensor coordinate system and base_link can be obtained through sensor calibration. Please consult calibrating your sensors for instructions on how to calibrate your sensors.

    2. Localization

      The relationship between the base_link coordinate system and the map coordinate system is determined by the position and orientation of the vehicle, and can be obtained from the vehicle localization result.

    3. Geo-referencing of map data

  The geo-referencing information provides the transformation from the earth coordinate system to the local map coordinate system.

    "},{"location":"contributing/coding-guidelines/ros-nodes/directory-structure/","title":"Directory structure","text":""},{"location":"contributing/coding-guidelines/ros-nodes/directory-structure/#directory-structure","title":"Directory structure","text":"

    This document describes the directory structure of ROS nodes within Autoware.

    We'll use the package autoware_gnss_poser as an example.

    Note that this example does not reflect the actual autoware_gnss_poser and includes extra files and directories to demonstrate all possible package structures.

    "},{"location":"contributing/coding-guidelines/ros-nodes/directory-structure/#c-package","title":"C++ package","text":""},{"location":"contributing/coding-guidelines/ros-nodes/directory-structure/#entire-structure","title":"Entire structure","text":"
    • This is a reference on how the entire package might be structured.
    • A package may not have all the directories shown here.
    autoware_gnss_poser\n\u251c\u2500 package.xml\n\u251c\u2500 CMakeLists.txt\n\u251c\u2500 README.md\n\u2502\n\u251c\u2500 config\n\u2502   \u251c\u2500 gnss_poser.param.yaml\n\u2502   \u2514\u2500 another_non_ros_config.yaml\n\u2502\n\u251c\u2500 schema\n\u2502   \u2514\u2500 gnss_poser.schema.json\n\u2502\n\u251c\u2500 doc\n\u2502   \u251c\u2500 foo_document.md\n\u2502   \u2514\u2500 foo_diagram.svg\n\u2502\n\u251c\u2500 include  # for exporting headers\n\u2502   \u2514\u2500 autoware\n\u2502       \u2514\u2500 gnss_poser\n\u2502           \u2514\u2500 exported_header.hpp\n\u2502\n\u251c\u2500 src\n\u2502   \u251c\u2500 include\n\u2502   \u2502   \u251c\u2500 gnss_poser_node.hpp\n\u2502   \u2502   \u2514\u2500 foo.hpp\n\u2502   \u251c\u2500 gnss_poser_node.cpp\n\u2502   \u2514\u2500 bar.cpp\n\u2502\n\u251c\u2500 launch\n\u2502   \u251c\u2500 gnss_poser.launch.xml\n\u2502   \u2514\u2500 gnss_poser.launch.py\n\u2502\n\u2514\u2500 test\n    \u251c\u2500 test_foo.hpp  # or place under an `include` folder here\n    \u2514\u2500 test_foo.cpp\n
    "},{"location":"contributing/coding-guidelines/ros-nodes/directory-structure/#package-name","title":"Package name","text":"
    • All the packages in Autoware should be prefixed with autoware_.
• Even if the package exports a node, the package name should NOT have the _node suffix.
    • The package name should be in snake_case.
• path_smoother: \u274c (should be autoware_path_smoother)
• autoware_trajectory_follower_node: \u274c (should be autoware_trajectory_follower)
• autoware_geography_utils: \u2705
"},{"location":"contributing/coding-guidelines/ros-nodes/directory-structure/#package-folder","title":"Package folder","text":"
    autoware_gnss_poser\n\u251c\u2500 package.xml\n\u251c\u2500 CMakeLists.txt\n\u2514\u2500 README.md\n

    The package folder name should be the same as the package name.

    "},{"location":"contributing/coding-guidelines/ros-nodes/directory-structure/#packagexml","title":"package.xml","text":"
    • The package name should be entered within the <name> tag.
      • <name>autoware_gnss_poser</name>
    "},{"location":"contributing/coding-guidelines/ros-nodes/directory-structure/#cmakeliststxt","title":"CMakeLists.txt","text":"
    • The project() command should call the package name.
      • Example: project(autoware_gnss_poser)
    "},{"location":"contributing/coding-guidelines/ros-nodes/directory-structure/#exporting-a-composable-node-component-executables","title":"Exporting a composable node component executables","text":"

    For best practices and system efficiency, it is recommended to primarily use composable node components.

    This method facilitates easier deployment and maintenance within ROS environments.

    ament_auto_add_library(${PROJECT_NAME} SHARED\nsrc/gnss_poser_node.cpp\n)\n\nrclcpp_components_register_node(${PROJECT_NAME}\nPLUGIN \"autoware::gnss_poser::GNSSPoser\"\nEXECUTABLE ${PROJECT_NAME}_node\n)\n
    • If you are building:
      • only a single composable node component, the executable name should start with ${PROJECT_NAME}
      • multiple composable node components, the executable name is up to the developer.
    • All composable node component executables should have the _node suffix.
    "},{"location":"contributing/coding-guidelines/ros-nodes/directory-structure/#exporting-a-standalone-node-executable-without-composition-discouraged-for-most-cases","title":"Exporting a standalone node executable without composition (discouraged for most cases)","text":"

    Use of standalone executables should be limited to cases where specific needs such as debugging or tooling are required.

Exporting composable node component executables is generally preferred for standard operational use due to their flexibility and scalability within the ROS ecosystem.

    Assuming:

    • src/gnss_poser.cpp has the GNSSPoser class.
    • src/gnss_poser_node.cpp has the main function.
    • There is no composable node component registration.
    ament_auto_add_library(${PROJECT_NAME} SHARED\nsrc/gnss_poser.cpp\n)\n\nament_auto_add_executable(${PROJECT_NAME}_node src/gnss_poser_node.cpp)\n
    • The node executable:
      • should have _node suffix.
  • should start with ${PROJECT_NAME}.
    "},{"location":"contributing/coding-guidelines/ros-nodes/directory-structure/#config-and-schema","title":"config and schema","text":"
    autoware_gnss_poser\n\u2502\u2500 config\n\u2502   \u251c\u2500 gnss_poser.param.yaml\n\u2502   \u2514\u2500 another_non_ros_config.yaml\n\u2514\u2500 schema\n    \u2514\u2500 gnss_poser.schema.json\n
    "},{"location":"contributing/coding-guidelines/ros-nodes/directory-structure/#config","title":"config","text":"
• ROS parameter files use the extension .param.yaml.
    • Non-ROS parameters use the extension .yaml.

    Rationale: Different linting rules are used for ROS parameters and non-ROS parameters.

    "},{"location":"contributing/coding-guidelines/ros-nodes/directory-structure/#schema","title":"schema","text":"

    Place parameter definition files. See Parameters for details.

    "},{"location":"contributing/coding-guidelines/ros-nodes/directory-structure/#doc","title":"doc","text":"
    autoware_gnss_poser\n\u2514\u2500 doc\n    \u251c\u2500 foo_document.md\n    \u2514\u2500 foo_diagram.svg\n

    Place documentation files and link them from the README file.

    "},{"location":"contributing/coding-guidelines/ros-nodes/directory-structure/#include-and-src","title":"include and src","text":"
• Unless you specifically need to export headers, you shouldn't have an include directory under the package directory.
    • For most cases, follow Not exporting headers.
    • Library packages that export headers may follow Exporting headers.
    "},{"location":"contributing/coding-guidelines/ros-nodes/directory-structure/#not-exporting-headers","title":"Not exporting headers","text":"
    autoware_gnss_poser\n\u2514\u2500 src\n    \u251c\u2500 include\n    \u2502   \u251c\u2500 gnss_poser_node.hpp\n    \u2502   \u2514\u2500 foo.hpp\n    \u2502\u2500 gnss_poser_node.cpp\n    \u2514\u2500 bar.cpp\n\nOR\n\nautoware_gnss_poser\n\u2514\u2500 src\n    \u251c\u2500 gnss_poser_node.hpp\n    \u251c\u2500 gnss_poser_node.cpp\n    \u251c\u2500 foo.hpp\n    \u2514\u2500 bar.cpp\n
    • The source file exporting the node should:
      • have _node suffix.
        • Rationale: To distinguish from other source files.
      • NOT have autoware_ prefix.
        • Rationale: To avoid verbosity.
    • See Class design for more details on how to construct gnss_poser_node.hpp and gnss_poser_node.cpp files.
• It is up to the developer how to organize the source files under src.
      • Note: The include folder under src is optional.
    "},{"location":"contributing/coding-guidelines/ros-nodes/directory-structure/#exporting-headers","title":"Exporting headers","text":"
    autoware_gnss_poser\n\u2514\u2500 include\n    \u2514\u2500 autoware\n        \u2514\u2500 gnss_poser\n            \u2514\u2500 exported_header.hpp\n
    • autoware_gnss_poser/include folder should contain ONLY the autoware folder.
      • Rationale: When installing ROS debian packages, the headers are copied to the /opt/ros/$ROS_DISTRO/include/ directory. This structure is used to avoid conflicts with non-Autoware packages.
    • autoware_gnss_poser/include/autoware folder should contain ONLY the gnss_poser folder.
      • Rationale: Similarly, this structure is used to avoid conflicts with other packages.
    • autoware_gnss_poser/include/autoware/gnss_poser folder should contain the header files to be exported.

    Note: If ament_auto_package() command is used in the CMakeLists.txt file and autoware_gnss_poser/include folder exists, this include folder will be exported to the install folder as part of ament_auto_package.cmake

    Reference: https://docs.ros.org/en/humble/How-To-Guides/Ament-CMake-Documentation.html#adding-targets

    "},{"location":"contributing/coding-guidelines/ros-nodes/directory-structure/#launch","title":"launch","text":"
    autoware_gnss_poser\n\u2514\u2500 launch\n    \u251c\u2500 gnss_poser.launch.xml\n    \u2514\u2500 gnss_poser.launch.py\n
    • You may have multiple launch files here.
    • Unless you have a specific reason, use the .launch.xml extension.
      • Rationale: While the .launch.py extension is more flexible, it comes with a readability cost.
    • Avoid autoware_ prefix in the launch file names.
      • Rationale: To avoid verbosity.
    "},{"location":"contributing/coding-guidelines/ros-nodes/directory-structure/#test","title":"test","text":"
    autoware_gnss_poser\n\u2514\u2500 test\n    \u251c\u2500 test_foo.hpp  # or place under an `include` folder here\n    \u2514\u2500 test_foo.cpp\n

    Place source files for testing. See unit testing for details.

    "},{"location":"contributing/coding-guidelines/ros-nodes/directory-structure/#python-package","title":"Python package","text":"

    Warning

    Under Construction

    "},{"location":"contributing/coding-guidelines/ros-nodes/launch-files/","title":"Launch files","text":""},{"location":"contributing/coding-guidelines/ros-nodes/launch-files/#launch-files","title":"Launch files","text":""},{"location":"contributing/coding-guidelines/ros-nodes/launch-files/#overview","title":"Overview","text":"

Autoware uses the ROS 2 launch system to start up the software. Please see the official documentation to get a basic understanding of the ROS 2 launch system if you are not familiar with it.

    "},{"location":"contributing/coding-guidelines/ros-nodes/launch-files/#guideline","title":"Guideline","text":""},{"location":"contributing/coding-guidelines/ros-nodes/launch-files/#the-organization-of-launch-files-in-autoware","title":"The organization of launch files in Autoware","text":"

Autoware mainly has two repositories related to launch file organization: autoware.universe and autoware_launch.

    "},{"location":"contributing/coding-guidelines/ros-nodes/launch-files/#autowareuniverse","title":"autoware.universe","text":"

The autoware.universe repository contains the code of the main Autoware modules, and its launch directory is responsible for launching the nodes of each module. The Autoware software stack is organized according to the architecture, so the launch structure is made to mirror the architecture (splitting of files, namespaces). For example, the tier4_map_launch subdirectory corresponds to the map module, as do the other tier4_*_launch subdirectories.

    "},{"location":"contributing/coding-guidelines/ros-nodes/launch-files/#autoware_launch","title":"autoware_launch","text":"

The autoware_launch repository refers to autoware.universe. The main purpose of introducing this repository is to provide a general entry point to start the Autoware software stack, i.e., to call the launch file of each module.

    • The autoware.launch.xml is the basic launch file for road driving scenarios.

  As can be seen from the content, the entire launch file is divided into several different modules, including Vehicle, System, Map, Sensing, Localization, Perception, Planning, Control, etc. By setting the launch_* arguments to true or false, we can determine which modules are loaded.

• The logging_simulator.launch.xml is often used together with a recorded ROS bag to debug whether the target module (e.g., Sensing, Localization, or Perception) functions normally.
• The planning_simulator.launch.xml is based on the Planning Simulator tool, mainly used for testing/validation of the Planning module by simulating traffic rules, interactions with dynamic objects, and control commands to the ego vehicle.
• The e2e_simulator.launch.xml is the launcher for the digital twin simulation environment.
    graph LR\nA11[logging_simulator.launch.xml]-.->A10[autoware.launch.xml]\nA12[planning_simulator.launch.xml]-.->A10[autoware.launch.xml]\nA13[e2e_simulator.launch.xml]-.->A10[autoware.launch.xml]\n\nA10-->A21[tier4_map_component.launch.xml]\nA10-->A22[xxx.launch.py]\nA10-->A23[tier4_localization_component.launch.xml]\nA10-->A24[xxx.launch.xml]\nA10-->A25[tier4_sensing_component.launch.xml]\n\nA23-->A30[localization.launch.xml]\nA30-->A31[pose_estimator.launch.xml]\nA30-->A32[util.launch.xml]\nA30-->A33[pose_twist_fusion_filter.launch.xml]\nA30-->A34[xxx.launch.xml]\nA30-->A35[twist_estimator.launch.xml]\n\nA33-->A41[stop_filter.launch.xml]\nA33-->A42[ekf_localizer.launch.xml]\nA33-->A43[twist2accel.launch.xml]
    "},{"location":"contributing/coding-guidelines/ros-nodes/launch-files/#add-a-new-package-in-autoware","title":"Add a new package in Autoware","text":"

If a newly created package has an executable node, we expect a sample launch file and configuration within the package, just like the recommended structure shown on the previous directory structure page.

    In order to automatically load the newly added package when starting Autoware, you need to make some necessary changes to the corresponding launch file. For example, if using ICP instead of NDT as the pointcloud registration algorithm, you can modify the autoware.universe/launch/tier4_localization_launch/launch/pose_estimator/pose_estimator.launch.xml file to load the newly added ICP package.

    "},{"location":"contributing/coding-guidelines/ros-nodes/launch-files/#parameter-management","title":"Parameter management","text":"

Another purpose of introducing the autoware_launch repository is to facilitate the parameter management of Autoware. Consider this situation: if we want to integrate Autoware into a specific vehicle and modify parameters, we would have to fork autoware.universe, which also contains a lot of code other than parameters and is frequently updated by developers. By consolidating these parameters in autoware_launch, we can customize the Autoware parameters just by forking the autoware_launch repository. Taking the localization module as an example:

1. all the \u201claunch parameters\u201d for the localization component are listed in the files under autoware_launch/autoware_launch/config/localization.
    2. the \"launch parameters\" file paths are set in the autoware_launch/autoware_launch/launch/components/tier4_localization_component.launch.xml file.
3. in autoware.universe/launch/tier4_localization_launch/launch, the launch files load the \u201claunch parameters\u201d if the argument is given in the parameter configuration file. You can still use the default parameters in each package to launch tier4_localization_launch within autoware.universe.
    "},{"location":"contributing/coding-guidelines/ros-nodes/message-guidelines/","title":"Message guidelines","text":""},{"location":"contributing/coding-guidelines/ros-nodes/message-guidelines/#message-guidelines","title":"Message guidelines","text":""},{"location":"contributing/coding-guidelines/ros-nodes/message-guidelines/#format","title":"Format","text":"

    All messages should follow ROS message description specification.

    The accepted formats are:

    • .msg
    • .srv
    • .action
    "},{"location":"contributing/coding-guidelines/ros-nodes/message-guidelines/#naming","title":"Naming","text":"

    Under Construction

Use Array as a suffix when creating a plural type of a message (for example, geometry_msgs/msg/PoseArray is the plural of geometry_msgs/msg/Pose). This suffix is commonly used in common_interfaces.

    "},{"location":"contributing/coding-guidelines/ros-nodes/message-guidelines/#default-units","title":"Default units","text":"

    All the fields by default have the following units depending on their types:

• distance: meter (m)
• angle: radians (rad)
• time: second (s)
• speed: m/s
• velocity: m/s
• acceleration: m/s\u00b2
• angular velocity: rad/s
• angular acceleration: rad/s\u00b2

If a field in a message uses one of these default units, don't add any suffix or prefix denoting the unit.

    "},{"location":"contributing/coding-guidelines/ros-nodes/message-guidelines/#non-default-units","title":"Non-default units","text":"

For non-default units, use the following suffixes:

• distance in nanometers: _nm
• distance in micrometers: _um
• distance in millimeters: _mm
• distance in kilometers: _km
• angle in degrees (deg): _deg
• time in nanoseconds: _ns
• time in microseconds: _us
• time in milliseconds: _ms
• time in minutes: _min
• time in hours (h): _hour
• velocity in km/h: _kmph

    If a unit that you'd like to use doesn't exist here, create an issue/PR to add it to this list.

    "},{"location":"contributing/coding-guidelines/ros-nodes/message-guidelines/#message-field-types","title":"Message field types","text":"

For the list of types supported by ROS interfaces, see here.

    Also copied here for convenience:

• bool: bool
• byte: uint8_t
• char: char
• float32: float
• float64: double
• int8: int8_t
• uint8: uint8_t
• int16: int16_t
• uint16: uint16_t
• int32: int32_t
• uint32: uint32_t
• int64: int64_t
• uint64: uint64_t
• string: std::string
• wstring: std::u16string
"},{"location":"contributing/coding-guidelines/ros-nodes/message-guidelines/#arrays","title":"Arrays","text":"

    For arrays, use unbounded dynamic array type.

    Example:

    int32[] unbounded_integer_array\n
    "},{"location":"contributing/coding-guidelines/ros-nodes/message-guidelines/#enumerations","title":"Enumerations","text":"

    ROS 2 interfaces don't support enumerations directly.

It is possible to define integer constants and assign them to a non-constant integer field.

    Constants are written in CONSTANT_CASE.

Assign a different value to each constant.

    Example from shape_msgs/msg/SolidPrimitive.msg

    uint8 BOX=1\nuint8 SPHERE=2\nuint8 CYLINDER=3\nuint8 CONE=4\nuint8 PRISM=5\n\n# The type of the shape\nuint8 type\n
    "},{"location":"contributing/coding-guidelines/ros-nodes/message-guidelines/#comments","title":"Comments","text":"

    On top of the message, briefly explain what the message contains and/or what it is used for. For an example, see sensor_msgs/msg/Imu.msg.

    If necessary, add line comments before the fields that explain the context and/or meaning.

    For simple fields like x, y, z, w you might not need to add comments.

Even though it is not strictly checked, try not to exceed 100 characters per line.

    Example:

    # Number of times the vehicle performed an emergency brake\nuint32 count_emergency_brake\n\n# Seconds passed since the last emergency brake\nuint64 duration\n
    "},{"location":"contributing/coding-guidelines/ros-nodes/message-guidelines/#example-usages","title":"Example usages","text":"
    • Don't use unit suffixes for default types:
      • Bad: float32 path_length_m
      • Good: float32 path_length
    • Don't prefix the units:
      • Bad: float32 kmph_velocity_vehicle
      • Good: float32 velocity_vehicle_kmph
• Use the recommended suffixes if they are available in the list above:
      • Bad: float32 velocity_vehicle_km_h
      • Good: float32 velocity_vehicle_kmph
    "},{"location":"contributing/coding-guidelines/ros-nodes/parameters/","title":"Parameters","text":""},{"location":"contributing/coding-guidelines/ros-nodes/parameters/#parameters","title":"Parameters","text":"

Autoware ROS nodes have declared parameters whose values are provided during node startup in the form of a parameter file. All the expected parameters, with corresponding values, should exist in the parameter file. Depending on the application, the parameter values might need to be modified.

    Find more information on parameters from the official ROS documentation:

    • Understanding ROS 2 Parameters
    • About ROS 2 Parameters
    "},{"location":"contributing/coding-guidelines/ros-nodes/parameters/#workflow","title":"Workflow","text":"

    A ROS package which uses the declare_parameter(...) function should:

    • use the declare_parameter(...) without a default value
    • create a parameter file
    • create a schema file

    The rationale behind this workflow is to have a verified single source of truth to pass to the ROS node and to be used in the web documentation. The approach reduces the risk of using invalid parameter values and makes maintenance of documentation easier. This is achieved by:

    • declare_parameter(...) throws an exception if an expected parameter is missing in the parameter file
    • the schema validates the parameter file in the CI and renders a parameter table, as depicted in the graphics below

      flowchart TD\n    NodeSchema[Schema file: *.schema.json]\n    ParameterFile[Parameter file: *.param.yaml]\n    WebDocumentation[Web documentation table]\n\n    NodeSchema -->|Validation| ParameterFile\n    NodeSchema -->|Generate| WebDocumentation

    Note: a parameter value can still be modified and bypass the validation, as there is no validation during runtime.

    "},{"location":"contributing/coding-guidelines/ros-nodes/parameters/#declare-parameter-function","title":"Declare Parameter Function","text":"

It is the declare_parameter(...) function which sets the parameter values during node startup.

    declare_parameter<INSERT_TYPE>(\"INSERT_PARAMETER_1_NAME\"),\ndeclare_parameter<INSERT_TYPE>(\"INSERT_PARAMETER_N_NAME\")\n

As there is no default_value provided, the function throws an exception if a parameter is missing in the provided *.param.yaml file. Use one of the C++ types listed below for the declare_parameter(...) function, replacing INSERT_TYPE.

• PARAMETER_BOOL: bool
• PARAMETER_INTEGER: int64_t
• PARAMETER_DOUBLE: double
• PARAMETER_STRING: std::string
• PARAMETER_BYTE_ARRAY: std::vector<uint8_t>
• PARAMETER_BOOL_ARRAY: std::vector<bool>
• PARAMETER_INTEGER_ARRAY: std::vector<int64_t>
• PARAMETER_DOUBLE_ARRAY: std::vector<double>
• PARAMETER_STRING_ARRAY: std::vector<std::string>

    The table has been derived from Parameter Type and Parameter Value.
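
As a minimal sketch (the parameter names are illustrative), the calls inside a node constructor could look like this:

// No default values are passed, so the node fails at startup with an exception\n// if any of these entries are missing from the provided *.param.yaml file.\nconst double update_rate_hz = declare_parameter<double>(\"update_rate_hz\");\nconst std::vector<std::string> sensor_frames =\n  declare_parameter<std::vector<std::string>>(\"sensor_frames\");\n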

    See example: Lidar Apollo Segmentation TVM Nodes declare function

    "},{"location":"contributing/coding-guidelines/ros-nodes/parameters/#parameter-file","title":"Parameter File","text":"

    The parameter file is minimal as there is no need to provide the user with additional information, e.g., description or type. This is because the associated schema file provides the additional information. Use the template below as a starting point for a ROS node.

    /**:\nros__parameters:\nINSERT_PARAMETER_1_NAME: INSERT_PARAMETER_1_VALUE\nINSERT_PARAMETER_N_NAME: INSERT_PARAMETER_N_VALUE\n

    Note: /** is used instead of the explicit node namespace, this allows the parameter file to be passed to a ROS node which has been remapped.

    To adapt the template to the ROS node, replace each INSERT_PARAMETER_..._NAME and INSERT_PARAMETER_..._VALUE for all parameters. Each declare_parameter(...) takes one parameter as input. All the parameter files should have the .param.yaml suffix so that the auto-format can be applied properly.

    Autoware has the following two types of parameter files for ROS packages:

    • Node parameter file
      • Node parameter files store the default parameters provided for each package in Autoware.
        • For example, the parameter of behavior_path_planner
      • All nodes in Autoware must have a parameter file if ROS parameters are declared in the node.
      • For FOO_package, the parameter is expected to be stored in FOO_package/config.
      • The launch file for individual packages must load node parameter by default:
    <launch>\n<arg name=\"foo_node_param_path\" default=\"$(find-pkg-share FOO_package)/config/foo_node.param.yaml\" />\n\n<node pkg=\"FOO_package\" exec=\"foo_node\">\n...\n    <param from=\"$(var foo_node_param_path)\" />\n</node>\n</launch>\n
    • Launch parameter file
      • When a user creates a launch package for the user's vehicle, the user should copy node parameter files for the nodes that are called in the launch file as \"launch parameter files\".
      • Launch parameter files are then customized specifically for user's vehicle.
        • For example, the customized parameter of behavior_path_planner stored under autoware_launch
      • The examples for launch parameter files are stored under autoware_launch.
    "},{"location":"contributing/coding-guidelines/ros-nodes/parameters/#json-schema","title":"JSON Schema","text":"

JSON Schema is used to validate the parameter file(s), ensuring that they have the correct structure and content. Using JSON Schema for this purpose is considered best practice for cloud-native development. The schema template below shall be used as a starting point when defining the schema for a ROS node.

    {\n\"$schema\": \"http://json-schema.org/draft-07/schema#\",\n\"title\": \"INSERT_TITLE\",\n\"type\": \"object\",\n\"definitions\": {\n\"INSERT_ROS_NODE_NAME\": {\n\"type\": \"object\",\n\"properties\": {\n\"INSERT_PARAMETER_1_NAME\": {\n\"type\": \"INSERT_TYPE\",\n\"description\": \"INSERT_DESCRIPTION\",\n\"default\": \"INSERT_DEFAULT\",\n\"INSERT_BOUND_CONDITION(S)\": INSERT_BOUND_VALUE(S)\n},\n\"INSERT_PARAMETER_N_NAME\": {\n\"type\": \"INSERT_TYPE\",\n\"description\": \"INSERT_DESCRIPTION\",\n\"default\": \"INSERT_DEFAULT\",\n\"INSERT_BOUND_CONDITION(S)\": INSERT_BOUND_VALUE(S)\n}\n},\n\"required\": [\"INSERT_PARAMETER_1_NAME\", \"INSERT_PARAMETER_N_NAME\"],\n\"additionalProperties\": false\n}\n},\n\"properties\": {\n\"/**\": {\n\"type\": \"object\",\n\"properties\": {\n\"ros__parameters\": {\n\"$ref\": \"#/definitions/INSERT_ROS_NODE_NAME\"\n}\n},\n\"required\": [\"ros__parameters\"],\n\"additionalProperties\": false\n}\n},\n\"required\": [\"/**\"],\n\"additionalProperties\": false\n}\n

    The schema file path is INSERT_PATH_TO_PACKAGE/schema/ and the schema file name is INSERT_NODE_NAME.schema.json. To adapt the template to the ROS node, replace each INSERT_... and add all parameters 1..N.

    See example: Image Projection Based Fusion - Pointpainting schema

    "},{"location":"contributing/coding-guidelines/ros-nodes/parameters/#attributes","title":"Attributes","text":"

    Parameters have several attributes, some of which are required and some optional. The optional attributes are highly encouraged when applicable, as they provide useful information about a parameter and can ensure that its value stays within its bounds.

    "},{"location":"contributing/coding-guidelines/ros-nodes/parameters/#required","title":"Required","text":"
    • name
    • type
      • see JSON Schema types
    • description
    "},{"location":"contributing/coding-guidelines/ros-nodes/parameters/#optional","title":"Optional","text":"
    • default
      • a tested and verified value, see JSON Schema default
    • bound(s)
      • type dependent, e.g., integer, range and size
    "},{"location":"contributing/coding-guidelines/ros-nodes/parameters/#tips-and-tricks","title":"Tips and Tricks","text":"

    Using well established standards enables the use of conventional tooling. Below is an example of how to link a schema to the parameter file(s) using VS Code. This provides a developer with convenient features such as auto-completion and parameter bound validation.

    In the root directory where the project is hosted, create a .vscode folder with two files: extensions.json containing

    {\n\"recommendations\": [\"redhat.vscode-yaml\"]\n}\n

    and settings.json containing

    {\n\"yaml.schemas\": {\n\"./INSERT_PATH_TO_PACKAGE/schema/INSERT_NODE_NAME.schema.json\": \"**/INSERT_NODE_NAME/config/*.param.yaml\"\n}\n}\n

    The RedHat YAML extension enables validation of YAML files using JSON Schema and the \"yaml.schemas\" setting associates the *.schema.json file with all *.param.yaml files in the config/ folder.

    "},{"location":"contributing/coding-guidelines/ros-nodes/task-scheduling/","title":"Task scheduling","text":""},{"location":"contributing/coding-guidelines/ros-nodes/task-scheduling/#task-scheduling","title":"Task scheduling","text":"

    Warning

    Under Construction

    "},{"location":"contributing/coding-guidelines/ros-nodes/topic-namespaces/","title":"Topic namespaces","text":""},{"location":"contributing/coding-guidelines/ros-nodes/topic-namespaces/#topic-namespaces","title":"Topic namespaces","text":""},{"location":"contributing/coding-guidelines/ros-nodes/topic-namespaces/#overview","title":"Overview","text":"

    ROS allows topics, parameters and nodes to be namespaced, which provides the following benefits:

    • Multiple instances of the same node type will not cause naming clashes.
    • Topics published by a node can be automatically namespaced with the node's namespace providing a meaningful and easily-visible connection.
    • Prevents the root namespace from becoming cluttered.
    • Helps to maintain separation of concerns.

    This page focuses on how to use namespaces in Autoware and shows some useful examples. For basic information on topic namespaces, refer to this tutorial.

    "},{"location":"contributing/coding-guidelines/ros-nodes/topic-namespaces/#how-topics-should-be-named-in-node","title":"How topics should be named in node","text":"

    Autoware divides its nodes into the following functional categories and adds a namespace prefix to each node according to its category.

    • localization
    • perception
    • planning
    • control
    • sensing
    • vehicle
    • map
    • system

    When a node is run in a namespace, all topics which that node publishes are given that same namespace. All nodes in the Autoware stack must support namespaces by avoiding practices such as publishing topics in the global namespace.

    In general, topics should be namespaced based on the function of the node which produces them and not the node (or nodes) which consume them.

    Classify topics as input or output topics based on whether they are subscribed to or published by the node. Within the node, an input topic is named input/topic_name and an output topic is named output/topic_name.

    Configure the topics in the node's launch file. Taking the joy_controller node as an example, the input and output topics are set and remapped in the joy_controller.launch.xml file as follows.

    <launch>\n<arg name=\"input_joy\" default=\"/joy\"/>\n<arg name=\"input_odometry\" default=\"/localization/kinematic_state\"/>\n\n<arg name=\"output_control_command\" default=\"/external/$(var external_cmd_source)/joy/control_cmd\"/>\n<arg name=\"output_external_control_command\" default=\"/api/external/set/command/$(var external_cmd_source)/control\"/>\n<arg name=\"output_shift\" default=\"/api/external/set/command/$(var external_cmd_source)/shift\"/>\n<arg name=\"output_turn_signal\" default=\"/api/external/set/command/$(var external_cmd_source)/turn_signal\"/>\n<arg name=\"output_heartbeat\" default=\"/api/external/set/command/$(var external_cmd_source)/heartbeat\"/>\n<arg name=\"output_gate_mode\" default=\"/control/gate_mode_cmd\"/>\n<arg name=\"output_vehicle_engage\" default=\"/vehicle/engage\"/>\n\n<node pkg=\"joy_controller\" exec=\"joy_controller\" name=\"joy_controller\" output=\"screen\">\n<remap from=\"input/joy\" to=\"$(var input_joy)\"/>\n<remap from=\"input/odometry\" to=\"$(var input_odometry)\"/>\n\n<remap from=\"output/control_command\" to=\"$(var output_control_command)\"/>\n<remap from=\"output/external_control_command\" to=\"$(var output_external_control_command)\"/>\n<remap from=\"output/shift\" to=\"$(var output_shift)\"/>\n<remap from=\"output/turn_signal\" to=\"$(var output_turn_signal)\"/>\n<remap from=\"output/gate_mode\" to=\"$(var output_gate_mode)\"/>\n<remap from=\"output/heartbeat\" to=\"$(var output_heartbeat)\"/>\n<remap from=\"output/vehicle_engage\" to=\"$(var output_vehicle_engage)\"/>\n</node>\n</launch>\n
    "},{"location":"contributing/coding-guidelines/ros-nodes/topic-namespaces/#topic-names-in-the-code","title":"Topic names in the code","text":"
    1. Use the private namespace prefix ~ so that the namespace given in the launch configuration is applied (topic names should not start from the root /).

    2. Add the ~/input or ~/output namespace before the topic name used to communicate with other nodes.

      e.g., the obstacle_avoidance_planner node uses topic names of the form ~/input/topic_name to subscribe to topics.

      objects_sub_ = create_subscription<PredictedObjects>(\n\"~/input/objects\", rclcpp::QoS{10},\nstd::bind(&ObstacleAvoidancePlanner::onObjects, this, std::placeholders::_1));\n

      e.g., the obstacle_avoidance_planner node uses topic names of the form ~/output/topic_name to publish topics.

      traj_pub_ = create_publisher<Trajectory>(\"~/output/path\", 1);\n
    3. Topics used for visualization or debugging should have the ~/debug/ namespace.

      e.g., the obstacle_avoidance_planner node publishes debug and visualization information using topic names of the form ~/debug/topic_name.

      debug_markers_pub_ =\ncreate_publisher<visualization_msgs::msg::MarkerArray>(\"~/debug/marker\", durable_qos);\n\ndebug_msg_pub_ =\ncreate_publisher<tier4_debug_msgs::msg::StringStamped>(\"~/debug/calculation_time\", 1);\n

      The namespace configured at launch will be prepended to these topic names, so the resulting topic names will be as follows:

      /planning/scenario_planning/lane_driving/motion_planning/obstacle_avoidance_planner/debug/marker
      /planning/scenario_planning/lane_driving/motion_planning/obstacle_avoidance_planner/debug/calculation_time

    4. Rationale: we want topic names to be remappable and configurable from launch files.

    "},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/","title":"Topic message handling guideline","text":""},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/#topic-message-handling-guideline","title":"Topic message handling guideline","text":""},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/#introduction","title":"Introduction","text":"

    This page provides the coding guideline for topic message handling in Autoware. It introduces a recommended manner that differs from the conventional one, which is roughly explained in the Discussions page. Refer to that page to understand the basic concept of the recommended manner. Sample source code referred to from this document can be found in ros2_subscription_examples.

    "},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/#conventional-message-handling-manner","title":"Conventional message handling manner","text":"

    First, let us look at the conventional manner of handling messages that is commonly used. The ROS 2 Tutorials are one of the most cited references for ROS 2 applications, including Autoware. They implicitly recommend that each message received by a subscription be referred to and processed by a dedicated callback function. Autoware follows that manner thoroughly.

      steer_sub_ = create_subscription<SteeringReport>(\n\"input/steering\", 1,\n[this](SteeringReport::SharedPtr msg) { current_steer_ = msg->steering_tire_angle; });\n

    In the code above, when a topic message whose name is input/steering is received, an anonymous function whose body is {current_steer_ = msg->steering_tire_angle;} is executed as a callback in a thread. The callback function is always executed when the message is received, which wastes computing resources if the message is not always needed. Besides, waking up a thread incurs computational overhead.

    "},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/#recommended-manner","title":"Recommended manner","text":"

    This section introduces the recommended manner, in which a message is taken with the Subscription->take() method only when the message is needed. In the sample code below, the Subscription->take() method is called during the execution of another callback function, such as a timer callback. In most cases, the Subscription->take() method is called before a received message is consumed by the main logic. In this manner, a topic message is retrieved from the subscription queue, the queue embedded in the subscription object, instead of being delivered through a subscription callback function. To be precise, you have to program your code so that the subscription callback function is not automatically called.

      SteeringReport msg;\nrclcpp::MessageInfo msg_info;\nif (sub_->take(msg, msg_info)) {\n// processing and publishing after this\n

    Using this manner has the following benefits.

    • It can reduce the number of calls to subscription callback functions
    • There is no need to take a topic message from a subscription that the main logic does not consume
    • There is no mandatory thread wake-up to execute a callback function, which avoids the pitfalls of multi-threaded programming such as data races and the need for exclusive locking
    "},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/#manners-to-handle-topic-message-data","title":"Manners to handle topic message data","text":"

    This section introduces four manners, including the recommended ones.

    "},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/#1-obtain-data-by-calling-subscription-take","title":"1. Obtain data by calling Subscription->take()","text":"

    To use the recommended manner using Subscription->take(), you basically need to do two things below.

    1. Prevent a callback function from being called when a topic message is received
    2. Call take() method of a subscription object when a topic message is needed

    You can see an example of the typical use of take() method in ros2_subscription_examples/simple_examples/src/timer_listener.cpp.

    "},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/#prevent-calling-a-callback-function","title":"Prevent calling a callback function","text":"

    To prevent a callback function from being called automatically, the callback function has to belong to a callback group whose callback functions are not added to any executor. According to the API specification of create_subscription, registering a callback function to an rclcpp::Subscription based object is mandatory even if the callback function has no operation. Here is a sample code snippet from ros2_subscription_examples/simple_examples/src/timer_listener.cpp.

        rclcpp::CallbackGroup::SharedPtr cb_group_not_executed = this->create_callback_group(\nrclcpp::CallbackGroupType::MutuallyExclusive, false);\nauto subscription_options = rclcpp::SubscriptionOptions();\nsubscription_options.callback_group = cb_group_not_executed;\n\nrclcpp::QoS qos(rclcpp::KeepLast(10));\nif (use_transient_local) {\nqos = qos.transient_local();\n}\n\nsub_ = create_subscription<std_msgs::msg::String>(\"chatter\", qos, not_executed_callback, subscription_options);\n

    In the code above, cb_group_not_executed is created by calling create_callback_group with the second argument false. Any callback function which belongs to the callback group will not be called by an executor. If the callback_group member of subscription_options is set to cb_group_not_executed, then not_executed_callback will not be called when a corresponding topic message chatter is received. The second argument to create_callback_group is defined as follows.

    rclcpp::CallbackGroup::SharedPtr create_callback_group(rclcpp::CallbackGroupType group_type, \\\n                                  bool automatically_add_to_executor_with_node = true)\n

    When automatically_add_to_executor_with_node is set to true, callback functions included in a node that is added to an executor will be automatically called by the executor.

    "},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/#call-take-method-of-subscription-object","title":"Call take() method of Subscription object","text":"

    To take a topic message from the Subscription based object, the take() method is called at the expected time. Here is a sample code snippet from ros2_subscription_examples/simple_examples/src/timer_listener.cpp using take() method.

      std_msgs::msg::String msg;\nrclcpp::MessageInfo msg_info;\nif (sub_->take(msg, msg_info)) {\nRCLCPP_INFO(this->get_logger(), \"Catch message\");\nRCLCPP_INFO(this->get_logger(), \"I heard: [%s]\", msg.data.c_str());\n

    In the code above, take(msg, msg_info) is called on the sub_ object instantiated from the rclcpp::Subscription class. It is called in a timer driven callback function. msg and msg_info hold a message body and its metadata respectively. If there is a message in the subscription queue when take(msg, msg_info) is called, the message is copied to msg. take(msg, msg_info) returns true if a message is successfully taken from the subscription, in which case the above code prints out the string data of the message with RCLCPP_INFO. take(msg, msg_info) returns false if no message is taken from the subscription. When take(msg, msg_info) is called, if the size of the subscription queue is greater than one and there are two or more messages in the queue, the oldest message is copied to msg. If the size of the queue is one, the latest message is always obtained.

    Note

    You can check for the presence of an incoming message with the return value of the take() method. However, you have to take care because of the destructive nature of the take() method: it modifies the subscription queue. The take() method is irreversible and there is no undo operation, so checking for an incoming message with the take() method alone always changes the subscription queue. If you want to check without changing the subscription queue, rclcpp::WaitSet is recommended. Refer to [supplement] Use rclcpp::WaitSet for more detail.

    Note

    The take() method can only obtain a message which is passed through DDS as inter-process communication. You must not use it for intra-process communication because intra-process communication is based on a different software stack within rclcpp. Refer to [supplement] Obtain a received message through intra-process communication for the intra-process case.

    "},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/#11-obtain-serialized-message-from-subscription","title":"1.1 Obtain Serialized Message from Subscription","text":"

    ROS 2 provides the Serialized Message feature, which supports communication with arbitrary message types as described in Class SerializedMessage. It is used by topic_state_monitor in Autoware. You have to use the take_serialized() method instead of the take() method to obtain a rclcpp::SerializedMessage based message from a subscription.

    Here is a sample code snippet from ros2_subscription_examples/simple_examples/src/timer_listener_serialized_message.cpp.

          // receive the serialized message.\nrclcpp::MessageInfo msg_info;\nauto msg = sub_->create_serialized_message();\n\nif (sub_->take_serialized(*msg, msg_info) == false) {\nreturn;\n}\n

    In the code above, msg is created by create_serialized_message() to store a received message, and its type is std::shared_ptr<rclcpp::SerializedMessage>. You can obtain a message of type rclcpp::SerializedMessage using the take_serialized() method. Note that the take_serialized() method needs a reference type as its first argument. Since msg is a pointer, *msg should be passed as the first argument to take_serialized().
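    If the typed content of the serialized message is needed afterwards, it can be restored with rclcpp::Serialization. The snippet below is a minimal sketch that continues from the msg obtained above; it assumes, for illustration only, that the topic carries std_msgs::msg::String and that the rclcpp/serialization.hpp and std_msgs/msg/string.hpp headers are included.

      // Deserialize the rclcpp::SerializedMessage taken above into a typed message.\n// The topic type std_msgs::msg::String is an assumption made for this example.\nrclcpp::Serialization<std_msgs::msg::String> serializer;\nstd_msgs::msg::String deserialized_msg;\nserializer.deserialize_message(msg.get(), &deserialized_msg);\nRCLCPP_INFO(this->get_logger(), \"I heard: [%s]\", deserialized_msg.data.c_str());\n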

    Note

    ROS 2's rclcpp supports both rclcpp::LoanedMessage and rclcpp::SerializedMessage. If zero copy communication via loaned messages is introduced to Autoware, the take_loaned() method should be used for that communication instead. In this document, the explanation of the take_loaned() method is omitted because it is not used in Autoware at this time (May 2024).

    "},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/#2-obtain-multiple-data-stored-in-subscription-queue","title":"2. Obtain multiple data stored in Subscription Queue","text":"

    A subscription object can hold multiple messages in its queue if a queue size larger than one is configured in the QoS setting. The conventional manner using a callback function forces one callback execution per message; in other words, there is a constraint that a single cycle of a callback function processes a single message. Note that with the conventional manner, if there are one or more messages in the subscription queue, the oldest one is taken and a thread is assigned to execute the callback function, and this repeats until the queue is empty. The take() method alleviates this limitation: it can be called in multiple iterations, so that a single cycle of a callback function processes multiple messages taken by take().

    Here is a sample code, taken from ros2_subscription_examples/simple_examples/src/timer_batch_listener.cpp which calls the take() method in a single cycle of a callback function.

          std_msgs::msg::String msg;\nrclcpp::MessageInfo msg_info;\nwhile (sub_->take(msg, msg_info))\n{\nRCLCPP_INFO(this->get_logger(), \"Catch message\");\nRCLCPP_INFO(this->get_logger(), \"I heard: [%s]\", msg.data.c_str());\n

    In the code above, while (sub_->take(msg, msg_info)) continues to take messages from the subscription queue until the queue is empty, and each taken message is processed per iteration. Note that you must determine the size of the subscription queue by considering both the invocation frequency of the callback function and the reception frequency of the messages. For example, if the callback function is invoked at 10 Hz and topic messages are received at 50 Hz, the size of the subscription queue must be at least 5 to avoid losing received messages.
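    As an illustration of this sizing rule, the following minimal sketch assumes messages arriving at 50 Hz and a 10 Hz timer callback that drains the queue with the while loop shown above, so the subscription queue is given a depth of 5. The names timer_ and consume_messages() are hypothetical.

      // Reception at 50 Hz, consumption at 10 Hz -> keep at least the last 5 messages.\nrclcpp::QoS qos(rclcpp::KeepLast(5));\nsub_ = create_subscription<std_msgs::msg::String>(\"chatter\", qos, not_executed_callback, subscription_options);\n\n// 10 Hz timer; consume_messages() runs the while (sub_->take(msg, msg_info)) loop shown above.\ntimer_ = create_wall_timer(std::chrono::milliseconds(100), [this]() { consume_messages(); });\n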

    Assigning a thread to execute a callback function per message causes performance overhead. You can use the manner introduced in this section to avoid that overhead. It is especially effective when there is a large difference between the reception frequency and the consumption frequency. For example, even if a message such as a CAN message is received at more than 100 Hz, the user logic may consume messages at a slower frequency such as 10 Hz. In such a case, the user logic should retrieve the required number of messages with the take() method to avoid the unexpected overhead.

    "},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/#3-obtain-data-by-calling-subscription-take-and-then-call-a-callback-function","title":"3. Obtain data by calling Subscription->take and then call a callback function","text":"

    You can combine the take() (strictly take_type_erased()) method and the callback function to process received messages in a consistent way. Using this combination does not require waking up a thread. Here is a sample code snippet from ros2_subscription_examples/simple_examples/src/timer_listener_using_callback.cpp.

          auto msg = sub_->create_message();\nrclcpp::MessageInfo msg_info;\nif (sub_->take_type_erased(msg.get(), msg_info)) {\nsub_->handle_message(msg, msg_info);\n

    In the code above, a message is taken by the take_type_erased() method before a registered callback function is called via the handle_message() method. Note that you must use take_type_erased() instead of take(). take_type_erased() needs void type data as its first argument, so you must use the get() method to convert msg, whose type is shared_ptr<void>, to a void pointer. Then the handle_message() method is called with the obtained message, and the registered callback function is called within handle_message(). You don't need to care about the message type which is passed to take_type_erased() and handle_message(); you can define the message variable as auto msg = sub_->create_message();. You can also refer to the API documentation for create_message(), take_type_erased() and handle_message().

    "},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/#4-obtain-data-by-a-callback-function","title":"4. Obtain data by a callback function","text":"

    The conventional manner typically used in ROS 2 applications, obtaining a message through a callback function, is also available. If you don't use a callback group with automatically_add_to_executor_with_node = false, a registered callback function will be called automatically by the executor when a topic message is received. One advantage of this manner is that you don't have to care whether a topic message is passed through inter-process or intra-process communication. Remember that take() can only be used for inter-process communication via DDS, while a separate mechanism provided by rclcpp is used for intra-process communication.
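    For completeness, here is a minimal sketch of this conventional manner. The subscription is left in the default callback group, so the executor that spins the node invokes the callback for every received message, no matter whether chatter arrives via inter-process or intra-process communication; the member name sub_ is illustrative.

      // The callback stays in the default callback group, so the executor spinning\n// this node calls it automatically for every received chatter message.\nsub_ = create_subscription<std_msgs::msg::String>(\n  \"chatter\", rclcpp::QoS(rclcpp::KeepLast(10)),\n  [this](std_msgs::msg::String::SharedPtr msg) {\n    RCLCPP_INFO(this->get_logger(), \"I heard: [%s]\", msg->data.c_str());\n  });\n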

    "},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/#appendix","title":"Appendix","text":"

    A callback function is used to obtain a topic message in most ROS 2 applications, almost as if it were a rule or a custom. As this page explains, you can use the Subscription->take() method to obtain a topic message without calling a subscription callback function. This manner is also documented in Template Class Subscription - rclcpp 16.0.8 documentation.

    Many ROS 2 users may hesitate to use the take() method because they are not familiar with it and documentation about take() is scarce, but it is widely used in the rclcpp::Executor implementation, as shown in the excerpt from rclcpp/executor.cpp below. So it turns out that you are indirectly using the take() method, whether you know it or not.

        std::shared_ptr<void> message = subscription->create_message();\ntake_and_do_error_handling(\n\"taking a message from topic\",\nsubscription->get_topic_name(),\n[&]() {return subscription->take_type_erased(message.get(), message_info);},\n[&]() {subscription->handle_message(message, message_info);});\n

    Note

    Strictly speaking, the executor calls the take_type_erased() method, not the take() method.

    However, take_type_erased() is the underlying implementation of take(); take() internally calls take_type_erased().

    If an rclcpp::Executor based object (an executor) is programmed to call a callback function, the executor itself determines when to do so. Because the executor calls callback functions on a best-effort basis, a message is not guaranteed to be referenced or processed even though it has been received. Therefore, it is desirable to call the take() method directly to ensure that a message is referenced or processed at the intended time.

    As of May 2024, the recommended manners are beginning to be used in Autoware Universe. See the following PR if you want an example in Autoware Universe.

    feat(tier4_autoware_utils, obstacle_cruise): change to read topic by polling #6702

    "},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/supp-intra-process-comm/","title":"[supplement] Obtain a received message through intra-process communication","text":""},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/supp-intra-process-comm/#supplement-obtain-a-received-message-through-intra-process-communication","title":"[supplement] Obtain a received message through intra-process communication","text":""},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/supp-intra-process-comm/#topic-message-handling-in-intra-process-communication","title":"Topic message handling in intra-process communication","text":"

    rclcpp supports intra-process communication. As explained in the Topic message handling guideline, the take() method cannot be used for intra-process communication; it can only return a topic message received through inter-process communication. However, equivalent methods are provided for intra-process communication, similar to the inter-process methods described in obtain data by calling Subscription->take and then call a callback function. The take_data() method is provided to obtain received data in the case of intra-process communication, and the received data must then be processed through the execute() method. Because the return value of take_data() has a complicated data structure, the execute() method should be used together with the take_data() method. Refer to Template Class SubscriptionIntraProcess - rclcpp 16.0.8 documentation for more detail on take_data() and execute().

    "},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/supp-intra-process-comm/#coding-manner","title":"Coding manner","text":"

    To handle messages via intra-process communication, call the take_data() method and then the execute() method as shown below.

    // Execute any entities of the Waitable that may be ready\nstd::shared_ptr<void> data = waitable.take_data();\nwaitable.execute(data);\n

    Here is a sample program in ros2_subscription_examples/intra_process_talker_listener/src/timer_listener_intra_process.cpp at main · takam5f2/ros2_subscription_examples. You can run the program as shown below. If you set use_intra_process_comms to true, intra-process communication is performed, while if you set it to false, inter-process communication is performed.

    ros2 launch intra_process_talker_listener talker_listener_intra_process.launch.py use_intra_process_comms:=true\n

    Here is a snippet of ros2_subscription_examples/intra_process_talker_listener/src/timer_listener_intra_process.cpp at main · takam5f2/ros2_subscription_examples.

          // check if intra-process communication is enabled.\nif (this->get_node_options().use_intra_process_comms()){\n\n// get the intra-process subscription's waitable.\nauto intra_process_sub = sub_->get_intra_process_waitable();\n\n// check if the waitable has data.\nif (intra_process_sub->is_ready(nullptr) == true) {\n\n// take the data and execute the callback.\nstd::shared_ptr<void> data = intra_process_sub->take_data();\n\nRCLCPP_INFO(this->get_logger(), \" Intra-process communication is performed.\");\n\n// execute the callback.\nintra_process_sub->execute(data);\n

    Below is a line-by-line explanation of the above code.

    • if (this->get_node_options().use_intra_process_comms()){

      • The statement checks whether intra-process communication is enabled by using NodeOptions
    • auto intra_process_sub = sub_->get_intra_process_waitable();

      • The statement gets the underlying waitable object which performs intra-process communication
    • if (intra_process_sub->is_ready(nullptr) == true) {

      • The statement checks if a message has already been received through intra-process communication
      • The argument of is_ready() is of type rcl_wait_set_t, but because the argument is not used within is_ready(), nullptr is passed for the moment.
        • Passing nullptr is currently a workaround and carries no particular meaning.
    • std::shared_ptr<void> data = intra_process_sub->take_data();

      • This statement obtains a topic message from the subscription used for intra-process communication.
      • intra_process_sub->take_data() does not return a boolean value indicating whether a message was received successfully, so it is necessary to check this by calling is_ready() beforehand
    • intra_process_sub->execute(data);
      • A callback function corresponding to the received message is called within execute()
      • The callback function is executed by the thread that calls execute() without a context switch
    "},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/supp-wait_set/","title":"[supplement] Use rclcpp::WaitSet","text":""},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/supp-wait_set/#supplement-use-rclcppwaitset","title":"[supplement] Use rclcpp::WaitSet","text":""},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/supp-wait_set/#what-is-rclcppwaitset","title":"What is rclcpp::WaitSet","text":"

    As explained in call take() method of Subscription object, the take() method is irreversible. Once the take() method is executed, the state of the subscription object changes, and because there is no undo operation for the take() method, the subscription object cannot be restored to its previous state. You can use rclcpp::WaitSet before calling take() to check for the arrival of an incoming message in the subscription queue. The following sample code shows how wait_set_.wait() tells you that a message has already been received and can be obtained by take().

          auto wait_result = wait_set_.wait(std::chrono::milliseconds(0));\nif (wait_result.kind() == rclcpp::WaitResultKind::Ready &&\nwait_result.get_wait_set().get_rcl_wait_set().subscriptions[0]) {\nsub_->take(msg, msg_info);\nRCLCPP_INFO(this->get_logger(), \"Catch message\");\nRCLCPP_INFO(this->get_logger(), \"I heard: [%s]\", msg.data.c_str());\n

    A single rclcpp::WaitSet object is able to observe multiple subscription objects. If there are multiple subscriptions for different topics, you can check the arrival of incoming messages per subscription. Algorithms used in the field of autonomous robots often require multiple incoming messages, such as sensor data or actuation states. By using rclcpp::WaitSet for multiple subscriptions, such algorithms can check whether or not the required messages have arrived without taking any messages.

          auto wait_result = wait_set_.wait(std::chrono::milliseconds(0));\nbool received_all_messages = false;\nif (wait_result.kind() == rclcpp::WaitResultKind::Ready) {\nfor (auto wait_set_subs : wait_result.get_wait_set().get_rcl_wait_set().subscriptions) {\nif (!wait_set_subs) {\nRCLCPP_INFO_THROTTLE(get_logger(), clock, 5000, \"Waiting for data...\");\nreturn {};\n}\n}\nreceived_all_messages = true;\n}\n

    As the code above shows, unless rclcpp::WaitSet is used, it is impossible to verify the arrival of all needed messages without changing the state of the subscription objects.

    "},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/supp-wait_set/#coding-manner","title":"Coding manner","text":"

    This section explains how to code using rclcpp::WaitSet with a sample code below.

    • ros2_subscription_examples/waitset_examples/src/talker_triple.cpp at main · takam5f2/ros2_subscription_examples
      • It periodically publishes /chatter every second, /slower_chatter every two seconds, and /slowest_chatter every three seconds.
    • ros2_subscription_examples/waitset_examples/src/timer_listener_triple_async.cpp at main · takam5f2/ros2_subscription_examples
      • It queries the WaitSet once per second and, if a message is available, obtains it with take()
      • It has three subscriptions, for /chatter, /slower_chatter, and /slowest_chatter

    The following three steps are required to use rclcpp::WaitSet.

    "},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/supp-wait_set/#1-declare-and-initialize-waitset","title":"1. Declare and initialize WaitSet","text":"

    You must first instantiate an rclcpp::WaitSet based object. Below is a snippet from ros2_subscription_examples/waitset_examples/src/timer_listener_triple_async.cpp at main · takam5f2/ros2_subscription_examples.

    rclcpp::WaitSet wait_set_;\n
    Note

    There are several classes similar to rclcpp::WaitSet. The rclcpp::WaitSet object can be configured at runtime, but it is not thread-safe, as explained in the API specification of rclcpp::WaitSet. Thread-safe alternatives to rclcpp::WaitSet are provided by the rclcpp package, as listed below.

    • Typedef rclcpp::ThreadSafeWaitSet
      • Subscriptions, timers, etc. can be registered to ThreadSafeWaitSet only in a thread-safe state
      • Sample code is here: examples/rclcpp/wait_set/src/thread_safe_wait_set.cpp at rolling · ros2/examples
    • Typedef rclcpp::StaticWaitSet
      • Subscriptions, timers, etc. can be registered to rclcpp::StaticWaitSet only at initialization
      • Sample code is here:
        • ros2_subscription_examples/waitset_examples/src/timer_listener_twin_static.cpp at main · takam5f2/ros2_subscription_examples
        • examples/rclcpp/wait_set/src/static_wait_set.cpp at rolling · ros2/examples
    "},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/supp-wait_set/#2-register-trigger-subscription-timer-and-so-on-to-waitset","title":"2. Register trigger (Subscription, Timer, and so on) to WaitSet","text":"

    You need to register a trigger to the rclcpp::WaitSet based object. The following is a snippet from ros2_subscription_examples/waitset_examples/src/timer_listener_triple_async.cpp at main · takam5f2/ros2_subscription_examples.

        subscriptions_array_[0] = create_subscription<std_msgs::msg::String>(\"chatter\", qos, not_executed_callback, subscription_options);\nsubscriptions_array_[1] = create_subscription<std_msgs::msg::String>(\"slower_chatter\", qos, not_executed_callback, subscription_options);\nsubscriptions_array_[2] = create_subscription<std_msgs::msg::String>(\"slowest_chatter\", qos, not_executed_callback, subscription_options);\n\n// Add subscription to waitset\nfor (auto & subscription : subscriptions_array_) {\nwait_set_.add_subscription(subscription);\n}\n

    In the code above, the add_subscription() method registers the created subscriptions with the wait_set_ object. An rclcpp::WaitSet based object basically handles objects each of which has a corresponding callback function. Not only Subscription based objects, but also Timer, Service or Action based objects can be observed by an rclcpp::WaitSet based object. A single rclcpp::WaitSet object accepts a mixture of different types of objects. Sample code for registering timer triggers can be found here.

    wait_set_.add_timer(much_slower_timer_);\n

    A trigger can be registered at declaration and initialization as described in wait_set_topics_and_timer.cpp from the examples.

    "},{"location":"contributing/coding-guidelines/ros-nodes/topic-message-handling/supp-wait_set/#3-verify-waitset-result","title":"3. Verify WaitSet result","text":"

    The data structure of the result returned from the rclcpp::WaitSet is nested. You can examine the WaitSet result with the following two steps:

    1. Verify if any trigger has been invoked
    2. Verify if a specified trigger has been triggered

    For step 1, here is a sample code taken from ros2_subscription_examples/waitset_examples/src/timer_listener_triple_async.cpp at main · takam5f2/ros2_subscription_examples.

          auto wait_result = wait_set_.wait(std::chrono::milliseconds(0));\nif (wait_result.kind() == rclcpp::WaitResultKind::Ready) {\nRCLCPP_INFO(this->get_logger(), \"wait_set tells that some subscription is ready\");\n} else {\nRCLCPP_INFO(this->get_logger(), \"wait_set tells that any subscription is not ready and return\");\nreturn;\n}\n

    In the code above, auto wait_result = wait_set_.wait(std::chrono::milliseconds(0)) tests whether any trigger in wait_set_ has been invoked. The argument to the wait() method is the timeout duration. If it is greater than 0 milliseconds or seconds, this method will wait for a message to be received until the timeout expires. If wait_result.kind() == rclcpp::WaitResultKind::Ready is true, then some trigger has been invoked.
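    For example, the following minimal sketch (not taken from the sample repository) blocks for up to 100 milliseconds instead of returning immediately:

      // Block for up to 100 ms waiting for any registered trigger to become ready.\nconst auto wait_result = wait_set_.wait(std::chrono::milliseconds(100));\nif (wait_result.kind() == rclcpp::WaitResultKind::Timeout) {\n  RCLCPP_INFO(this->get_logger(), \"No trigger became ready within 100 ms\");\n}\n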

    For step 2, here is a sample code taken from ros2_subscription_examples/waitset_examples/src/timer_listener_triple_async.cpp at main · takam5f2/ros2_subscription_examples.

          for (size_t i = 0; i < subscriptions_num; i++) {\nif (wait_result.get_wait_set().get_rcl_wait_set().subscriptions[i]) {\nstd_msgs::msg::String msg;\nrclcpp::MessageInfo msg_info;\nif (subscriptions_array_[i]->take(msg, msg_info)) {\nRCLCPP_INFO(this->get_logger(), \"Catch message via subscription[%ld]\", i);\nRCLCPP_INFO(this->get_logger(), \"I heard: [%s]\", msg.data.c_str());\n

    In the code above, wait_result.get_wait_set().get_rcl_wait_set().subscriptions[i] indicates whether each individual trigger has been invoked or not. The result is stored in the subscriptions array. The order in the subscriptions array is the same as the order in which the triggers are registered.

    "},{"location":"contributing/discussion-guidelines/","title":"Discussion guidelines","text":""},{"location":"contributing/discussion-guidelines/#discussion-guidelines","title":"Discussion guidelines","text":"

    Warning

    Under Construction

    Refer to the following links for now:

    • https://docs.github.com/en/discussions/guides/best-practices-for-community-conversations-on-github
    • https://opensource.guide/how-to-contribute/#communicating-effectively
    "},{"location":"contributing/documentation-guidelines/","title":"Documentation guidelines","text":""},{"location":"contributing/documentation-guidelines/#documentation-guidelines","title":"Documentation guidelines","text":""},{"location":"contributing/documentation-guidelines/#workflow","title":"Workflow","text":"

    Contributions to Autoware's documentation are welcome, and the same principles described in the contribution guidelines should be followed. Small, limited changes can be made by forking this repository and submitting a pull request, but larger changes should be discussed with the community and Autoware maintainers via GitHub Discussion first.

    Examples of small changes include:

    • Fixing spelling or grammatical mistakes
    • Fixing broken links
    • Making an addition to an existing, well-defined page, such as the Troubleshooting guide.

    Examples of larger changes include:

    • Adding new pages with a large amount of detail, such as a tutorial
    • Re-organization of the existing documentation structure
    "},{"location":"contributing/documentation-guidelines/#style-guide","title":"Style guide","text":"

    You should refer to the Google developer documentation style guide as much as possible. Reading the Highlights page of that guide is recommended, but if you cannot, at least note the key points below.

    • Use standard American English spelling and punctuation.
    • Use sentence case for document titles and section headings.
    • Use descriptive link text.
    • Write short sentences that are easy to understand and translate.
    "},{"location":"contributing/documentation-guidelines/#tips","title":"Tips","text":""},{"location":"contributing/documentation-guidelines/#how-to-preview-your-modification","title":"How to preview your modification","text":"

    There are two ways to preview your modification on a documentation website.

    "},{"location":"contributing/documentation-guidelines/#1-using-github-actions-workflow","title":"1. Using GitHub Actions workflow","text":"

    Follow the steps below.

    1. Create a pull request to the repository.
    2. Add the deploy-docs label from the sidebar (See below figure).
    3. Wait for a couple of minutes, and the github-actions bot will post the preview URL for the pull request.

    "},{"location":"contributing/documentation-guidelines/#2-running-an-mkdocs-server-in-your-local-environment","title":"2. Running an MkDocs server in your local environment","text":"

    Instead of creating a PR, you can use the mkdocs command to build Autoware's documentation websites on your local computer. Assuming that you are using Ubuntu OS, run the following to install the required libraries.

    python3 -m pip install -U $(curl -fsSL https://raw.githubusercontent.com/autowarefoundation/autoware-github-actions/main/deploy-docs/mkdocs-requirements.txt)\n

    Then, run mkdocs serve on your documentation directory.

    cd /PATH/TO/YOUR-autoware-documentation\nmkdocs serve\n

    It will launch the MkDocs server. Access http://127.0.0.1:8000/ to see the preview of the website.

    "},{"location":"contributing/pull-request-guidelines/","title":"Pull request guidelines","text":""},{"location":"contributing/pull-request-guidelines/#pull-request-guidelines","title":"Pull request guidelines","text":""},{"location":"contributing/pull-request-guidelines/#general-pull-request-workflow","title":"General pull request workflow","text":"

    Autoware uses the fork-and-pull model. For more details about the model, refer to GitHub Docs.

    The following is a general example of the pull request workflow based on the fork-and-pull model. Use this workflow as a reference when you contribute to Autoware.

    1. Create an issue.
      • Discuss the approaches to the issue with maintainers.
      • Confirm the support guidelines before creating an issue.
      • Follow the discussion guidelines when you discuss with other contributors.
    2. Create a fork repository. (for the first time only)
    3. Write code in your fork repository according to the approach agreed upon in the issue.
      • Write the tests and documentation as appropriate.
      • Follow the coding guidelines when you write code.
      • Follow the testing guidelines when you write tests.
      • Follow the documentation guidelines when you write documentation.
      • Follow the commit guidelines when you commit your changes.
    4. Test the code.
      • It is recommended that you summarize the test results, because you will need to explain the test results in the later review process.
      • If you are not sure what tests should be done, discuss them with maintainers.
    5. Create a pull request.
      • Follow the pull request rules when you create a pull request.
    6. Wait for the pull request to be reviewed.
      • The reviewers will review your code following the review guidelines.
        • Not only the reviewers, but also the author is encouraged to understand the review guidelines.
      • If CI checks have failed, fix the errors.
      • Learn about code ownership from the code owners guidelines.
        • Check the code owners FAQ section if your pull request is not being reviewed.
    7. Address the review comments pointed out by the reviewers.
      • If you don't understand the meaning of a review comment, ask the reviewers until you understand it.
        • Fixing without understanding the reason is not recommended because the author should be responsible for the final content of their own pull request.
      • If you don't agree with a review comment, ask the reviewers for a rational reason.
        • The reviewers are obligated to make the author understand the meanings of each comment.
      • After you have addressed the review comments, re-request a review from the reviewers and go back to step 6.
        • Avoid using force push as much as possible so that reviewers only see the differences. More precisely, keep the commit history at least up to the point of review, because GitHub Web UI features such as suggested changes may require a rebase to pass the DCO CI.
      • If there are no more new review comments, the reviewers will approve the pull request and proceed to 8.
    8. Merge the pull request.
      • Anyone with write access can merge the pull request if there is no special request from maintainers.
        • The author is encouraged to merge the pull request to feel responsible for their own pull request.
        • If the author does not have write access, ask the reviewers or maintainers.
    "},{"location":"contributing/pull-request-guidelines/#pull-request-rules","title":"Pull request rules","text":""},{"location":"contributing/pull-request-guidelines/#use-an-appropriate-pull-request-template-required-non-automated","title":"Use an appropriate pull request template (required, non-automated)","text":""},{"location":"contributing/pull-request-guidelines/#rationale","title":"Rationale","text":"
    • The unified style of descriptions by templates can make reviews efficient.
    "},{"location":"contributing/pull-request-guidelines/#example","title":"Example","text":"

    There are two types of templates. Select one based on the following condition.

    1. Standard change:
      • Complexity:
        • New features or significant updates.
        • Requires deeper understanding of the codebase.
      • Impact:
        • Affects multiple parts of the system.
        • Basically includes minor features, bug fixes and performance improvement.
        • Needs testing before merging.
    2. Small change:
      • Complexity:
        • Documentation, simple refactoring, or style adjustments.
        • Easy to understand and review.
      • Impact:
        • Minimal effect on the system.
        • Quicker merge with less testing needed.
    "},{"location":"contributing/pull-request-guidelines/#steps-to-use-an-appropriate-pull-request-template","title":"Steps to use an appropriate pull request template","text":"
    1. Select the appropriate template, as shown in this video.
    2. Read the selected template carefully and fill the required content.
    3. Check the checkboxes during a review.
      • There are pre-review checklist and post-review checklist for the author.
    "},{"location":"contributing/pull-request-guidelines/#set-appropriate-reviewers-after-creating-a-pull-request-required-partially-automated","title":"Set appropriate reviewers after creating a pull request (required, partially automated)","text":""},{"location":"contributing/pull-request-guidelines/#rationale_1","title":"Rationale","text":"
    • Pull requests must be reviewed by appropriate reviewers to keep the quality of the codebase.
    "},{"location":"contributing/pull-request-guidelines/#example_1","title":"Example","text":"
    • For most ROS packages, reviewers will be automatically assigned based on the maintainer information in package.xml.
    • If no reviewer is assigned automatically, assign reviewers manually following the instructions in GitHub Docs.
      • You can find the reviewers by seeing the .github/CODEOWNERS file of the repository.
    • If you are not sure who the appropriate reviewers are, ask @autoware-maintainers.
    • If you have no rights to assign reviewers, mention reviewers instead.
    "},{"location":"contributing/pull-request-guidelines/#apply-conventional-commits-to-the-pull-request-title-required-automated","title":"Apply Conventional Commits to the pull request title (required, automated)","text":""},{"location":"contributing/pull-request-guidelines/#rationale_2","title":"Rationale","text":"
    • Conventional Commits can generate categorized changelogs, for example using git-cliff.
    "},{"location":"contributing/pull-request-guidelines/#example_2","title":"Example","text":"
    feat(trajectory_follower): add an awesome feature\n

    Note

    You have to start the description part (here add an awesome feature) with a lowercase letter.

    If your change breaks some interfaces, use the ! (breaking changes) mark as follows:

    feat(trajectory_follower)!: remove package\nfeat(trajectory_follower)!: change parameter names\nfeat(planning)!: change topic names\nfeat(autoware_utils)!: change function names\n

    For the repositories that contain code (most repositories), use the definition of conventional-commit-types for the type.

    For documentation repositories such as autoware-documentation, use the following definition:

    • feat
      • Add new pages.
      • Add contents to the existing pages.
    • fix
      • Fix the contents in the existing pages.
    • refactor
      • Move contents to different pages.
    • docs
      • Update documentation for the documentation repository itself.
    • build
      • Update the settings of the documentation site builder.
    • ! (breaking changes)
      • Remove pages.
      • Change the URL of pages.

    perf and test are generally unused. Other types have the same meaning as in the code repositories.

    "},{"location":"contributing/pull-request-guidelines/#add-the-related-component-names-to-the-scope-of-conventional-commits-advisory-non-automated","title":"Add the related component names to the scope of Conventional Commits (advisory, non-automated)","text":""},{"location":"contributing/pull-request-guidelines/#rationale_3","title":"Rationale","text":"
    • It helps contributors find pull requests that are relevant to them.
    • It makes the changelog clearer.
    "},{"location":"contributing/pull-request-guidelines/#example_3","title":"Example","text":"

    For ROS packages, adding the package name or component name is good.

    feat(trajectory_follower): add an awesome feature\nrefactor(planning, control): use common utils\n
    "},{"location":"contributing/pull-request-guidelines/#keep-a-pull-request-small-advisory-non-automated","title":"Keep a pull request small (advisory, non-automated)","text":""},{"location":"contributing/pull-request-guidelines/#rationale_4","title":"Rationale","text":"
    • Small pull requests are easy to understand for reviewers.
    • Small pull requests are easy to revert for maintainers.
    "},{"location":"contributing/pull-request-guidelines/#exception","title":"Exception","text":"

    It is acceptable if it is agreed with maintainers that there is no other way but to submit a big pull request.

    "},{"location":"contributing/pull-request-guidelines/#example_4","title":"Example","text":"
    • Avoid developing two features in one pull request.
    • Avoid mixing different types (feat, fix, refactor, etc.) of changes in the same commit.
    "},{"location":"contributing/pull-request-guidelines/#remind-reviewers-if-there-is-no-response-for-more-than-a-week-advisory-non-automated","title":"Remind reviewers if there is no response for more than a week (advisory, non-automated)","text":""},{"location":"contributing/pull-request-guidelines/#rationale_5","title":"Rationale","text":"
    • It is the author's responsibility to care about their own pull request until it is merged.
    "},{"location":"contributing/pull-request-guidelines/#example_5","title":"Example","text":"
    @{some-of-developers} Would it be possible for you to review this PR?\n@autoware-maintainers friendly ping.\n
    "},{"location":"contributing/pull-request-guidelines/ai-pr-review/","title":"AI PR Review","text":""},{"location":"contributing/pull-request-guidelines/ai-pr-review/#ai-pr-review","title":"AI PR Review","text":"

    We have Codium-ai/pr-agent enabled for the Autoware Universe repository.

    "},{"location":"contributing/pull-request-guidelines/ai-pr-review/#the-workflow","title":"The workflow","text":"

    Workflow: pr-agent.yaml

    "},{"location":"contributing/pull-request-guidelines/ai-pr-review/#additional-links-for-the-workflow-maintainers","title":"Additional links for the workflow maintainers","text":"
    • Available models list
    "},{"location":"contributing/pull-request-guidelines/ai-pr-review/#how-to-use","title":"How to use","text":"

    When you create the PR, or at any point within the PR, add the label tag:pr-agent.

    Wait until both PR-Agent jobs are completed successfully:

    • prevent-no-label-execution-pr-agent / prevent-no-label-execution
    • Run pr agent on every pull request, respond to user comments

    Warning

    If you add multiple labels at the same time, prevent-no-label-execution can get confused.

    For example, first add tag:pr-agent, wait until it is ready, then add tag:run-build-and-test-differential if you need it.

    Then you can pick one of the following commands:

    /review: Request a review of your Pull Request.\n/describe: Update the PR description based on the contents of the PR.\n/improve: Suggest code improvements.\n/ask Could you propose a better name for this parameter?: Ask a question about the PR or anything really.\n/update_changelog: Update the changelog based on the PR's contents.\n/add_docs: Generate docstring for new components introduced in the PR.\n/help: Get a list of all available PR-Agent tools and their descriptions.\n
    • Here is the official documentation.
    • Usage Guide

    To use it, drop a comment post within your PR like this.

    Within a minute, you should see a 👀 reaction under your comment post.

    Then the bot will drop a response with reviews, description or an answer.

    Info

    Please drop a single PR-Agent related comment at a time.

    "},{"location":"contributing/pull-request-guidelines/ci-checks/","title":"CI checks","text":""},{"location":"contributing/pull-request-guidelines/ci-checks/#ci-checks","title":"CI checks","text":"

    Autoware has several checks for a pull request. The results are shown at the bottom of the pull request page as below.

    If the ❌ mark is shown, click the Details button and investigate the failure reason.

    If the Required mark is shown, you cannot merge the pull request unless you resolve the error. Otherwise, the check is optional, but the error should preferably still be fixed.

    The following sections explain the common CI checks in Autoware. Note that some repositories may have different settings.

    "},{"location":"contributing/pull-request-guidelines/ci-checks/#dco","title":"DCO","text":"

    The Developer Certificate of Origin (DCO) is a lightweight way for contributors to certify that they wrote or otherwise have the right to submit the code they are contributing to the project.

    This workflow checks whether the pull request fulfills DCO. You need to confirm the required items and commit with git commit -s.

    For more information, refer to the GitHub App page.

    "},{"location":"contributing/pull-request-guidelines/ci-checks/#semantic-pull-request","title":"semantic-pull-request","text":"

    This workflow checks whether the pull request follows Conventional Commits.

    For the detailed rules, see the pull request rules.

    "},{"location":"contributing/pull-request-guidelines/ci-checks/#pre-commit","title":"pre-commit","text":"

    pre-commit is a tool to run formatters or linters when you commit.

    This workflow checks whether the pull request has no error with pre-commit.

    If the pre-commit.ci - pr workflow is enabled in the repository, pre-commit.ci will automatically fix as many errors as possible. If some errors remain, fix them manually.

    You can run pre-commit in your local environment by the following command:

    pre-commit run -a\n

    Or you can install pre-commit to the repository and automatically run it before committing:

    pre-commit install\n

    Since it is difficult to detect errors with no false positives, some jobs are split into another config file and marked as optional. To check them, use the --config option:

    pre-commit run -a --config .pre-commit-config-optional.yaml\n
    "},{"location":"contributing/pull-request-guidelines/ci-checks/#spell-check-differential","title":"spell-check-differential","text":"

    This workflow detects spelling mistakes using CSpell with our dictionary file. Since it is difficult to detect errors with no false positives, it is an optional workflow, but it is preferable to remove as many spelling mistakes as possible.

    You have the following options if you need to use a word that is not registered in the dictionary.

    • If the word is only used in a few files, you can use inline document settings \"cspell:ignore\" to suppress the check.
• If the word is widely used in the repository, you can create a local cspell json and pass it to the spell-check action (see the sketch after this list).
    • If the word is common and may be used in many repositories, you can submit pull requests to tier4/autoware-spell-check-dict or tier4/cspell-dicts to update the dictionary.
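A minimal sketch of such a local dictionary file, assuming it is named .cspell.json and that the listed words are just placeholders, could look like this:

{\n  \"words\": [\"lanelet\", \"pointcloud\", \"rosbag\"]\n}\n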
    "},{"location":"contributing/pull-request-guidelines/ci-checks/#build-and-test-differential","title":"build-and-test-differential","text":"

This workflow checks colcon build and colcon test for the pull request. To make the CI faster, it doesn't check all packages, but only the modified packages and their dependencies.
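You can roughly reproduce this check locally by building and testing only the package you modified; a sketch, assuming my_modified_pkg is the package you touched:

colcon build --packages-up-to my_modified_pkg\ncolcon test --packages-select my_modified_pkg\ncolcon test-result --all --verbose\n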

    "},{"location":"contributing/pull-request-guidelines/ci-checks/#build-and-test-differential-self-hosted","title":"build-and-test-differential-self-hosted","text":"

    This workflow is the ARM64 version of build-and-test-differential. You need to add the ARM64 label to run this workflow.

For reference, since ARM machines are not supported by GitHub-hosted runners, we use self-hosted runners prepared by the AWF. For details about self-hosted runners, refer to GitHub Docs.

    "},{"location":"contributing/pull-request-guidelines/ci-checks/#deploy-docs","title":"deploy-docs","text":"

    This workflow deploys the preview documentation site for the pull request. You need to add the deploy-docs label to run this workflow.

    "},{"location":"contributing/pull-request-guidelines/code-owners/","title":"Code owners","text":""},{"location":"contributing/pull-request-guidelines/code-owners/#code-owners","title":"Code owners","text":"

    The Autoware project uses multiple CODEOWNERS files to specify owners throughout the repository. For a detailed understanding of code owners, visit the GitHub documentation on code owners.

    "},{"location":"contributing/pull-request-guidelines/code-owners/#purpose-and-function-of-the-codeowners-file","title":"Purpose and function of the CODEOWNERS file","text":"

    The CODEOWNERS file plays a vital role in managing pull requests (PRs) by:

    • Automating Review Requests: Automatically assigns PRs to responsible individuals or teams.
    • Enforcing Merge Approval: Prevents PR merging without approvals from designated code owners or repository maintainers, ensuring thorough review.
    • Maintaining Quality Control: Helps sustain code quality and consistency by requiring review from knowledgeable individuals or teams.
    "},{"location":"contributing/pull-request-guidelines/code-owners/#locating-codeowners-files","title":"Locating CODEOWNERS files","text":"

    CODEOWNERS files are found in the .github directory across multiple repositories of the Autoware project. The autoware.repos file lists these repositories and their directories.
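Entries follow GitHub's standard CODEOWNERS syntax, mapping a path pattern to one or more owners. A purely hypothetical sketch (the paths and team names are illustrative, not the actual Autoware assignments):

# Each line maps a path pattern to its owners\nplanning/ @autowarefoundation/example-planning-team\ncontrol/ @autowarefoundation/example-control-team\n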

    "},{"location":"contributing/pull-request-guidelines/code-owners/#maintenance-of-codeowners","title":"Maintenance of CODEOWNERS","text":"

    Generally, repository maintainers handle the updates to CODEOWNERS files. To propose changes, submit a PR to modify the file.

    "},{"location":"contributing/pull-request-guidelines/code-owners/#special-case-for-the-autoware-universe-repository","title":"Special case for the Autoware Universe repository","text":"

    In the autoware.universe repository, maintenance of the CODEOWNERS file is automated by the CI.

    This workflow updates the CODEOWNERS file based on the maintainer information in the package.xml files of the packages in the repository.

    In order to change the code owners for a package in the autoware.universe repository:

1. Modify the maintainer information in the package.xml file via a PR (see the sketch after this list).
    2. Once merged, the CI workflow runs at midnight UTC (or can be triggered manually by a maintainer) to update the CODEOWNERS file and create a PR.
    3. A maintainer then needs to merge the CI-generated PR to finalize the update.
      • Example Automated PR: chore: update CODEOWNERS #6866
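For reference, the maintainer information uses the standard ROS 2 package manifest format; a hypothetical entry (the name and e-mail are placeholders) looks like this:

<maintainer email=\"new.maintainer@example.com\">New Maintainer</maintainer>\n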
    "},{"location":"contributing/pull-request-guidelines/code-owners/#responsibilities-of-code-owners","title":"Responsibilities of code owners","text":"

    Code owners should review assigned PRs promptly. If a PR remains unreviewed for over a week, maintainers may intervene to review and possibly merge it.

    "},{"location":"contributing/pull-request-guidelines/code-owners/#faq","title":"FAQ","text":""},{"location":"contributing/pull-request-guidelines/code-owners/#unreviewed-pull-requests","title":"Unreviewed pull requests","text":"

    If your PR hasn't been reviewed:

• 🏹 Directly Address Code Owners: Comment on the PR to alert the owners.
• ⏳ Follow Up After a Week: If unreviewed after a week, add a comment under the PR and tag the @autoware-maintainers.
• 📢 Escalate if Necessary: If your requests continue to go unanswered, you may escalate the issue by posting a message in the Autoware Discord channel 🚨. Remember, maintainers often juggle numerous responsibilities, so patience is appreciated 🙇.
    "},{"location":"contributing/pull-request-guidelines/code-owners/#pr-author-is-the-only-code-owner","title":"PR author is the only code owner","text":"

    If you, as the only code owner, authored a PR:

    • Request a review by tagging @autoware-maintainers.
    • The maintainers will consider appointing additional maintainers to avoid such conflicts.
    "},{"location":"contributing/pull-request-guidelines/code-owners/#non-code-owners-reviewing-prs","title":"Non-code owners reviewing PRs","text":"

    Anyone can review a PR:

    • You can review any pull request and provide your feedback.
    • Your review might not be enough to merge the pull request, but it will help the code owners and maintainers to make a decision.
    • If you think the pull request is ready to merge, you can mention the code owners and maintainers in a comment on the pull request.
    "},{"location":"contributing/pull-request-guidelines/commit-guidelines/","title":"Commit guidelines","text":""},{"location":"contributing/pull-request-guidelines/commit-guidelines/#commit-guidelines","title":"Commit guidelines","text":""},{"location":"contributing/pull-request-guidelines/commit-guidelines/#branch-rules","title":"Branch rules","text":""},{"location":"contributing/pull-request-guidelines/commit-guidelines/#start-branch-names-with-the-corresponding-issue-numbers-advisory-non-automated","title":"Start branch names with the corresponding issue numbers (advisory, non-automated)","text":""},{"location":"contributing/pull-request-guidelines/commit-guidelines/#rationale","title":"Rationale","text":"
    • Developers can quickly find the corresponding issues.
    • It is helpful for tools.
    • It is consistent with GitHub's default behavior.
    "},{"location":"contributing/pull-request-guidelines/commit-guidelines/#exception","title":"Exception","text":"

    If there are no corresponding issues, you can ignore this rule.

    "},{"location":"contributing/pull-request-guidelines/commit-guidelines/#example","title":"Example","text":"
    123-add-feature\n
    "},{"location":"contributing/pull-request-guidelines/commit-guidelines/#reference","title":"Reference","text":"
    • GitHub Docs
    "},{"location":"contributing/pull-request-guidelines/commit-guidelines/#use-dash-case-for-the-separator-of-branch-names-advisory-non-automated","title":"Use dash-case for the separator of branch names (advisory, non-automated)","text":""},{"location":"contributing/pull-request-guidelines/commit-guidelines/#rationale_1","title":"Rationale","text":"
    • It is consistent with GitHub's default behavior.
    "},{"location":"contributing/pull-request-guidelines/commit-guidelines/#example_1","title":"Example","text":"
    123-add-feature\n
    "},{"location":"contributing/pull-request-guidelines/commit-guidelines/#reference_1","title":"Reference","text":"
    • GitHub Docs
    "},{"location":"contributing/pull-request-guidelines/commit-guidelines/#make-branch-names-descriptive-advisory-non-automated","title":"Make branch names descriptive (advisory, non-automated)","text":""},{"location":"contributing/pull-request-guidelines/commit-guidelines/#rationale_2","title":"Rationale","text":"
    • It can avoid conflicts of names.
    • Developers can understand the purpose of the branch.
    "},{"location":"contributing/pull-request-guidelines/commit-guidelines/#exception_1","title":"Exception","text":"

If you have already submitted a pull request, you do not have to change the branch name, because doing so would require re-creating the pull request, which is noisy and a waste of time. Be careful next time.

    "},{"location":"contributing/pull-request-guidelines/commit-guidelines/#example_2","title":"Example","text":"

    Usually it is good to start with a verb.

    123-fix-memory-leak-of-trajectory-follower\n
    "},{"location":"contributing/pull-request-guidelines/commit-guidelines/#commit-rules","title":"Commit rules","text":""},{"location":"contributing/pull-request-guidelines/commit-guidelines/#sign-off-your-commits-required-automated","title":"Sign-off your commits (required, automated)","text":"

    Developers must certify that they wrote or otherwise have the right to submit the code they are contributing to the project.

    "},{"location":"contributing/pull-request-guidelines/commit-guidelines/#rationale_3","title":"Rationale","text":"

    If not, it will lead to complex license problems.

    "},{"location":"contributing/pull-request-guidelines/commit-guidelines/#example_3","title":"Example","text":"
    git commit -s\n
    feat: add a feature\n\nSigned-off-by: Autoware <autoware@example.com>\n
    "},{"location":"contributing/pull-request-guidelines/commit-guidelines/#reference_2","title":"Reference","text":"
    • GitHub Apps - DCO
    "},{"location":"contributing/pull-request-guidelines/review-guidelines/","title":"Review guidelines","text":""},{"location":"contributing/pull-request-guidelines/review-guidelines/#review-guidelines","title":"Review guidelines","text":"

    Warning

    Under Construction

    Refer to the following links for now:

    • https://google.github.io/eng-practices/review/
    • https://docs.gitlab.com/ee/development/code_review.html
    • https://www.swarmia.com/blog/a-complete-guide-to-code-reviews/
    • https://rewind.com/blog/best-practices-for-reviewing-pull-requests-in-github/
    "},{"location":"contributing/pull-request-guidelines/review-tips/","title":"Review tips","text":""},{"location":"contributing/pull-request-guidelines/review-tips/#review-tips","title":"Review tips","text":""},{"location":"contributing/pull-request-guidelines/review-tips/#toggle-annotations-or-review-comments-in-the-diff-view","title":"Toggle annotations or review comments in the diff view","text":"

    There might be some annotations or review comments in the diff view during your review.

    To toggle annotations, press the A key.

    Before:

    After:

    To toggle review comments, press the I key.

    For other keyboard shortcuts, refer to GitHub Docs.

    "},{"location":"contributing/pull-request-guidelines/review-tips/#view-code-in-the-web-based-visual-studio-code","title":"View code in the web-based Visual Studio Code","text":"

    You can open Visual Studio Code from your browser to view code in a rich UI. To use it, press the . key on any repository or pull request.

    For more detailed usage, refer to github/dev.

    "},{"location":"contributing/pull-request-guidelines/review-tips/#check-out-the-branch-of-a-pull-request-quickly","title":"Check out the branch of a pull request quickly","text":"

Checking out the branch of a pull request is generally cumbersome with the fork-and-pull model:

    # Copy the user name and the fork URL.\ngit remote add {user-name} {fork-url}\ngit checkout {user-name}/{branch-name}\ngit remote rm {user-name} # To clean up\n

Instead, you can use the GitHub CLI to simplify the steps: just run gh pr checkout {pr-number}.
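For example, assuming the pull request number is 1234 (an arbitrary placeholder):

gh pr checkout 1234\n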

    You can copy the command from the top right of the pull request page.

    "},{"location":"contributing/testing-guidelines/","title":"Testing guidelines","text":""},{"location":"contributing/testing-guidelines/#testing-guidelines","title":"Testing guidelines","text":""},{"location":"contributing/testing-guidelines/#unit-testing","title":"Unit testing","text":"

    Unit testing is a software testing method that tests individual units of source code to determine whether they satisfy the specification.

    For details, see the Unit testing guidelines.

    "},{"location":"contributing/testing-guidelines/#integration-testing","title":"Integration testing","text":"

    Integration testing combines and tests the individual software modules as a group, and is done after unit testing.

    While performing integration testing, the following subtypes of tests are written:

    1. Fault injection testing
    2. Back-to-back comparison between a model and code
    3. Requirements-based testing
    4. Anomaly detection during integration testing
    5. Random input testing

    For details, see the Integration testing guidelines.

    "},{"location":"contributing/testing-guidelines/integration-testing/","title":"Integration testing","text":""},{"location":"contributing/testing-guidelines/integration-testing/#integration-testing","title":"Integration testing","text":"

    An integration test is defined as the phase in software testing where individual software modules are combined and tested as a group. Integration tests occur after unit tests, and before validation tests.

    The input to an integration test is a set of independent modules that have been unit tested. The set of modules is tested against the defined integration test plan, and the output is a set of properly integrated software modules that is ready for system testing.

    "},{"location":"contributing/testing-guidelines/integration-testing/#value-of-integration-testing","title":"Value of integration testing","text":"

    Integration tests determine if independently developed software modules work correctly when the modules are connected to each other. In ROS 2, the software modules are called nodes. Testing a single node is a special type of integration test that is commonly referred to as component testing.

    Integration tests help to find the following types of errors:

    • Incompatible interactions between nodes, such as non-matching topics, different message types, or incompatible QoS settings.
    • Edge cases that were not touched by unit testing, such as a critical timing issue, network communication delays, disk I/O failures, and other such problems that can occur in production environments.
    • Issues that can occur while the system is under high CPU/memory load, such as malloc failures. This can be tested using tools like stress and udpreplay to test the performance of nodes with real data.

    With ROS 2, it is possible to program complex autonomous-driving applications with a large number of nodes. Therefore, a lot of effort has been made to provide an integration-test framework that helps developers test the interaction of ROS 2 nodes.

    "},{"location":"contributing/testing-guidelines/integration-testing/#integration-test-framework","title":"Integration-test framework","text":"

    A typical integration-test framework has three parts:

    1. A series of executables with arguments that work together and generate outputs.
    2. A series of expected outputs that should match the output of the executables.
    3. A launcher that starts the tests, compares the outputs to the expected outputs, and determines if the test passes.

    In Autoware, we use the launch_testing framework.

    "},{"location":"contributing/testing-guidelines/integration-testing/#smoke-tests","title":"Smoke tests","text":"

    Autoware has a dedicated API for smoke testing. To use this framework, in package.xml add:

    <test_depend>autoware_testing</test_depend>\n

    And in CMakeLists.txt add:

    if(BUILD_TESTING)\nfind_package(autoware_testing REQUIRED)\nadd_smoke_test(${PROJECT_NAME} ${NODE_NAME})\nendif()\n

    Doing so adds smoke tests that ensure that a node can be:

    1. Launched with a default parameter file.
    2. Terminated with a standard SIGTERM signal.

    For the full API documentation, refer to the package design page.

    Note

This API is not suitable for all smoke test cases. It cannot be used when a specific file location (e.g. for a map) needs to be passed to the node, or if some preparation needs to be done before the node is launched. In such cases, use the manual solution from the component test section below.

    "},{"location":"contributing/testing-guidelines/integration-testing/#integration-test-with-a-single-node-component-test","title":"Integration test with a single node: component test","text":"

    The simplest scenario is a single node. In this case, the integration test is commonly referred to as a component test.

    To add a component test to an existing node, you can follow the example of the lanelet2_map_loader in the autoware_map_loader package (added in this PR).

    In package.xml, add:

    <test_depend>ros_testing</test_depend>\n

    In CMakeLists.txt, add or modify the BUILD_TESTING section:

    if(BUILD_TESTING)\nadd_ros_test(\ntest/lanelet2_map_loader_launch.test.py\nTIMEOUT \"30\"\n)\ninstall(DIRECTORY\ntest/data/\nDESTINATION share/${PROJECT_NAME}/test/data/\n)\nendif()\n

    In addition to the command add_ros_test, we also install any data that is required by the test using the install command.

    Note

    • The TIMEOUT argument is given in seconds; see the add_ros_test.cmake file for details.
    • The add_ros_test command will run the test in a unique ROS_DOMAIN_ID which avoids interference between tests running in parallel.

    To create a test, either read the launch_testing quick-start example, or follow the steps below.

    Taking test/lanelet2_map_loader_launch.test.py as an example, first dependencies are imported:

    import os\nimport unittest\n\nfrom ament_index_python import get_package_share_directory\nimport launch\nfrom launch import LaunchDescription\nfrom launch_ros.actions import Node\nimport launch_testing\nimport pytest\n

    Then a launch description is created to launch the node under test. Note that the test_map.osm file path is found and passed to the node, something that cannot be done with the smoke testing API:

    @pytest.mark.launch_test\ndef generate_test_description():\n\n    lanelet2_map_path = os.path.join(\n        get_package_share_directory(\"autoware_map_loader\"), \"test/data/test_map.osm\"\n    )\n\n    lanelet2_map_loader = Node(\n        package=\"autoware_map_loader\",\n        executable=\"autoware_lanelet2_map_loader\",\n        parameters=[{\"lanelet2_map_path\": lanelet2_map_path}],\n    )\n\n    context = {}\n\n    return (\n        LaunchDescription(\n            [\n                lanelet2_map_loader,\n                # Start test after 1s - gives time for the map_loader to finish initialization\n                launch.actions.TimerAction(\n                    period=1.0, actions=[launch_testing.actions.ReadyToTest()]\n                ),\n            ]\n        ),\n        context,\n    )\n

    Note

• Since the node needs time to process the input lanelet2 map, we use a TimerAction to delay the start of the test by 1s.
    • In the example above, the context is empty but it can be used to pass objects to the test cases.
    • You can find an example of using the context in the ROS 2 context_launch_test.py test example.

    Finally, a test is executed after the node executable has been shut down (post_shutdown_test). Here we ensure that the node was launched without error and exited cleanly.

    @launch_testing.post_shutdown_test()\nclass TestProcessOutput(unittest.TestCase):\n    def test_exit_code(self, proc_info):\n        # Check that process exits with code 0: no error\n        launch_testing.asserts.assertExitCodes(proc_info)\n
    "},{"location":"contributing/testing-guidelines/integration-testing/#running-the-test","title":"Running the test","text":"

    Continuing the example from above, first build your package:

    colcon build --packages-up-to autoware_map_loader\nsource install/setup.bash\n

    Then either execute the component test manually:

    ros2 test src/universe/autoware.universe/map/autoware_map_loader/test/lanelet2_map_loader_launch.test.py\n

    Or as part of testing the entire package:

    colcon test --packages-select autoware_map_loader\n

    Verify that the test is executed; e.g.

    $ colcon test-result --all --verbose\n...\nbuild/autoware_map_loader/test_results/autoware_map_loader/test_lanelet2_map_loader_launch.test.py.xunit.xml: 1 test, 0 errors, 0 failures, 0 skipped\n
    "},{"location":"contributing/testing-guidelines/integration-testing/#next-steps","title":"Next steps","text":"

    The simple test described in Integration test with a single node: component test can be extended in numerous directions, such as testing a node's output.

    "},{"location":"contributing/testing-guidelines/integration-testing/#testing-the-output-of-a-node","title":"Testing the output of a node","text":"

    To test while the node is running, create an active test by adding a subclass of Python's unittest.TestCase to *launch.test.py. Some boilerplate code is required to access output by creating a node and a subscription to a particular topic, e.g.

import time\nimport unittest\n\nimport rclpy\nfrom rclpy.context import Context\n\n\nclass TestRunningDataPublisher(unittest.TestCase):\n\n    @classmethod\n    def setUpClass(cls):\n        cls.context = Context()\n        rclpy.init(context=cls.context)\n        cls.node = rclpy.create_node(\"test_node\", context=cls.context)\n\n    @classmethod\n    def tearDownClass(cls):\n        rclpy.shutdown(context=cls.context)\n\n    def setUp(self):\n        self.msgs = []\n        sub = self.node.create_subscription(\n            msg_type=my_msg_type,  # Placeholder: replace with the message type of the topic under test\n            topic=\"/info_test\",\n            callback=self._msg_received,\n            qos_profile=10,\n        )\n        self.addCleanup(self.node.destroy_subscription, sub)\n\n    def _msg_received(self, msg):\n        # Callback for ROS 2 subscriber used in the test\n        self.msgs.append(msg)\n\n    def get_message(self):\n        startlen = len(self.msgs)\n\n        executor = rclpy.executors.SingleThreadedExecutor(context=self.context)\n        executor.add_node(self.node)\n\n        try:\n            # Try up to 60 s to receive messages\n            end_time = time.time() + 60.0\n            while time.time() < end_time:\n                executor.spin_once(timeout_sec=0.1)\n                if startlen != len(self.msgs):\n                    break\n\n            self.assertNotEqual(startlen, len(self.msgs))\n            return self.msgs[-1]\n        finally:\n            executor.remove_node(self.node)\n\n    def test_message_content(self):\n        msg = self.get_message()\n        self.assertEqual(msg, \"Hello, world\")\n
    "},{"location":"contributing/testing-guidelines/integration-testing/#references","title":"References","text":"
    • colcon is used to build and run tests.
    • launch testing launches nodes and runs tests.
    • Testing guidelines describes the different types of tests performed in Autoware and links to the corresponding guidelines.
    "},{"location":"contributing/testing-guidelines/unit-testing/","title":"Unit testing","text":""},{"location":"contributing/testing-guidelines/unit-testing/#unit-testing","title":"Unit testing","text":"

    Unit testing is the first phase of testing and is used to validate units of source code such as classes and functions. Typically, a unit of code is tested by validating its output for various inputs. Unit testing helps ensure that the code behaves as intended and prevents accidental changes of behavior.

    Autoware uses the ament_cmake framework to build and run tests. The same framework is also used to analyze the test results.

    ament_cmake provides several convenience functions to make it easy to register tests in a CMake-based package and to ensure that JUnit-compatible result files are generated. It currently supports a few different testing frameworks like pytest, gtest, and gmock.

    In order to prevent tests running in parallel from interfering with each other when publishing and subscribing to ROS topics, it is recommended to use commands from ament_cmake_ros to run tests in isolation.

    See below for an example of using ament_add_ros_isolated_gtest with colcon test. All other tests follow a similar pattern.

    "},{"location":"contributing/testing-guidelines/unit-testing/#create-a-unit-test-with-gtest","title":"Create a unit test with gtest","text":"

    In my_cool_pkg/test, create the gtest code file test_my_cool_pkg.cpp:

    #include \"gtest/gtest.h\"\n#include \"my_cool_pkg/my_cool_pkg.hpp\"\nTEST(TestMyCoolPkg, TestHello) {\nEXPECT_EQ(my_cool_pkg::print_hello(), 0);\n}\n

    In package.xml, add the following line:

    <test_depend>ament_cmake_ros</test_depend>\n

    Next add an entry under BUILD_TESTING in the CMakeLists.txt to compile the test source files:

    if(BUILD_TESTING)\n\nament_add_ros_isolated_gtest(test_my_cool_pkg test/test_my_cool_pkg.cpp)\ntarget_link_libraries(test_my_cool_pkg ${PROJECT_NAME})\ntarget_include_directories(test_my_cool_pkg PRIVATE src)  # For private headers.\n...\nendif()\n

    This automatically links the test with the default main function provided by gtest. The code under test is usually in a different CMake target (${PROJECT_NAME} in the example) and its shared object for linking needs to be added. If the test source files include private headers from the src directory, the directory needs to be added to the include path using target_include_directories() function.

To register a new gtest item, wrap the test code with the macro TEST(). TEST() is a predefined macro that helps generate the final test code, and also registers a gtest item to be available for execution. The test case name should be in CamelCase, since gtest inserts an underscore between the fixture name and the test case name when creating the test executable.

    gtest/gtest.h also contains predefined macros of gtest like ASSERT_TRUE(condition), ASSERT_FALSE(condition), ASSERT_EQ(val1,val2), ASSERT_STREQ(str1,str2), EXPECT_EQ(), etc. ASSERT_* will abort the test if the condition is not satisfied, while EXPECT_* will mark the test as failed but continue on to the next test condition.
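A minimal sketch illustrating the difference (the test name and values are purely illustrative):

#include <vector>\n\n#include \"gtest/gtest.h\"\n\nTEST(TestMyCoolPkg, TestAssertVsExpect) {\n  std::vector<int> values{1, 2, 3};\n  ASSERT_FALSE(values.empty());  // Aborts this test immediately if the condition fails\n  EXPECT_EQ(values.size(), 3U);  // Marks the test as failed but keeps running\n  EXPECT_EQ(values.front(), 1);  // Still evaluated even if a previous EXPECT_* failed\n}\n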

    Info

    More information about gtest and its features can be found in the gtest repo.

    In the demo CMakeLists.txt, ament_add_ros_isolated_gtest is a predefined macro in ament_cmake_ros that helps simplify adding gtest code. Details can be viewed in ament_add_gtest.cmake.

    "},{"location":"contributing/testing-guidelines/unit-testing/#build-test","title":"Build test","text":"

    By default, all necessary test files (ELF, CTestTestfile.cmake, etc.) are compiled by colcon:

    cd ~/workspace/\ncolcon build --packages-select my_cool_pkg\n

    Test files are generated under ~/workspace/build/my_cool_pkg.

    "},{"location":"contributing/testing-guidelines/unit-testing/#run-test","title":"Run test","text":"

    To run all tests for a specific package, call:

    $ colcon test --packages-select my_cool_pkg\n\nStarting >>> my_cool_pkg\nFinished <<< my_cool_pkg [7.80s]\n\nSummary: 1 package finished [9.27s]\n

    The test command output contains a brief report of all the test results.

    To get job-wise information of all executed tests, call:

    $ colcon test-result --all\n\nbuild/my_cool_pkg/test_results/my_cool_pkg/copyright.xunit.xml: 8 tests, 0 errors, 0 failures, 0 skipped\nbuild/my_cool_pkg/test_results/my_cool_pkg/cppcheck.xunit.xml: 6 tests, 0 errors, 0 failures, 0 skipped\nbuild/my_cool_pkg/test_results/my_cool_pkg/lint_cmake.xunit.xml: 1 test, 0 errors, 0 failures, 0 skipped\nbuild/my_cool_pkg/test_results/my_cool_pkg/my_cool_pkg_exe_integration_test.xunit.xml: 1 test, 0 errors, 0 failures, 0 skipped\nbuild/my_cool_pkg/test_results/my_cool_pkg/test_my_cool_pkg.gtest.xml: 1 test, 0 errors, 0 failures, 0 skipped\nbuild/my_cool_pkg/test_results/my_cool_pkg/xmllint.xunit.xml: 1 test, 0 errors, 0 failures, 0 skipped\n\nSummary: 18 tests, 0 errors, 0 failures, 0 skipped\n

    Look in the ~/workspace/log/test_<date>/<package_name> directory for all the raw test commands, std_out, and std_err. There is also the ~/workspace/log/latest_*/ directory containing symbolic links to the most recent package-level build and test output.

    To print the tests' details while the tests are being run, use the --event-handlers console_cohesion+ option to print the details directly to the console:

    $ colcon test --event-handlers console_cohesion+ --packages-select my_cool_pkg\n\n...\ntest 1\n    Start 1: test_my_cool_pkg\n\n1: Test command: /usr/bin/python3 \"-u\" \"~/workspace/install/share/ament_cmake_test/cmake/run_test.py\" \"~/workspace/build/my_cool_pkg/test_results/my_cool_pkg/test_my_cool_pkg.gtest.xml\" \"--package-name\" \"my_cool_pkg\" \"--output-file\" \"~/workspace/build/my_cool_pkg/ament_cmake_gtest/test_my_cool_pkg.txt\" \"--command\" \"~/workspace/build/my_cool_pkg/test_my_cool_pkg\" \"--gtest_output=xml:~/workspace/build/my_cool_pkg/test_results/my_cool_pkg/test_my_cool_pkg.gtest.xml\"\n1: Test timeout computed to be: 60\n1: -- run_test.py: invoking following command in '~/workspace/src/my_cool_pkg':\n1:  - ~/workspace/build/my_cool_pkg/test_my_cool_pkg --gtest_output=xml:~/workspace/build/my_cool_pkg/test_results/my_cool_pkg/test_my_cool_pkg.gtest.xml\n1: [==========] Running 1 test from 1 test case.\n1: [----------] Global test environment set-up.\n1: [----------] 1 test from test_my_cool_pkg\n1: [ RUN      ] test_my_cool_pkg.test_hello\n1: Hello World\n1: [       OK ] test_my_cool_pkg.test_hello (0 ms)\n1: [----------] 1 test from test_my_cool_pkg (0 ms total)\n1:\n1: [----------] Global test environment tear-down\n1: [==========] 1 test from 1 test case ran. (0 ms total)\n1: [  PASSED  ] 1 test.\n1: -- run_test.py: return code 0\n1: -- run_test.py: inject classname prefix into gtest result file '~/workspace/build/my_cool_pkg/test_results/my_cool_pkg/test_my_cool_pkg.gtest.xml'\n1: -- run_test.py: verify result file '~/workspace/build/my_cool_pkg/test_results/my_cool_pkg/test_my_cool_pkg.gtest.xml'\n1/5 Test #1: test_my_cool_pkg ...................   Passed    0.09 sec\n\n...\n\n100% tests passed, 0 tests failed out of 5\n\nLabel Time Summary:\ncopyright     =   0.49 sec*proc (1 test)\ncppcheck      =   0.20 sec*proc (1 test)\ngtest         =   0.05 sec*proc (1 test)\nlint_cmake    =   0.18 sec*proc (1 test)\nlinter        =   1.34 sec*proc (4 tests)\nxmllint       =   0.47 sec*proc (1 test)\n\nTotal Test time (real) =   7.91 sec\n...\n
    "},{"location":"contributing/testing-guidelines/unit-testing/#code-coverage","title":"Code coverage","text":"

    Loosely described, a code coverage metric is a measure of how much of the program code has been exercised (covered) during testing.

    In the Autoware repositories, Codecov is used to automatically calculate coverage of any open pull request.

    More details about the code coverage metrics can be found in the Codecov documentation.

    "},{"location":"datasets/","title":"Datasets","text":""},{"location":"datasets/#datasets","title":"Datasets","text":"

    Autoware partners provide datasets for testing and development. These datasets are available for download here.

    "},{"location":"datasets/#istanbul-open-dataset","title":"Istanbul Open Dataset","text":"

The dataset was collected on the following route. Tunnels and bridges are annotated on the image. The specific areas included in the dataset are:

    • Galata Bridge (Small Bridge)
    • Eurasia Tunnel (Long Tunnel with High Elevation Changes)
    • 2nd Bosphorus Bridge (Long Bridge)
    • Kagithane-Bomonti Tunnel (Small Tunnel)
    • Viaducts, road junctions, highways, dense urban areas...

    "},{"location":"datasets/#leo-drive-mapping-kit-sensor-data","title":"Leo Drive - Mapping Kit Sensor Data","text":"

    This dataset contains data from the portable mapping kit used for general mapping purposes.

The dataset contains data from the following sensors:

    • 1 x Applanix POS LVX GNSS/INS System
    • 1 x Hesai Pandar XT32 LiDAR

    For sensor calibrations, /tf_static topic is added.

    "},{"location":"datasets/#data-links","title":"Data Links","text":"
    • You can find the produced full point cloud map, corner feature point cloud map and surface feature point cloud map here:

      • https://drive.google.com/drive/folders/1_jiQod4lO6-V2NDEr3d-M3XF_Nqmc0Xf?usp=drive_link
• Exported point clouds are downsampled with 0.2 m and 0.5 m voxel grids.
    • You can find the ROS 2 bag which is collected simultaneously with the mapping data:

      • https://drive.google.com/drive/folders/17zXiBeYlM90gQ5hV6EAWaoBTnNFoVPML?usp=drive_link
      • Due to the simultaneous data collection, we can assume that the point cloud maps and GNSS/INS data are the ground truth data for this rosbag.
    • Additionally, you can find the raw data used for mapping at the below link:
      • https://drive.google.com/drive/folders/1HmWYkxF5XvVCR27R8W7ZqO7An4HlJ6lD?usp=drive_link
• Point clouds are collected as PCAP files, and feature-matched GNSS/INS data is exported to a txt file.
    "},{"location":"datasets/#localization-performance-evaluation-with-autoware","title":"Localization Performance Evaluation with Autoware","text":"

    The report of the performance evaluation of the current Autoware with the collected data can be found in the link below.

The report was documented on 2024-08-28.

    • https://github.com/orgs/autowarefoundation/discussions/5135
    "},{"location":"datasets/#topic-list","title":"Topic list","text":"

    For collecting the GNSS/INS data, this repository is used.

For collecting the LiDAR data, the nebula repository is used.

The recorded topics and their message types are:

• /applanix/lvx_client/autoware_orientation: autoware_sensing_msgs/msg/GnssInsOrientationStamped
• /applanix/lvx_client/imu_raw: sensor_msgs/msg/Imu
• /localization/twist_estimator/twist_with_covariance: geometry_msgs/msg/TwistWithCovarianceStamped
• /applanix/lvx_client/odom: nav_msgs/msg/Odometry
• /applanix/lvx_client/gnss/fix: sensor_msgs/msg/NavSatFix
• /clock: rosgraph_msgs/msg/Clock
• /pandar_points: sensor_msgs/msg/PointCloud2
• /tf_static: tf2_msgs/msg/TFMessage
"},{"location":"datasets/#message-explanations","title":"Message Explanations","text":"

The sensor drivers output both default ROS 2 message types and their own ROS 2 message types for additional information. The following topics use the default ROS 2 message types:

    • /applanix/lvx_client/imu_raw

• Gives the output of the INS system in ENU. Since the IMU is 9-axis, the yaw value represents the heading of the sensor.
    • /applanix/lvx_client/twist_with_covariance

      • Gives the twist output of the sensor.
    • /applanix/lvx_client/odom

      • Gives the position and orientation of the sensor from the starting point of the ROS 2 driver. Implemented with GeographicLib::LocalCartesian.

        This topic is not related to the wheel odometry.

    • /applanix/lvx_client/gnss/fix

      • Gives the latitude, longitude and height values of the sensors.

The ellipsoidal height of the WGS84 ellipsoid is given as the height value.

    • /pandar_points

      • Gives the point cloud from the LiDAR sensor.
    "},{"location":"datasets/#bus-odd-operational-design-domain-datasets","title":"Bus-ODD (Operational Design Domain) datasets","text":""},{"location":"datasets/#leo-drive-isuzu-sensor-data","title":"Leo Drive - ISUZU sensor data","text":"

    This dataset contains data from the Isuzu bus used in the Bus ODD project.

The dataset contains data from the following sensors:

    • 1 x VLP16
    • 2 x VLP32C
    • 1 x Applanix POS LV 120 GNSS/INS
    • 3 x Lucid Vision Triton 5.4MP cameras (left, right, front)
    • Vehicle status report

It also contains the /tf topic for static transformations between sensors.

    "},{"location":"datasets/#required-message-types","title":"Required message types","text":"

    The GNSS data is available in sensor_msgs/msg/NavSatFix message type.

The raw Applanix messages are also included as applanix_msgs/msg/NavigationPerformanceGsof50 and applanix_msgs/msg/NavigationSolutionGsof49 message types. In order to play back these messages, you need to build and source the applanix_msgs package.

    # Create a workspace and clone the repository\nmkdir -p ~/applanix_ws/src && cd \"$_\"\ngit clone https://github.com/autowarefoundation/applanix.git\ncd ..\n\n# Build the workspace\ncolcon build --symlink-install --packages-select applanix_msgs\n\n# Source the workspace\nsource ~/applanix_ws/install/setup.bash\n\n# Now you can play back the messages\n

Also make sure to source the Autoware Universe workspace.
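For example, assuming your Autoware workspace is located at ~/autoware (adjust the path to match your setup):

source ~/autoware/install/setup.bash\n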

    "},{"location":"datasets/#download-instructions","title":"Download instructions","text":"
    # Install awscli\n$ sudo apt update && sudo apt install awscli -y\n\n# This will download the entire dataset to the current directory.\n# (About 10.9GB of data)\n$ aws s3 sync s3://autoware-files/collected_data/2022-08-22_leo_drive_isuzu_bags/ ./2022-08-22_leo_drive_isuzu_bags  --no-sign-request\n\n# Optionally,\n# If you instead want to download a single bag file, you can get a list of the available files with following:\n$ aws s3 ls s3://autoware-files/collected_data/2022-08-22_leo_drive_isuzu_bags/ --no-sign-request\n   PRE all-sensors-bag1_compressed/\n   PRE all-sensors-bag2_compressed/\n   PRE all-sensors-bag3_compressed/\n   PRE all-sensors-bag4_compressed/\n   PRE all-sensors-bag5_compressed/\n   PRE all-sensors-bag6_compressed/\n   PRE driving_20_kmh_2022_06_10-16_01_55_compressed/\n   PRE driving_30_kmh_2022_06_10-15_47_42_compressed/\n\n# Then you can download a single bag file with the following:\naws s3 sync s3://autoware-files/collected_data/2022-08-22_leo_drive_isuzu_bags/all-sensors-bag1_compressed/ ./all-sensors-bag1_compressed  --no-sign-request\n
    "},{"location":"datasets/#autocoreai-lidar-ros-2-bag-file-and-pcap","title":"AutoCore.ai - lidar ROS 2 bag file and pcap","text":"

This dataset contains pcap files and ROS 2 bag files from an Ouster OS1-64 LiDAR. The pcap file and the ROS 2 bag file were recorded at the same time, with a slight difference in duration.

    Click here to download (~553MB)

    Reference Issue

    "},{"location":"datasets/data-anonymization/","title":"Rosbag2 Anonymizer","text":""},{"location":"datasets/data-anonymization/#rosbag2-anonymizer","title":"Rosbag2 Anonymizer","text":""},{"location":"datasets/data-anonymization/#overview","title":"Overview","text":"

Autoware provides a tool (autoware_rosbag2_anonymizer) to anonymize ROS 2 bag files. This tool is useful when you want to share your data with the Autoware community while keeping personal information private.

With this tool, you can blur any object (faces, license plates, etc.) in your bag files and obtain a new bag file with the blurred images.

    "},{"location":"datasets/data-anonymization/#installation","title":"Installation","text":""},{"location":"datasets/data-anonymization/#clone-the-repository","title":"Clone the repository","text":"
    git clone https://github.com/autowarefoundation/autoware_rosbag2_anonymizer.git\ncd autoware_rosbag2_anonymizer\n
    "},{"location":"datasets/data-anonymization/#download-the-pretrained-models","title":"Download the pretrained models","text":"
    wget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth\n\nwget https://huggingface.co/ShilongLiu/GroundingDINO/resolve/main/GroundingDINO_SwinB.cfg.py\nwget https://huggingface.co/ShilongLiu/GroundingDINO/resolve/main/groundingdino_swinb_cogcoor.pth\n\nwget https://github.com/autowarefoundation/autoware_rosbag2_anonymizer/releases/download/v0.0.0/yolov8x_anonymizer.pt\nwget https://github.com/autowarefoundation/autoware_rosbag2_anonymizer/releases/download/v0.0.0/yolo_config.yaml\n
    "},{"location":"datasets/data-anonymization/#install-ros-2-mcap-dependencies-if-you-will-use-mcap-files","title":"Install ROS 2 mcap dependencies if you will use mcap files","text":"

    Warning

Be sure that you have installed ROS 2 on your system.

    sudo apt install ros-humble-rosbag2-storage-mcap\n
    "},{"location":"datasets/data-anonymization/#install-autoware_rosbag2_anonymizer-tool","title":"Install autoware_rosbag2_anonymizer tool","text":"

    Before installing the tool, you should update the pip package manager.

    python3 -m pip install pip -U\n

    Then, you can install the tool with the following command.

    python3 -m pip install .\n
    "},{"location":"datasets/data-anonymization/#configuration","title":"Configuration","text":"

    Define prompts in the validation.json file. The tool will use these prompts to detect objects. You can add your prompts as dictionaries under the prompts key. Each dictionary should have two keys:

• prompt: The prompt that will be used to detect the object. Objects matching this prompt will be blurred in the anonymization process.
• should_inside: A list of prompts that the detected object should be inside. If the object is not inside any of these prompts, the tool will not blur it.
    {\n\"prompts\": [\n{\n\"prompt\": \"license plate\",\n\"should_inside\": [\"car\", \"bus\", \"...\"]\n},\n{\n\"prompt\": \"human face\",\n\"should_inside\": [\"person\", \"human body\", \"...\"]\n}\n]\n}\n

You should set your configuration in the configuration files under the config folder according to your usage. The following instructions will guide you through setting up each configuration file.

    • config/anonymize_with_unified_model.yaml
    rosbag:\ninput_bags_folder: \"/path/to/input_bag_folder\" # Path to the input folder which contains ROS 2 bag files\noutput_bags_folder: \"/path/to/output_folder\" # Path to the output ROS 2 bag folder\noutput_save_compressed_image: True # Save images as compressed images (True or False)\noutput_storage_id: \"sqlite3\" # Storage id for the output bag file (`sqlite3` or `mcap`)\n\ngrounding_dino:\nbox_threshold: 0.1 # Threshold for the bounding box (float)\ntext_threshold: 0.1 # Threshold for the text (float)\nnms_threshold: 0.1 # Threshold for the non-maximum suppression (float)\n\nopen_clip:\nscore_threshold: 0.7 # Validity threshold for the OpenCLIP model (float\n\nyolo:\nconfidence: 0.15 # Confidence threshold for the YOLOv8 model (float)\n\nbbox_validation:\niou_threshold: 0.9 # Threshold for the intersection over union (float), if the intersection over union is greater than this threshold, the object will be selected as inside the validation prompt\n\nblur:\nkernel_size: 31 # Kernel size for the Gaussian blur (int)\nsigma_x: 11 # Sigma x for the Gaussian blur (int)\n
    • config/yolo_create_dataset.yaml
    rosbag:\ninput_bags_folder: \"/path/to/input_bag_folder\" # Path to the input ROS 2 bag files folder\n\ndataset:\noutput_dataset_folder: \"/path/to/output/dataset\" # Path to the output dataset folder\noutput_dataset_subsample_coefficient: 25 # Subsample coefficient for the dataset (int)\n\ngrounding_dino:\nbox_threshold: 0.1 # Threshold for the bounding box (float)\ntext_threshold: 0.1 # Threshold for the text (float)\nnms_threshold: 0.1 # Threshold for the non-maximum suppression (float)\n\nopen_clip:\nscore_threshold: 0.7 # Validity threshold for the OpenCLIP model (float\n\nbbox_validation:\niou_threshold: 0.9 # Threshold for the intersection over union (float), if the intersection over union is greater than this threshold, the object will be selected as inside the validation prompt\n
    • config/yolo_train.yaml
    dataset:\ninput_dataset_yaml: \"path/to/data.yaml\" # Path to the config file of the dataset, which is created in the previous step\n\nyolo:\nepochs: 100 # Number of epochs for the YOLOv8 model (int)\nmodel: \"yolov8x.pt\" # Select the base model for YOLOv8 ('yolov8x.pt' 'yolov8l.pt', 'yolov8m.pt', 'yolov8n.pt')\n
    • config/yolo_anonymize.yaml
    rosbag:\ninput_bag_path: \"/path/to/input_bag/bag.mcap\" # Path to the input ROS 2 bag file with 'mcap' or 'sqlite3' extension\noutput_bag_path: \"/path/to/output_bag_file\" # Path to the output ROS 2 bag folder\noutput_save_compressed_image: True # Save images as compressed images (True or False)\noutput_storage_id: \"sqlite3\" # Storage id for the output bag file (`sqlite3` or `mcap`)\n\nyolo:\nmodel: \"path/to/yolo/model\" # Path to the trained YOLOv8 model file (`.pt` extension) (you can download the pre-trained model from releases)\nconfig_path: \"path/to/input/data.yaml\" # Path to the config file of the dataset, which is created in the previous step\nconfidence: 0.15 # Confidence threshold for the YOLOv8 model (float)\n\nblur:\nkernel_size: 31 # Kernel size for the Gaussian blur (int)\nsigma_x: 11 # Sigma x for the Gaussian blur (int)\n
    "},{"location":"datasets/data-anonymization/#usage","title":"Usage","text":"

    The tool provides two options to anonymize images in ROS 2 bag files.

    Warning

If your ROS 2 bag file includes custom message types from Autoware or any other packages, you should source their workspaces before running the tool.

    You can source Autoware workspace with the following command.

    source /path/to/your/workspace/install/setup.bash\n
    "},{"location":"datasets/data-anonymization/#option-1-anonymize-with-unified-model","title":"Option 1: Anonymize with Unified Model","text":"

You provide a single rosbag, and the tool anonymizes the images in it with a unified model. The model is a combination of GroundingDINO, OpenCLIP, YOLOv8 and SegmentAnything. If you don't want to use the pre-trained YOLOv8 model, you can follow the instructions in the second option to train your own YOLOv8 model.

    You should set your configuration in config/anonymize_with_unified_model.yaml file.

    python3 main.py config/anonymize_with_unified_model.yaml --anonymize_with_unified_model\n
    "},{"location":"datasets/data-anonymization/#option-2-anonymize-using-the-yolov8-model-trained-on-a-dataset-created-with-the-unified-model","title":"Option 2: Anonymize Using the YOLOv8 Model Trained on a Dataset Created with the Unified Model","text":""},{"location":"datasets/data-anonymization/#step-1-create-a-dataset","title":"Step 1: Create a Dataset","text":"

    Create an initial dataset with the unified model. You can provide multiple ROS 2 bag files to create a dataset. After running the following command, the tool will create a dataset in YOLO format.

    You should set your configuration in config/yolo_create_dataset.yaml file.

    python3 main.py config/yolo_create_dataset.yaml --yolo_create_dataset\n
    "},{"location":"datasets/data-anonymization/#step-2-manually-label-the-missing-labels","title":"Step 2: Manually Label the Missing Labels","text":"

The dataset created in the first step has some missing labels. You should add the missing labels manually. You can use the following example tools to label the missing labels:

    • label-studio
    • Roboflow (You can use the free version)
    "},{"location":"datasets/data-anonymization/#step-3-split-the-dataset","title":"Step 3: Split the Dataset","text":"

    Split the dataset into training and validation sets. Give the path to the dataset folder which is created in the first step.

    autoware-rosbag2-anonymizer-split-dataset /path/to/dataset/folder\n
    "},{"location":"datasets/data-anonymization/#step-4-train-the-yolov8-model","title":"Step 4: Train the YOLOv8 Model","text":"

    Train the YOLOv8 model with the dataset which is created in the first step.

    You should set your configuration in config/yolo_train.yaml file.

    python3 main.py config/yolo_train.yaml --yolo_train\n
    "},{"location":"datasets/data-anonymization/#step-5-anonymize-images-in-ros-2-bag-files","title":"Step 5: Anonymize Images in ROS 2 Bag Files","text":"

Anonymize the images in your ROS 2 bag files with the trained YOLOv8 model. If you want to anonymize your ROS 2 bag file with only the YOLOv8 model, use the following command. However, we recommend using the unified model for better results; to do that, follow Option 1 with the YOLOv8 model you trained.

    You should set your configuration in config/yolo_anonymize.yaml file.

    python3 main.py config/yolo_anonymize.yaml --yolo_anonymize\n
    "},{"location":"datasets/data-anonymization/#troubleshooting","title":"Troubleshooting","text":"
    • Error 1: torch.OutOfMemoryError: CUDA out of memory
    torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 10.87 GiB of which 1010.88 MiB is free. Including non-PyTorch memory, this process has 8.66 GiB memory in use. Of the allocated memory 8.21 GiB is allocated by PyTorch, and 266.44 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n

This error occurs when the GPU memory is not enough to run the model. You can set the following environment variable to avoid this error.

    export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True\n
    "},{"location":"datasets/data-anonymization/#share-your-anonymized-data","title":"Share Your Anonymized Data","text":"

After anonymizing your data, you can share it with the Autoware community. To do so, create an issue and a pull request to the Autoware Documentation repository.

    "},{"location":"datasets/data-anonymization/#citation","title":"Citation","text":"
    @article{liu2023grounding,\ntitle={Grounding dino: Marrying dino with grounded pre-training for open-set object detection},\nauthor={Liu, Shilong and Zeng, Zhaoyang and Ren, Tianhe and Li, Feng and Zhang, Hao and Yang, Jie and Li, Chunyuan and Yang, Jianwei and Su, Hang and Zhu, Jun and others},\njournal={arXiv preprint arXiv:2303.05499},\nyear={2023}\n}\n
    @article{kirillov2023segany,\ntitle={Segment Anything},\nauthor={Kirillov, Alexander and Mintun, Eric and Ravi, Nikhila and Mao, Hanzi and Rolland, Chloe and Gustafson, Laura and Xiao, Tete and Whitehead, Spencer and Berg, Alexander C. and Lo, Wan-Yen and Doll{\\'a}r, Piotr and Girshick, Ross},\njournal={arXiv:2304.02643},\nyear={2023}\n}\n
    @software{ilharco_gabriel_2021_5143773,\nauthor       = {Ilharco, Gabriel and\n                  Wortsman, Mitchell and\n                  Wightman, Ross and\n                  Gordon, Cade and\n                  Carlini, Nicholas and\n                  Taori, Rohan and\n                  Dave, Achal and\n                  Shankar, Vaishaal and\n                  Namkoong, Hongseok and\n                  Miller, John and\n                  Hajishirzi, Hannaneh and\n                  Farhadi, Ali and\n                  Schmidt, Ludwig},\ntitle        = {OpenCLIP},\nmonth        = jul,\nyear         = 2021,\nnote         = {If you use this software, please cite it as below.},\npublisher    = {Zenodo},\nversion      = {0.1},\ndoi          = {10.5281/zenodo.5143773},\nurl          = {https://doi.org/10.5281/zenodo.5143773}\n}\n
    "},{"location":"design/","title":"Autoware's Design","text":""},{"location":"design/#autowares-design","title":"Autoware's Design","text":""},{"location":"design/#architecture","title":"Architecture","text":"

    Core and Universe.

Autoware provides the runtimes and technology components as open-source software. The runtimes are based on the Robot Operating System (ROS). The technology components are provided by contributors, which include, but are not limited to:

    • Sensing
      • Camera Component
      • LiDAR Component
      • RADAR Component
      • GNSS Component
    • Computing
      • Localization Component
      • Perception Component
      • Planning Component
      • Control Component
      • Logging Component
      • System Monitoring Component
    • Actuation
      • DBW Component
    • Tools
      • Simulator Component
      • Mapping Component
      • Remote Component
      • ML Component
      • Annotation Component
      • Calibration Component
    "},{"location":"design/#concern-assumption-and-limitation","title":"Concern, Assumption, and Limitation","text":"

    The downside of the microautonomy architecture is that the computational performance of end applications is sacrificed due to its data path overhead attributed to functional modularity. In other words, the trade-off characteristic of the microautonomy architecture exists between computational performance and functional modularity. This trade-off problem can be solved technically by introducing real-time capability. This is because autonomous driving systems are not really designed to be real-fast, that is, low-latency computing is nice-to-have but not must-have. The must-have feature for autonomous driving systems is that the latency of computing is predictable, that is, the systems are real-time. As a whole, we can compromise computational performance to an extent that is predictable enough to meet the given timing constraints of autonomous driving systems, often referred to as deadlines of computation.

    "},{"location":"design/#design","title":"Design","text":"

    Warning

    Under Construction

    "},{"location":"design/#autoware-concepts","title":"Autoware concepts","text":"

    The Autoware concepts page describes the design philosophy of Autoware. Readers (service providers and all Autoware users) will learn the basic concepts underlying Autoware development, such as microautonomy and the Core/Universe architecture.

    "},{"location":"design/#autoware-architecture","title":"Autoware architecture","text":"

    The Autoware architecture page describes an overview of each module that makes up Autoware. Readers (all Autoware users) will gain a high-level picture of how each module that composes Autoware works.

    "},{"location":"design/#autoware-interfaces","title":"Autoware interfaces","text":"

    The Autoware interfaces page describes in detail the interface of each module that makes up Autoware. Readers (intermediate developers) will learn how to add new functionality to Autoware and how to integrate their own modules with Autoware.

    "},{"location":"design/#configuration-management","title":"Configuration management","text":""},{"location":"design/#conclusion","title":"Conclusion","text":""},{"location":"design/autoware-architecture/","title":"Architecture overview","text":""},{"location":"design/autoware-architecture/#architecture-overview","title":"Architecture overview","text":"

    This page describes the architecture of Autoware.

    "},{"location":"design/autoware-architecture/#introduction","title":"Introduction","text":"

The current Autoware is designed as a layered architecture that clarifies each module's role and simplifies the interfaces between modules. By doing so:

    • Autoware's internal processing becomes more transparent.
    • Collaborative development is made easier because of the reduced interdependency between modules.
    • Users can easily replace an existing module (e.g. localization) with their own software component by simply wrapping their software to fit in with Autoware's interface.

    Note that the initial focus of this architecture design was solely on driving capability, and so the following features were left as future work:

    • Fail safe
    • Human Machine Interface
    • Real-time processing
    • Redundant system
    • State monitoring system
    "},{"location":"design/autoware-architecture/#high-level-architecture-design","title":"High-level architecture design","text":"

    Autoware's architecture consists of the following seven stacks. Each linked page contains a more detailed set of requirements and use cases specific to that stack:

    • Sensing design
    • Map design
    • Localization design
    • Perception design
    • Planning design
    • Control design
    • Vehicle Interface design
    "},{"location":"design/autoware-architecture/#node-diagram","title":"Node diagram","text":"

    A diagram showing Autoware's nodes in the default configuration can be found on the Node diagram page. Detailed documents for each node are available in the Autoware Universe docs.

    Note that Autoware configurations are scalable / selectable and will vary depending on the environment and required use cases.

    "},{"location":"design/autoware-architecture/#references","title":"References","text":"
    • The architecture presentation given to the AWF Technical Steering Committee, March 2020
    "},{"location":"design/autoware-architecture/control/","title":"Control component design","text":""},{"location":"design/autoware-architecture/control/#control-component-design","title":"Control component design","text":""},{"location":"design/autoware-architecture/control/#abstract","title":"Abstract","text":"

    This document presents the design concept of the Control Component. The content is as follows:

    • Autoware Control Design
      • Outlining the policy for Autoware's control, which deals with only general information for autonomous driving systems and provides generic control commands to the vehicle.
    • Vehicle Adaptation Design
      • Describing the policy for vehicle adaptation, which utilizes adapter mechanisms to standardize the characteristics of the vehicle's drive system and integrate it with Autoware.
    • Control Feature Design
      • Demonstrating the features provided by Autoware's control.
      • Presenting the approach towards the functions installed in the vehicle such as ABS.
    "},{"location":"design/autoware-architecture/control/#autoware-control-design","title":"Autoware Control Design","text":"

    The Control Component generates the control signal to which the Vehicle Component subscribes. The generated control signals are computed based on the reference trajectories from the Planning Component.

    The Control Component consists of two modules. The trajectory_follower module generates a vehicle control command to follow the reference trajectory received from the planning module. The command includes, for example, the desired steering angle and target speed. The vehicle_command_gate is responsible for filtering the control command to prevent abnormal values and then sending it to the vehicle. This gate also allows switching between multiple sources such as the MRM (minimal risk maneuver) module or some remote control module, in addition to the trajectory follower.
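
    As an illustration only (not the actual vehicle_cmd_gate implementation), the following Python sketch shows the two responsibilities described above: clamping command values to safe ranges and switching between command sources such as the trajectory follower, an MRM module, or a remote operator. All limits and source names here are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class ControlCommand:
    steering_angle: float  # [rad]
    velocity: float        # [m/s]
    acceleration: float    # [m/s^2]


# Hypothetical limits; real limits come from the vehicle parameters.
MAX_STEER = 0.6
MAX_VEL = 15.0
MAX_ACC = 2.0
MIN_ACC = -4.0


def clamp(value: float, lo: float, hi: float) -> float:
    return max(lo, min(hi, value))


class CommandGate:
    """Filters commands and switches between sources (trajectory follower, MRM, remote)."""

    def __init__(self) -> None:
        self.sources: dict[str, ControlCommand] = {}
        self.active_source = "trajectory_follower"

    def update(self, source: str, cmd: ControlCommand) -> None:
        self.sources[source] = cmd

    def select(self, source: str) -> None:
        self.active_source = source  # e.g. switch to "mrm" in an emergency

    def output(self) -> ControlCommand:
        cmd = self.sources[self.active_source]
        return ControlCommand(
            steering_angle=clamp(cmd.steering_angle, -MAX_STEER, MAX_STEER),
            velocity=clamp(cmd.velocity, 0.0, MAX_VEL),
            acceleration=clamp(cmd.acceleration, MIN_ACC, MAX_ACC),
        )
```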

    The Autoware control system is designed as a platform for automated driving systems that can be compatible with a diverse range of vehicles.

    The control process in Autoware uses only general information (such as target acceleration and deceleration); no vehicle-specific information (such as brake pressure) is used. It can therefore be adjusted independently of the vehicle's drive interface, enabling easy integration and performance tuning.

    Furthermore, significant differences that affect vehicle motion constraints, such as two-wheel steering versus four-wheel steering, are addressed by switching the vehicle model used for control, achieving control specialized for each characteristic.

    Autoware's control module outputs the necessary information to control the vehicle as a substitute for a human driver. For example, the control command from the control module looks like the following:

    - Target steering angle
    - Target steering torque
    - Target speed
    - Target acceleration

    Note that vehicle-specific values such as pedal positions and low-level information such as individual wheel rotation speeds are excluded from the command.

    "},{"location":"design/autoware-architecture/control/#vehicle-adaptation-design","title":"Vehicle Adaptation Design","text":""},{"location":"design/autoware-architecture/control/#vehicle-interface-adapter","title":"Vehicle interface adapter","text":"

    Autoware is designed to be an autonomous driving platform able to accommodate vehicles with various drivetrain types.

    This is an explanation of how Autoware handles the standardization of systems with different vehicle drivetrains. Vehicle drivetrain interfaces are diverse, including steering angle, steering angular velocity, steering torque, speed, accel/brake pedals, and brake pressure. To accommodate these differences, Autoware adds an adapter module between the control component and the vehicle interface. This module performs the conversion between the proprietary message types used by the vehicle (such as brake pressure) and the generic types used by Autoware (such as desired acceleration). By providing this conversion information, differences between vehicle drivetrains can be accommodated.

    If the information is not known in advance, an automatic calibration tool can be used. Calibration will occur within limited degrees of freedom, generating the information necessary for the drivetrain conversion automatically.
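
    The following is a minimal sketch of such an adapter, assuming a hypothetical 2D lookup table that maps (velocity, target acceleration) to an accelerator/brake pedal value; the axes and table values are illustrative and not taken from any real vehicle.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical calibration table: rows = velocity [m/s], columns = target acceleration [m/s^2].
# Positive output = accelerator pedal position, negative output = brake pedal position.
velocity_axis = np.array([0.0, 5.0, 10.0, 15.0])
accel_axis = np.array([-3.0, -1.0, 0.0, 1.0, 2.0])
pedal_table = np.array([
    [-0.8, -0.3, 0.0, 0.2, 0.4],
    [-0.9, -0.4, 0.0, 0.3, 0.5],
    [-1.0, -0.5, 0.1, 0.4, 0.6],
    [-1.0, -0.6, 0.1, 0.5, 0.7],
])

to_pedal = RegularGridInterpolator((velocity_axis, accel_axis), pedal_table)


def accel_command_to_pedal(velocity: float, target_accel: float) -> float:
    """Convert Autoware's generic acceleration command to a vehicle-specific pedal value."""
    v = np.clip(velocity, velocity_axis[0], velocity_axis[-1])
    a = np.clip(target_accel, accel_axis[0], accel_axis[-1])
    return float(to_pedal([[v, a]])[0])


print(accel_command_to_pedal(velocity=8.0, target_accel=1.5))
```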

    This configuration is summarized in the following diagram.

    "},{"location":"design/autoware-architecture/control/#examples-of-several-vehicle-interfaces","title":"Examples of several vehicle interfaces","text":"

    The following are examples of several drivetrain types in the vehicle interface.

    Vehicle | Lateral interface | Longitudinal interface | Note
    --- | --- | --- | ---
    Lexus | Steering angle | Accel/brake pedal position | Acceleration lookup table conversion for longitudinal
    JPN TAXI | Steering angle | Accel/brake pedal position | Acceleration lookup table conversion for longitudinal
    GSM8 | Steering EPS voltage | Acceleration motor voltage, deceleration brake hydraulic pressure | Lookup table and PID conversion for lateral and longitudinal
    YMC Golfcart | Steering angle | Velocity |
    Logiee | Yaw rate | Velocity |
    F1 TENTH | Steering angle | Motor RPM | Interface code

    "},{"location":"design/autoware-architecture/control/#control-feature-design","title":"Control Feature Design","text":"

    The following lists the features provided by Autoware's Control/Vehicle component, as well as the conditions and assumptions required to utilize them effectively.

    Proper operation within the ODD is limited by factors such as whether these functions are enabled, delay times, calibration accuracy and degradation rate, and sensor accuracy.

    Feature | Description | Requirements/Assumptions | Note | Limitation for now
    --- | --- | --- | --- | ---
    Lateral Control | Control the drivetrain system related to lateral vehicle motion | | Trying to increase the number of vehicle types that can be supported in the future. | Only front-steering type is supported.
    Longitudinal Control | Control the drivetrain system related to longitudinal vehicle motion | | |
    Slope Compensation | Supports precise vehicle motion control on slopes | Gradient information can be obtained from maps or sensors attached to the chassis | If gradient information is not available, the gradient is estimated from the vehicle's pitch angle. |
    Delay Compensation | Controls the drivetrain system appropriately in the presence of time delays | The drivetrain delay information is provided in advance | If there is no delay information, the drivetrain delay is estimated automatically (automatic calibration). However, the effect of delay cannot be completely eliminated, especially in scenarios with sudden changes in speed. | Only fixed delay times can be set for longitudinal and lateral drivetrain systems separately. It does not accommodate different delay times for the accelerator and brake.
    Drivetrain IF Conversion (Lateral Control) | Converts the drivetrain-specific information of the vehicle into the drivetrain information used by Autoware (e.g., target steering angular velocity → steering torque) | The conversion information is provided in advance | If there is no conversion information, the conversion map is estimated automatically (automatic calibration). | The degree of freedom for conversion is limited (2D lookup table + PID FB).
    Drivetrain IF Conversion (Longitudinal Control) | Converts the drivetrain-specific information of the vehicle into the drivetrain information used by Autoware (e.g., target acceleration → accelerator/brake pedal value) | The conversion information is provided in advance | If there is no conversion information, the conversion map is estimated automatically (automatic calibration). | The degree of freedom for conversion is limited (2D lookup table + PID FB).
    Automatic Calibration | Automatically estimates and applies values such as the drivetrain IF conversion map and delay time. | The drivetrain status can be obtained (must) | |
    Anomaly Detection | Notifies when there is a discrepancy in the calibration or unexpected drivetrain behavior | The drivetrain status can be obtained (must) | |
    Steering Zero Point Correction | Corrects the midpoint of the steering to achieve appropriate steering control | The drivetrain status can be obtained (must) | |
    Steering Deadzone Correction | Corrects the deadzone of the steering to achieve appropriate steering control | The steering deadzone parameter is provided in advance | If the parameter is unknown, the deadzone parameter is estimated from driving information | Not available now
    Steering Deadzone Estimation | Dynamically estimates the steering deadzone from driving data | | | Not available now
    Weight Compensation | Performs appropriate vehicle control according to weight | Weight information can be obtained from sensors | If there is no weight sensor, estimate the weight from driving information. | Currently not available
    Weight Estimation | Dynamically estimates weight from driving data | | | Currently not available
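
    As one concrete example of the features above, slope compensation can be sketched as follows. This is a simplified illustration assuming a pitch-angle convention where a positive pitch means the vehicle is facing uphill; it is not the exact implementation used in Autoware.

```python
import math

G = 9.81  # gravitational acceleration [m/s^2]


def slope_compensated_acceleration(target_accel: float, pitch_rad: float) -> float:
    """Add the gravity component along the road surface so the vehicle achieves the
    requested acceleration on a slope (pitch_rad > 0 assumed to mean uphill)."""
    return target_accel + G * math.sin(pitch_rad)


# Example: requesting 1.0 m/s^2 on a 5-degree uphill requires a larger drive command.
print(slope_compensated_acceleration(1.0, math.radians(5.0)))
```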

    The list above does not cover wheel control systems such as ABS commonly used in vehicles. Regarding these features, the following considerations are taken into account.

    "},{"location":"design/autoware-architecture/control/#integration-with-vehicle-side-functions","title":"Integration with vehicle-side functions","text":"

    ABS (Anti-lock Braking System) and ESC (Electronic Stability Control) are two functions that may be pre-installed on a vehicle and that directly impact its controllability. The control modules of Autoware assume that both ABS and ESC are installed on the vehicle, and their absence may cause unreliable control depending on the target ODD. For example, for low-velocity driving in a controlled environment, these functions are not necessary.

    Also, note that this statement does not negate the development of ABS functionality in autonomous driving systems.

    "},{"location":"design/autoware-architecture/control/#autoware-capabilities-and-vehicle-requirements","title":"Autoware Capabilities and Vehicle Requirements","text":"

    As an alternative to human driving, autonomous driving systems essentially aim to handle tasks that humans can perform. This includes not only controlling the steering wheel, accelerator, and brake, but also automatically detecting issues such as poor brake response or a misaligned steering angle. However, this is a trade-off, as better vehicle performance leads to better system behavior, ultimately affecting the design of the ODD.

    On the other hand, for tasks that are not typically anticipated or cannot be handled by a human driver, processing in the vehicle ECU is expected. Examples of such scenarios include cases where the brake response is clearly delayed or when the vehicle rotates due to a single-side tire slipping. These tasks are typically handled by ABS or ESC.

    "},{"location":"design/autoware-architecture/localization/","title":"Index","text":"

    LOCALIZATION COMPONENT DESIGN DOC

    "},{"location":"design/autoware-architecture/localization/#abstract","title":"Abstract","text":""},{"location":"design/autoware-architecture/localization/#1-requirements","title":"1. Requirements","text":"

    Localization aims to estimate vehicle pose, velocity, and acceleration.

    Goals:

    • Propose a system that can estimate vehicle pose, velocity, and acceleration for as long as possible.
    • Propose a system that can diagnose the stability of estimation and send a warning message to the error-monitoring system if the estimation result is unreliable.
    • Design a vehicle localization function that can work with various sensor configurations.

    Non-goals:

    • This design document does not aim to develop a localization system that
      • is infallible in all environments
      • works outside of the pre-defined ODD (Operational Design Domain)
      • has better performance than is required for autonomous driving
    "},{"location":"design/autoware-architecture/localization/#2-sensor-configuration-examples","title":"2. Sensor Configuration Examples","text":"

    This section shows example sensor configurations and their expected performances. Each sensor has its own advantages and disadvantages, but overall performance can be improved by fusing multiple sensors.

    "},{"location":"design/autoware-architecture/localization/#3d-lidar-pointcloud-map","title":"3D-LiDAR + PointCloud Map","text":""},{"location":"design/autoware-architecture/localization/#expected-situation","title":"Expected situation","text":"
    • The vehicle is located in a structure-rich environment, such as an urban area
    "},{"location":"design/autoware-architecture/localization/#situations-that-can-make-the-system-unstable","title":"Situations that can make the system unstable","text":"
    • The vehicle is placed in a structure-less environment, such as a rural landscape, highway, or tunnel
    • Environmental changes have occurred since the map was created, such as snow cover or the construction/destruction of buildings.
    • Surrounding objects are occluded
    • The car is surrounded by objects undetectable by LiDAR, e.g., glass windows, reflections, or absorption (dark objects)
    • The environment contains laser beams at the same frequency as the car's LiDAR sensor(s)
    "},{"location":"design/autoware-architecture/localization/#functionality","title":"Functionality","text":"
    • The system can estimate the vehicle location on the point cloud map with an error of ~10 cm.
    • The system is operable at night.
    "},{"location":"design/autoware-architecture/localization/#3d-lidar-or-camera-vector-map","title":"3D-LiDAR or Camera + Vector Map","text":""},{"location":"design/autoware-architecture/localization/#expected-situation_1","title":"Expected situation","text":"
    • Road with clear white lines and loose curvatures, such as a highway or an ordinary local road.
    "},{"location":"design/autoware-architecture/localization/#situations-that-can-make-the-system-unstable_1","title":"Situations that can make the system unstable","text":"
    • White lines are scratchy or covered by rain or snow
    • Tight curvature such as intersections
    • Large reflection change of the road surface caused by rain or paint
    "},{"location":"design/autoware-architecture/localization/#functionalities","title":"Functionalities","text":"
    • Correct vehicle positions along the lateral direction.
    • Pose correction along the longitudinal direction can be inaccurate, but this can be resolved by fusing with GNSS.
    "},{"location":"design/autoware-architecture/localization/#gnss","title":"GNSS","text":""},{"location":"design/autoware-architecture/localization/#expected-situation_2","title":"Expected situation","text":"
    • The vehicle is placed in an open environment with few to no surrounding objects, such as a rural landscape.
    "},{"location":"design/autoware-architecture/localization/#situation-that-can-make-the-system-unstable","title":"Situation that can make the system unstable","text":"
    • GNSS signals are blocked by surrounding objects, e.g., tunnels or buildings.
    "},{"location":"design/autoware-architecture/localization/#functionality_1","title":"Functionality","text":"
    • The system can estimate vehicle position in the world coordinate within an error of ~10m.
    • With an RTK-GNSS (Real Time Kinematic Global Navigation Satellite System) attached, the accuracy can be improved to ~10 cm.
    • A system with this configuration can work without environment maps (both point cloud and vector map types).
    "},{"location":"design/autoware-architecture/localization/#camera-visual-odometry-visual-slam","title":"Camera (Visual Odometry, Visual SLAM)","text":""},{"location":"design/autoware-architecture/localization/#expected-situation_3","title":"Expected situation","text":"
    • The vehicle is placed in an environment with rich visual features, such as an urban area.
    "},{"location":"design/autoware-architecture/localization/#situations-that-can-make-the-system-unstable_2","title":"Situations that can make the system unstable","text":"
    • The vehicle is placed in a texture-less environment.
    • The vehicle is surrounded by other objects.
    • The camera observes significant illumination changes, such as those caused by sunshine, headlights from other vehicles or when approaching the exit of a tunnel.
    • The vehicle is placed in a dark environment.
    "},{"location":"design/autoware-architecture/localization/#functionality_2","title":"Functionality","text":"
    • The system can estimate odometry by tracking visual features.
    "},{"location":"design/autoware-architecture/localization/#wheel-speed-sensor","title":"Wheel speed sensor","text":""},{"location":"design/autoware-architecture/localization/#expected-situation_4","title":"Expected situation","text":"
    • The vehicle is running on a flat and smooth road.
    "},{"location":"design/autoware-architecture/localization/#situations-that-can-make-the-system-unstable_3","title":"Situations that can make the system unstable","text":"
    • The vehicle is running on a slippery or bumpy road, which can cause incorrect observations of wheel speed.
    "},{"location":"design/autoware-architecture/localization/#functionality_3","title":"Functionality","text":"
    • The system can acquire the vehicle velocity and estimate distance traveled.
    "},{"location":"design/autoware-architecture/localization/#imu","title":"IMU","text":""},{"location":"design/autoware-architecture/localization/#expected-environments","title":"Expected environments","text":"
    • Flat, smooth roads
    "},{"location":"design/autoware-architecture/localization/#situations-that-can-make-the-system-unstable_4","title":"Situations that can make the system unstable","text":"
    • IMUs have a bias1 that depends on the surrounding temperature, which can cause incorrect sensor observations or odometry drift.
    "},{"location":"design/autoware-architecture/localization/#functionality_4","title":"Functionality","text":"
    • The system can observe acceleration and angular velocity.
    • By integrating these observations, the system can estimate the local pose change and realize dead reckoning.
    "},{"location":"design/autoware-architecture/localization/#geomagnetic-sensor","title":"Geomagnetic sensor","text":""},{"location":"design/autoware-architecture/localization/#expected-situation_5","title":"Expected situation","text":"
    • The vehicle is placed in an environment with low magnetic noise
    "},{"location":"design/autoware-architecture/localization/#situations-that-can-make-the-system-unstable_5","title":"Situations that can make the system unstable","text":"
    • The vehicle is placed in an environment with high magnetic noise, such as one containing buildings or structures with reinforced steel or other materials that generate electromagnetic waves.
    "},{"location":"design/autoware-architecture/localization/#functionality_5","title":"Functionality","text":"
    • The system can estimate the vehicle's direction in the world coordinate system.
    "},{"location":"design/autoware-architecture/localization/#magnetic-markers","title":"Magnetic markers","text":""},{"location":"design/autoware-architecture/localization/#expected-situation_6","title":"Expected situation","text":"
    • The car is placed in an environment with magnetic markers installed.
    "},{"location":"design/autoware-architecture/localization/#situations-where-the-system-becomes-unstable","title":"Situations where the system becomes unstable","text":"
    • The markers are not maintained.
    "},{"location":"design/autoware-architecture/localization/#functionality_6","title":"Functionality","text":"
    • Vehicle location can be obtained on the world coordinate by detecting the magnetic markers.
    • The system can work even if the road is covered with snow.
    "},{"location":"design/autoware-architecture/localization/#3-requirements","title":"3. Requirements","text":"
    • By implementing different modules, various sensor configurations and algorithms can be used.
    • The localization system can start pose estimation from an ambiguous initial location.
    • The system can produce a reliable initial location estimation.
    • The system can manage the state of the initial location estimation (uninitialized, initializable, or non-initializable) and can report to the error monitor.
    "},{"location":"design/autoware-architecture/localization/#4-architecture","title":"4. Architecture","text":""},{"location":"design/autoware-architecture/localization/#abstract_1","title":"Abstract","text":"

    Two architectures are defined, \"Required\" and \"Recommended\". However, the \"Required\" architecture only contains the inputs and outputs necessary to accept various localization algorithms. To improve the reusability of each module, the required components are defined in the \"Recommended\" architecture section along with a more detailed explanation.

    "},{"location":"design/autoware-architecture/localization/#required-architecture","title":"Required Architecture","text":""},{"location":"design/autoware-architecture/localization/#input","title":"Input","text":"
    • Sensor message
      • e.g., LiDAR, camera, GNSS, IMU, CAN Bus, etc.
      • Data types should be ROS primitives for reusability
    • Map data
      • e.g., point cloud map, lanelet2 map, feature map, etc.
      • The map format should be chosen based on use case and sensor configuration
      • Note that map data is not required for some specific cases (e.g., GNSS-only localization)
    • tf, static_tf
      • map frame
      • base_link frame
    "},{"location":"design/autoware-architecture/localization/#output","title":"Output","text":"
    • Pose with covariance stamped
      • Vehicle pose, covariance, and timestamp in the map coordinate frame
      • Frequency of 50 Hz or higher (depending on the requirements of the Planning and Control components)
    • Twist with covariance stamped
      • Vehicle velocity, covariance, and timestamp in the base_link coordinate frame
      • Frequency of 50 Hz or higher
    • Accel with covariance stamped
      • Acceleration, covariance, and timestamp in the base_link coordinate frame
      • Frequency of 50 Hz or higher
    • Diagnostics
      • Diagnostics information that indicates if the localization module works properly
    • tf
      • tf of map to base_link
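
    A minimal ROS 2 (rclpy) sketch of a node that publishes these outputs at 50 Hz is shown below; the topic names are hypothetical and the message contents are placeholders, since the required architecture only fixes the message types and frames.

```python
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import (
    PoseWithCovarianceStamped,
    TwistWithCovarianceStamped,
    AccelWithCovarianceStamped,
)


class LocalizationOutputs(Node):
    def __init__(self) -> None:
        super().__init__("localization_outputs")
        self.pose_pub = self.create_publisher(PoseWithCovarianceStamped, "pose_with_covariance", 10)
        self.twist_pub = self.create_publisher(TwistWithCovarianceStamped, "twist_with_covariance", 10)
        self.accel_pub = self.create_publisher(AccelWithCovarianceStamped, "accel_with_covariance", 10)
        self.timer = self.create_timer(0.02, self.publish)  # 50 Hz

    def publish(self) -> None:
        now = self.get_clock().now().to_msg()
        pose = PoseWithCovarianceStamped()
        pose.header.stamp = now
        pose.header.frame_id = "map"          # pose is expressed in the map frame
        twist = TwistWithCovarianceStamped()
        twist.header.stamp = now
        twist.header.frame_id = "base_link"   # twist and accel are expressed in base_link
        accel = AccelWithCovarianceStamped()
        accel.header.stamp = now
        accel.header.frame_id = "base_link"
        self.pose_pub.publish(pose)
        self.twist_pub.publish(twist)
        self.accel_pub.publish(accel)


def main() -> None:
    rclpy.init()
    rclpy.spin(LocalizationOutputs())


if __name__ == "__main__":
    main()
```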
    "},{"location":"design/autoware-architecture/localization/#recommended-architecture","title":"Recommended Architecture","text":""},{"location":"design/autoware-architecture/localization/#pose-estimator","title":"Pose Estimator","text":"
    • Estimates the vehicle pose on the map coordinate by matching external sensor observation to the map
    • Provides the obtained pose and its covariance to PoseTwistFusionFilter
    "},{"location":"design/autoware-architecture/localization/#twist-accel-estimator","title":"Twist-Accel Estimator","text":"
    • Produces the vehicle velocity, angular velocity, acceleration, angular acceleration, and their covariances
      • It is possible to create a single module for both twist and acceleration or to create two separate modules - the choice of architecture is up to the developer
    • The twist estimator produces velocity and angular velocity from internal sensor observation
    • The accel estimator produces acceleration and angular acceleration from internal sensor observations
    "},{"location":"design/autoware-architecture/localization/#kinematics-fusion-filter","title":"Kinematics Fusion Filter","text":"
    • Produces the likeliest pose, velocity, acceleration, and their covariances, computed by fusing two kinds of information:
      • The pose obtained from the pose estimator.
      • The velocity and acceleration obtained from the twist-accel estimator
    • Produces tf of map to base_link according to the pose estimation result
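
    The sketch below illustrates the idea of the fusion filter in plain Python: the pose is predicted forward with the latest twist and then corrected toward the pose estimator's measurement with a simple gain. A production implementation would typically be a Kalman-filter variant (for example an EKF), so treat this only as a conceptual illustration.

```python
import math
from dataclasses import dataclass


@dataclass
class State:
    x: float = 0.0
    y: float = 0.0
    yaw: float = 0.0


class SimpleFusionFilter:
    """Fuses pose-estimator measurements with twist-based prediction (conceptual only)."""

    def __init__(self, gain: float = 0.2) -> None:
        self.state = State()
        self.gain = gain  # how strongly measurements pull the predicted state

    def predict(self, vx: float, yaw_rate: float, dt: float) -> None:
        # Integrate velocity and yaw rate (from the twist-accel estimator).
        self.state.x += vx * math.cos(self.state.yaw) * dt
        self.state.y += vx * math.sin(self.state.yaw) * dt
        self.state.yaw += yaw_rate * dt

    def correct(self, meas_x: float, meas_y: float, meas_yaw: float) -> None:
        # Pull the predicted state toward the pose estimator's measurement.
        self.state.x += self.gain * (meas_x - self.state.x)
        self.state.y += self.gain * (meas_y - self.state.y)
        self.state.yaw += self.gain * (meas_yaw - self.state.yaw)
```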
    "},{"location":"design/autoware-architecture/localization/#localization-diagnostics","title":"Localization Diagnostics","text":"
    • Monitors and guarantees the stability and reliability of pose estimation by fusing information obtained from multiple localization modules
    • Reports error status to the error monitor
    "},{"location":"design/autoware-architecture/localization/#tf-tree","title":"TF tree","text":"frame meaning earth ECEF (Earth Centered Earth Fixed\uff09 map Origin of the map coordinate (ex. MGRS origin) viewer User-defined frame for rviz base_link Reference pose of the ego-vehicle (projection of the rear-axle center onto the ground surface) sensor Reference pose of each sensor

    Developers can optionally add other frames such as odom or base_footprint as long as the tf structure above is maintained.
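
    For reference, broadcasting the map to base_link transform from the fused pose could look like the following rclpy sketch; the node name and placeholder values are illustrative.

```python
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import TransformStamped
from tf2_ros import TransformBroadcaster


class MapToBaseLinkBroadcaster(Node):
    def __init__(self) -> None:
        super().__init__("map_to_base_link_broadcaster")
        self.broadcaster = TransformBroadcaster(self)
        self.timer = self.create_timer(0.02, self.broadcast)  # 50 Hz

    def broadcast(self) -> None:
        t = TransformStamped()
        t.header.stamp = self.get_clock().now().to_msg()
        t.header.frame_id = "map"        # parent frame
        t.child_frame_id = "base_link"   # child frame
        # In a real system these values come from the fused pose estimate.
        t.transform.translation.x = 0.0
        t.transform.translation.y = 0.0
        t.transform.translation.z = 0.0
        t.transform.rotation.w = 1.0
        self.broadcaster.sendTransform(t)


def main() -> None:
    rclpy.init()
    rclpy.spin(MapToBaseLinkBroadcaster())


if __name__ == "__main__":
    main()
```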

    "},{"location":"design/autoware-architecture/localization/#the-localization-modules-ideal-functionality","title":"The localization module's ideal functionality","text":"
    • The localization module should provide pose, velocity, and acceleration for control, planning, and perception.
    • Latency and stagger should be sufficiently small or adjustable such that the estimated values can be used for control within the ODD (Operational Design Domain).
    • The localization module should produce the pose on a fixed coordinate frame.
    • Sensors should be independent of each other so that they can be easily replaced.
    • The localization module should provide a status indicating whether or not the autonomous vehicle can operate with the self-contained function or map information.
    • Tools or manuals should describe how to set proper parameters for the localization module
    • Valid calibration parameters should be provided to align different frame or pose coordinates and sensor timestamps.
    "},{"location":"design/autoware-architecture/localization/#kpi","title":"KPI","text":"

    To maintain sufficient pose estimation performance for safe operation, the following metrics are considered:

    • Safety
      • The distance traveled within the ODD where pose estimation met the required accuracy, divided by the overall distance traveled within the ODD, as a percentage.
      • The anomaly detection rate for situations where the localization module cannot estimate pose within the ODD
      • The accuracy of detecting when the vehicle goes outside of the ODD, as a percentage.
    • Computational load
    • Latency
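
    As an illustration of the first safety metric, the percentage of distance driven with sufficiently accurate pose estimation could be computed as in the sketch below; the error threshold and the log format are hypothetical.

```python
def accurate_distance_ratio(segments: list[tuple[float, float]], max_error_m: float = 0.3) -> float:
    """segments: (distance_travelled_m, pose_error_m) pairs recorded within the ODD.
    Returns the percentage of distance for which the pose error met the requirement."""
    total = sum(d for d, _ in segments)
    accurate = sum(d for d, err in segments if err <= max_error_m)
    return 100.0 * accurate / total if total > 0.0 else 0.0


# Example log: 120 m with 0.1 m error, 30 m with 0.5 m error, 50 m with 0.2 m error.
print(accurate_distance_ratio([(120.0, 0.1), (30.0, 0.5), (50.0, 0.2)]))  # -> 85.0
```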
    "},{"location":"design/autoware-architecture/localization/#5-interface-and-data-structure","title":"5. Interface and Data Structure","text":""},{"location":"design/autoware-architecture/localization/#6-concerns-assumptions-and-limitations","title":"6. Concerns, Assumptions, and Limitations","text":""},{"location":"design/autoware-architecture/localization/#prerequisites-of-sensors-and-inputs","title":"Prerequisites of sensors and inputs","text":""},{"location":"design/autoware-architecture/localization/#sensor-prerequisites","title":"Sensor prerequisites","text":"
    • Input data is not defective.
      • Internal sensor observation such as IMU continuously keeps the proper frequency.
    • Input data has correct and exact time stamps.
      • Estimated poses can be inaccurate or unstable if the timestamps are not exact.
    • Sensors are correctly mounted with exact positioning and accessible from TF.
      • If the sensor positions are inaccurate, estimation results may be incorrect or unstable.
      • A sensor calibration framework is required to properly obtain the sensor positions.
    "},{"location":"design/autoware-architecture/localization/#map-prerequisites","title":"Map prerequisites","text":"
    • Sufficient information is contained within the map.
      • Pose estimation might be unstable if there is insufficient information in the map.
      • A testing framework is necessary to check if the map has adequate information for pose estimation.
    • Map does not differ greatly from the actual environment.
      • Pose estimation might be unstable if the actual environment has different objects from the map.
      • Maps need updates according to new objects and seasonal changes.
    • Maps must be aligned to a uniform coordinate system, or an alignment framework must be in place.
      • If multiple maps with different coordinate systems are used, the misalignment between them can affect the localization performance.
    "},{"location":"design/autoware-architecture/localization/#computational-resources","title":"Computational resources","text":"
    • Sufficient computational resources should be provided to maintain accuracy and computation speed.
    1. For more details about bias, refer to the VectorNav IMU specifications page.

    "},{"location":"design/autoware-architecture/map/","title":"Map component design","text":""},{"location":"design/autoware-architecture/map/#map-component-design","title":"Map component design","text":""},{"location":"design/autoware-architecture/map/#1-overview","title":"1. Overview","text":"

    Autoware relies on high-definition point cloud maps and vector maps of the driving environment to perform various tasks such as localization, route planning, traffic light detection, and predicting the trajectories of pedestrians and other vehicles.

    This document describes the design of the map component of Autoware, including its requirements, architecture design, features, data formats, and the interface used to distribute map information to the rest of the autonomous driving stack.

    "},{"location":"design/autoware-architecture/map/#2-requirements","title":"2. Requirements","text":"

    The map should provide two types of information to the rest of the stack:

    • Semantic information about roads as a vector map
    • Geometric information about the environment as a point cloud map (optional)

    A vector map contains highly accurate information about a road network, lane geometry, and traffic lights. It is required for route planning, traffic light detection, and predicting the trajectories of other vehicles and pedestrians.

    A 3D point cloud map is primarily used for LiDAR-based localization and for part of perception in Autoware. In order to determine the current position and orientation of the vehicle, a live scan captured from one or more LiDAR units is matched against a pre-generated 3D point cloud map. Therefore, an accurate point cloud map is crucial for good localization results. However, if the vehicle has an alternative localization method with sufficient accuracy, for example camera-based localization, a point cloud map may not be required to use Autoware.

    In addition to the above two types of maps, Autoware also requires a supplemental file specifying the coordinate system of the map in a geodetic system.

    "},{"location":"design/autoware-architecture/map/#3-architecture","title":"3. Architecture","text":"

    This diagram describes the high-level architecture of the Map component in Autoware.

    The Map component consists of the following sub-components:

    • Point Cloud Map Loading: Load and publish point cloud map
    • Vector Map Loading: Load and publish vector map
    • Projection Loading: Load and publish projection information for conversion between local coordinate (x, y, z) and geodetic coordinate (latitude, longitude, altitude)
    "},{"location":"design/autoware-architecture/map/#4-component-interface","title":"4. Component interface","text":""},{"location":"design/autoware-architecture/map/#input-to-the-map-component","title":"Input to the map component","text":"
    • From file system
      • Point cloud map and its metadata file
      • Vector map
      • Projection information
    "},{"location":"design/autoware-architecture/map/#output-from-the-map-component","title":"Output from the map component","text":"
    • To Sensing
      • Projection information: Used to convert GNSS data from geodetic coordinate system to local coordinate system
    • To Localization
      • Point cloud map: Used for LiDAR-based localization
      • Vector map: Used for localization methods based on road markings, etc
    • To Perception
      • Point cloud map: Used for obstacle segmentation by comparing LiDAR and point cloud map
      • Vector map: Used for vehicle trajectory prediction
    • To Planning
      • Vector map: Used for behavior planning
    • To API layer
      • Projection information: Used to convert localization results from local coordinate system to geodetic coordinate system
    "},{"location":"design/autoware-architecture/map/#5-map-specification","title":"5. Map Specification","text":""},{"location":"design/autoware-architecture/map/#point-cloud-map","title":"Point Cloud Map","text":"

    The point cloud map must be supplied as a file with the following requirements:

    • The point cloud map must be projected on the same coordinate defined in map_projection_loader in order to be consistent with the lanelet2 map and other packages that convert between local and geodetic coordinates. For more information, please refer to the readme of map_projection_loader.
    • It must be in the PCD (Point Cloud Data) file format, but can be a single PCD file or divided into multiple PCD files.
    • Each point in the map must contain X, Y, and Z coordinates.
    • An intensity or RGB value for each point may be optionally included.
    • It must cover the entire operational area of the vehicle. It is also recommended to include an additional buffer zone according to the detection range of sensors attached to the vehicle.
    • Its resolution should be at least 0.2 m to yield reliable localization results.
    • It can be in either local or global coordinates, but must be in global coordinates (georeferenced) to use GNSS data for localization.

    For more details on divided map format, please refer to the readme of map_loader in Autoware Universe.
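
    For example, a raw scan-accumulated map can be brought to roughly the recommended 0.2 m resolution with a voxel-grid downsample; the sketch below assumes the Open3D Python package and uses a hypothetical file name.

```python
import open3d as o3d

# Load a (possibly dense) point cloud map and downsample it to ~0.2 m resolution.
pcd = o3d.io.read_point_cloud("raw_map.pcd")          # hypothetical input file
downsampled = pcd.voxel_down_sample(voxel_size=0.2)   # one point per 0.2 m voxel
o3d.io.write_point_cloud("pointcloud_map.pcd", downsampled)
print(f"{len(pcd.points)} -> {len(downsampled.points)} points")
```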

    Note

    Three global coordinate systems are currently supported by Autoware: the Military Grid Reference System (MGRS), Universal Transverse Mercator (UTM), and the Japan Rectangular Coordinate System. However, MGRS is the preferred coordinate system for georeferenced maps. In a map with the MGRS coordinate system, the X and Y coordinates of each point represent the point's location within the 100,000-meter square, while the Z coordinate represents the point's elevation.

    "},{"location":"design/autoware-architecture/map/#vector-map","title":"Vector Map","text":"

    The vector map must be supplied as a file with the following requirements:

    • It must be in Lanelet2 format, with additional modifications required by Autoware.
    • It must contain the shape and position information of lanes, traffic lights, stop lines, crosswalks, parking spaces, and parking lots.
    • Except at the beginning or end of a road, each lanelet in the map must be correctly connected to its predecessor, successors, left neighbor, and right neighbor.
    • Each lanelet in the map must contain traffic rule information including its speed limit, right of way, traffic direction, associated traffic lights, stop lines, and traffic signs.
    • It must cover the entire operational area of the vehicle.

    For detailed specifications on Vector Map creation, please refer to Vector Map Creation Requirement Specification document.

    "},{"location":"design/autoware-architecture/map/#projection-information","title":"Projection Information","text":"

    The projection information must be supplied as a file with the following requirements:

    • It must be in YAML format and is provided to map_projection_loader in the current Autoware Universe implementation.
    • The file must contain the following information:
      • The name of the projection method used to convert between local and global coordinates
      • The parameters of the projection method (depending on the projection method)

    For further information, please refer to the readme of map_projection_loader in Autoware Universe.
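
    To make the role of this file concrete, the sketch below parses a hypothetical projection YAML and converts a geodetic coordinate to local planar coordinates with pyproj. The exact keys recognized by map_projection_loader should be taken from its readme, so the field names here are illustrative only.

```python
import yaml
from pyproj import Transformer

projection_yaml = """
projector_type: TransverseMercator   # illustrative; see the map_projection_loader readme
map_origin:
  latitude: 35.6762
  longitude: 139.6503
"""

config = yaml.safe_load(projection_yaml)
lat0 = config["map_origin"]["latitude"]
lon0 = config["map_origin"]["longitude"]

# Build a local transverse-Mercator projection centered on the map origin.
transformer = Transformer.from_crs(
    "EPSG:4326",
    f"+proj=tmerc +lat_0={lat0} +lon_0={lon0} +k=1 +units=m",
    always_xy=True,
)

# Convert a geodetic coordinate (lon, lat) to local map coordinates (x, y) in meters.
x, y = transformer.transform(139.6510, 35.6770)
print(x, y)
```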

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/","title":"Vector Map creation requirement specifications","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/#vector-map-creation-requirement-specifications","title":"Vector Map creation requirement specifications","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/#overview","title":"Overview","text":"

    Autoware relies on high-definition point cloud maps and vector maps of the driving environment to perform various tasks such as localization, route planning, traffic light detection, and predicting the trajectories of pedestrians and other vehicles.

    A vector map contains highly accurate information about a road network, lane geometry, and traffic lights. It is required for route planning, traffic light detection, and predicting the trajectories of other vehicles and pedestrians.

    Vector Map uses lanelet2_extension, which is based on the lanelet2 format and extended for Autoware.

    The primitives (basic components) used in Vector Map are explained in Web.Auto Docs - What is Lanelet2. The following Vector Map creation requirement specifications are written on the premise of this knowledge.

    This specification is a set of requirements for the creation of Vector Map(s) to ensure that Autoware drives safely and autonomously as intended by the user. To create a Lanelet2 format .osm file, please refer to Creating a vector map.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/#handling-of-the-requirement-specification","title":"Handling of the Requirement Specification","text":"

    Which requirements apply entirely depends on the configuration of the Autoware system on a vehicle. Before creating a Vector Map, it is necessary to clearly determine in advance how you want the vehicle with the implemented system to behave in various environments.

    Next, you must comply with the laws of the country where the autonomous driving vehicle will be operating. It is your responsibility to choose which of the following requirements to apply according to the laws.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/#caution","title":"Caution","text":"
    • The examples of the road signs and road surface markings are used in Japan. Please replace them with those used in your respective countries.
    • The values for range and distance indicated are minimum values. Please determine values that comply with the laws of your country. Furthermore, these minimum values may change depending on the maximum velocity of the autonomous driving vehicle.
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/#list-of-requirement-specifications","title":"List of Requirement Specifications","text":"Category ID Requirements Category Lane vm-01-01 Lanelet basics vm-01-02 Allowance for lane changes vm-01-03 Linestring sharing vm-01-04 Sharing of the centerline of lanes for opposing traffic vm-01-05 Lane geometry vm-01-06 Line position (1) vm-01-07 Line position (2) vm-01-08 Line position (3) vm-01-09 Speed limits vm-01-10 Centerline vm-01-11 Centerline connection (1) vm-01-12 Centerline connection (2) vm-01-13 Roads with no centerline (1) vm-01-14 Roads with no centerline (2) vm-01-15 Road shoulder vm-01-16 Road shoulder Linestring sharing vm-01-17 Side strip vm-01-18 Side strip Linestring sharing vm-01-19 Walkway Category Stop Line vm-02-01 Stop line alignment vm-02-02 Stop sign Category Intersection vm-03-01 Intersection criteria vm-03-02 Lanelet's turn direction and virtual vm-03-03 Lanelet width in the intersection vm-03-04 Lanelet creation in the intersection vm-03-05 Lanelet division in the intersection vm-03-06 Guide lines in the intersection vm-03-07 Multiple lanelets in the intersection vm-03-08 Intersection Area range vm-03-09 Range of Lanelet in the intersection vm-03-10 Right of way (with signal) vm-03-11 Right of way (without signal) vm-03-12 Right of way supplements vm-03-13 Merging from private area, sidewalk vm-03-14 Road marking vm-03-15 Exclusive bicycle lane Category Traffic Light vm-04-01 Traffic light basics vm-04-02 Traffic light position and size vm-04-03 Traffic light lamps Category Crosswalk vm-05-01 Crosswalks across the road vm-05-02 Crosswalks with pedestrian signals vm-05-03 Deceleration for safety at crosswalks vm-05-04 Fences Category Area vm-06-01 Buffer Zone vm-06-02 No parking signs vm-06-03 No stopping signs vm-06-04 No stopping sections vm-06-05 Detection area Category Others vm-07-01 Vector Map creation range vm-07-02 Range of detecting pedestrians who enter the road vm-07-03 Guardrails, guard pipes, fences vm-07-04 Ellipsoidal height"},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/","title":"Category area","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#categoryarea","title":"Category:Area","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#vm-06-01-buffer-zone","title":"vm-06-01 Buffer Zone","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#detail-of-requirements","title":"Detail of requirements","text":"

    Create a Polygon (type:hatched_road_markings) when a Buffer Zone (also known as a zebra zone) is painted on the road surface.

    • If the Buffer Zone is next to a Lanelet, share Points between them.
    • Overlap the Buffer Zone's Polygon with the intersection's Polygon (intersection_area) if the Buffer Zone is located at an intersection.
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#behavior-of-autoware","title":"Behavior of Autoware\uff1a","text":"

    In order to avoid obstacles, Autoware regards the Buffer Zone as a drivable area and proceeds through it.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#caution","title":"Caution","text":"
    • Vehicles are not allowed to pass through safety areas. It's important to differentiate between Buffer Zones and safety areas.
    • Do not create a Polygon for the Buffer Zone in areas where static objects like poles are present and vehicles cannot pass, even if a Buffer Zone is painted on the surface. Buffer Zones should be established only in areas where vehicle passage is feasible.
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#preferred-vector-map","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#incorrect-vector-map","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#related-autoware-module","title":"Related Autoware module","text":"
    • Static Avoidance - Autoware Universe Documentation
    • Dynamic Avoidance - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#vm-06-02-no-parking-signs","title":"vm-06-02 No parking signs","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#detail-of-requirements_1","title":"Detail of requirements","text":"

    When creating a Vector Map, you can prohibit parking in specific areas, while temporary stops are permitted.

    Create a reference from a Lanelet (subtype:road) to a Regulatory Element (subtype:no_parking_area), and have this Regulatory Element refer to a Polygon (type:no_parking_area).

    Refer to Web.Auto Documentation - Creation of No Parking Area for the method of creation in Vector Map Builder.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#behavior-of-autoware_1","title":"Behavior of Autoware\uff1a","text":"

    Since no_parking_area does not allow for setting a goal, Autoware cannot park the vehicle there.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#preferred-vector-map_1","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#incorrect-vector-map_1","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#related-autoware-module_1","title":"Related Autoware module","text":"
    • Goal Planner design - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#vm-06-03-no-stopping-signs","title":"vm-06-03 No stopping signs","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#detail-of-requirements_2","title":"Detail of requirements","text":"

    When creating a Vector Map, you can prohibit stopping in specific areas, while temporary stops are permitted.

    Create a reference from a Lanelet (subtype:road) to a Regulatory Element (subtype:no_parking_area), and have this Regulatory Element refer to a Polygon (type:no_parking_area).

    Refer to Web.Auto Documentation - Creation of No Parking Area for the method of creation in Vector Map Builder.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#behavior-of-autoware_2","title":"Behavior of Autoware\uff1a","text":"

    Since no_parking_area does not allow for setting a goal, Autoware cannot park the vehicle there.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#preferred-vector-map_2","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#incorrect-vector-map_2","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#related-autoware-module_2","title":"Related Autoware module","text":"
    • Goal Planner design - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#vm-06-04-no-stopping-sections","title":"vm-06-04 No stopping sections","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#detail-of-requirements_3","title":"Detail of requirements","text":"

    While vehicles may stop on the road for signals or traffic congestion, you can prohibit any form of stopping (temporary stopping, parking, idling) in specific areas when creating a Vector Map.

    Create a reference from a Lanelet (subtype:road) to a Regulatory Element (subtype:no_stopping_area), and have this Regulatory Element refer to a Polygon (type:no_stopping_area).

    Refer to Web.Auto Documentation - Creation of No Stopping Area for the method of creation in Vector Map Builder.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#behavior-of-autoware_3","title":"Behavior of Autoware\uff1a","text":"

    The vehicle does not make temporary stops in no_stopping_area. Since goals cannot be set in no_stopping_area, the vehicle cannot park there.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#preferred-vector-map_3","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#incorrect-vector-map_3","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#related-autoware-module_3","title":"Related Autoware module","text":"
    • No Stopping Area design - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#vm-06-05-detection-area","title":"vm-06-05 Detection area","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#detail-of-requirements_4","title":"Detail of requirements","text":"

    Autoware identifies obstacles by detecting point clouds in the Detection Area, stops at the stop line, and maintains that stop until the obstacles move away. To enable this response, incorporate the Detection Area element into the Vector Map.

    Create a reference from a Lanelet (subtype:road) to a Regulatory Element (subtype:detection_area), and have this Regulatory Element refer to a Polygon (type:detection_area) and a Linestring (type:stop_line).

    Refer to Web.Auto Documentation - Creation of Detection Area for the method of creation in Vector Map Builder.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#preferred-vector-map_4","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#incorrect-vector-map_4","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/#related-autoware-module_4","title":"Related Autoware module","text":"
    • Detection Area - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/","title":"Category crosswalk","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#categorycrosswalk","title":"Category:Crosswalk","text":"

    There are two types of requirements for crosswalks, and they can both be applicable to a single crosswalk.

    • vm-05-01 : Crosswalks across the road
    • vm-05-02 : Crosswalks with pedestrian signals

    In the case of crosswalks at intersections, they must also meet the requirements of the intersection.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#vm-05-01-crosswalks-across-the-road","title":"vm-05-01 Crosswalks across the road","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#detail-of-requirements","title":"Detail of requirements","text":"

    Necessary requirements for creation:

    1. Create a Lanelet for the crosswalk (subtype:crosswalk).
    2. If there is a stop line before the crosswalk, create a Linestring (type:stop_line). Create stop lines for the opposing traffic lane in the same way.
    3. Create a Polygon (type:crosswalk_polygon) to cover the crosswalk.
    4. The Lanelet of the road refers to the regulatory element (subtype:crosswalk), and the regulatory element refers to the created Lanelet, Linestring, and Polygon.
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#supplemental-information","title":"Supplemental information","text":"
    • Link the regulatory element to the lanelet(s) of the road that intersects with the crosswalk.
    • The stop lines linked to the regulatory element do not necessarily have to exist on the road Lanelets linked with the regulatory element.
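
    A small consistency check over a finished map can help verify these requirements. The sketch below uses the lanelet2 Python bindings (assuming they are installed) to count crosswalk lanelets and crosswalk regulatory elements; the origin coordinates and file name are placeholders.

```python
import lanelet2
from lanelet2.io import Origin
from lanelet2.projection import UtmProjector

# Load the vector map with a projector; the origin values are placeholders.
projector = UtmProjector(Origin(35.0, 139.0))
lanelet_map = lanelet2.io.load("lanelet2_map.osm", projector)

crosswalk_lanelets = [
    ll for ll in lanelet_map.laneletLayer
    if "subtype" in ll.attributes and ll.attributes["subtype"] == "crosswalk"
]
crosswalk_regelems = [
    re for re in lanelet_map.regulatoryElementLayer
    if "subtype" in re.attributes and re.attributes["subtype"] == "crosswalk"
]

print(f"crosswalk lanelets: {len(crosswalk_lanelets)}")
print(f"crosswalk regulatory elements: {len(crosswalk_regelems)}")
```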
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#behavior-of-autoware","title":"Behavior of Autoware\uff1a","text":"

    When pedestrians or cyclists are on the crosswalk, Autoware will come to a stop before the stop line and wait for them to pass. Once they have cleared the area, Autoware will begin to move forward.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#preferred-vector-map","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#incorrect-vector-map","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#related-autoware-module","title":"Related Autoware module","text":"
    • Crosswalk - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#vm-05-02-crosswalks-with-pedestrian-signals","title":"vm-05-02 Crosswalks with pedestrian signals","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#detail-of-requirements_1","title":"Detail of requirements","text":"

    Necessary requirements for creation:

    • Create a Lanelet (subtype:crosswalk, participant:pedestrian).
    • Create a Traffic Light Linestring. If multiple traffic lights exist, create multiple Linestrings.
      • Linestring
        • type:traffic_light
        • subtype:red_green
        • height:value
    • Ensure the crosswalk's Lanelet references a Regulatory Element (subtype:traffic_light). Also, ensure the Regulatory Element references Linestring (type:traffic_light).

    Refer to vm-04-02 for more about the traffic light object.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#preferred-vector-map_1","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#incorrect-vector-map_1","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#related-autoware-module_1","title":"Related Autoware module","text":"
    • Crosswalk - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#vm-05-03-deceleration-for-safety-at-crosswalks","title":"vm-05-03 Deceleration for safety at crosswalks","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#detail-of-requirements_2","title":"Detail of requirements","text":"

    To ensure a constant deceleration to a safe speed when traversing a crosswalk, add the following tags to the crosswalk's Lanelet (subtype:crosswalk):

    • safety_slow_down_speed [m/s]: The maximum velocity while crossing.
    • safety_slow_down_distance [m]: The starting point of the area where the maximum speed applies, measured from the vehicle's front bumper to the crosswalk.
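
    To illustrate how these two tags interact, the following sketch computes where the vehicle must start decelerating so that it reaches safety_slow_down_speed by the time its front bumper is safety_slow_down_distance before the crosswalk; the numbers are arbitrary examples.

```python
def decel_start_distance(current_speed: float,
                         safety_slow_down_speed: float,
                         safety_slow_down_distance: float,
                         decel: float = 1.0) -> float:
    """Distance from the crosswalk (front bumper) at which braking must begin
    so the vehicle is at the slow-down speed when it enters the slow-down area."""
    # v^2 = v0^2 - 2*a*d  ->  d = (v0^2 - v^2) / (2*a)
    braking = (current_speed ** 2 - safety_slow_down_speed ** 2) / (2.0 * decel)
    return safety_slow_down_distance + max(braking, 0.0)


# Example: 8 m/s approach, slow to 3 m/s starting 5 m before the crosswalk, 1 m/s^2 braking.
print(decel_start_distance(8.0, 3.0, 5.0))  # -> 32.5 m
```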
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#preferred-vector-map_2","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#incorrect-vector-map_2","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#related-autoware-module_2","title":"Related Autoware module","text":"
    • Crosswalk - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#vm-05-04-fences","title":"vm-05-04 Fences","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#detail-of-requirements_3","title":"Detail of requirements","text":"

    Autoware detects pedestrians and cyclists crossing the crosswalk, as well as those that might cross. However, nearby areas where many people move, such as fenced kindergartens, playgrounds, or parks, can affect behavior at the crosswalk because the predicted paths of pedestrians and cyclists from these areas may overlap it.

    Surround such areas, which are not connected to the crosswalk, with a Linestring (type:fence); this Linestring does not need to be linked to any other object.

    However, if there is a guardrail, wall, or fence between the road and the sidewalk, with another fence behind it, the second fence may be omitted. Areas around crosswalks are not subject to this omission and must always be mapped.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#preferred-vector-map_3","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#incorrect-vector-map_3","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/#related-autoware-module_3","title":"Related Autoware module","text":"
    • map_based_prediction - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/","title":"Category intersection","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#categoryintersection","title":"Category:Intersection","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#vm-03-01-intersection-criteria","title":"vm-03-01 Intersection criteria","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#detail-of-requirements","title":"Detail of requirements","text":"

    Essential criteria for the construction of an intersection:

    • Encircle the drivable area at the intersection with a Polygon (type:intersection_area).
    • Add turn_direction to all Lanelets in the intersection.
    • Ensure that all lanelets in the intersection are tagged:
      • key:intersection_area
      • value: Polygon's ID
    • Attach right_of_way to the necessary Lanelets.
    • In addition, create traffic lights, crosswalks, and stop lines appropriately.

    For detailed information, refer to the respective requirements on this page.
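
    Put together, a single Lanelet inside a signalized intersection ends up carrying roughly the tags and references sketched below (placeholder IDs; the individual items are detailed in vm-03-02, vm-03-08, and vm-03-10):

    ```python
    # Sketch with placeholder IDs: one right-turn Lanelet inside the intersection.
    intersection_area_polygon_id = 500  # Polygon (type:intersection_area) encircling the drivable area
    in_intersection_lanelet_tags = {
        "type": "lanelet",
        "subtype": "road",
        "turn_direction": "right",                                # vm-03-02
        "intersection_area": str(intersection_area_polygon_id),   # vm-03-01 / vm-03-08
    }
    # The Lanelet additionally references a right_of_way Regulatory Element (vm-03-10)
    # and, where present, traffic_light / crosswalk / stop line objects.
    ```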

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#autoware-modules","title":"Autoware modules","text":"
    • The requirements for turn_direction and right_of_way are related to the intersection module, which plans velocity to avoid collisions with other vehicles, taking traffic light instructions into account.
    • The requirements for intersection_area are related to the avoidance module, which plans avoidance paths that may veer out of the lane inside the intersection.
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#preferred-vector-map","title":"Preferred vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#incorrect-vector-map","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#related-autoware-module","title":"Related Autoware module","text":"
    • Intersection - Autoware Universe Documentation
    • Blind Spot design - Autoware Universe Documentation
    • Static Avoidance - Autoware Universe Documentation
    • Dynamic Avoidance - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#vm-03-02-lanelets-turn-direction-and-virtual-linestring","title":"vm-03-02 Lanelet's turn direction and virtual linestring","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#detail-of-requirements_1","title":"Detail of requirements","text":"

    Add the following tag to the Lanelets in the intersection:

    • turn_direction : straight
    • turn_direction : left
    • turn_direction : right

    Also, if the left or right Linestrings of Lanelets at the intersection lack road paintings, designate these as type:virtual.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#behavior-of-autoware","title":"Behavior of Autoware\uff1a","text":"

    Autoware starts flashing the turn signals (blinkers) 30 meters (the default) before a turn_direction-tagged Lanelet. To change this timing, add the following tag:

    • key: turn_signal_distance
    • value: numerical value (m)
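
    For example (an assumed value; the default is the 30 m mentioned above), tagging a Lanelet as follows makes the blinker start 50 m before it:

    ```python
    # Assumed example value; turn_signal_distance overrides the default 30 m.
    lanelet_tags = {"turn_direction": "left", "turn_signal_distance": "50"}  # value in meters
    lead_distance = float(lanelet_tags.get("turn_signal_distance", "30"))
    print(f"Turn signal starts {lead_distance:.0f} m before the turning Lanelet")
    ```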

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#preferred-vector-map_1","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#incorrect-vector-map_1","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#related-autoware-module_1","title":"Related Autoware module","text":"
    • Intersection - Autoware Universe Documentation
    • Blind Spot design - Autoware Universe Documentation
    • virtual_traffic_light in behavior_velocity_planner - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#vm-03-03-lanelet-width-in-the-intersection","title":"vm-03-03 Lanelet width in the intersection","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#detail-of-requirements_2","title":"Detail of requirements\uff1a","text":"

    Lanelets in the intersection should have a consistent width. Additionally, draw Linestrings with smooth curves.

    The shape of this curve must be determined by the Vector Map creator.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#preferred-vector-map_2","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#incorrect-vector-map_2","title":"Incorrect vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#vm-03-04-lanelet-creation-in-the-intersection","title":"vm-03-04 Lanelet creation in the intersection","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#detail-of-requirements_3","title":"Detail of requirements","text":"

    Create all Lanelets in the intersection, including lanelets not driven by the vehicle. Additionally, link stop lines and traffic lights to the Lanelets appropriately.

    Refer also to the creation scope in vm-07-01.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#behavior-of-autoware_1","title":"Behavior of Autoware","text":"

    Autoware uses lanelets to predict the movements of other vehicles and plan the vehicle's velocity accordingly. Therefore, it is necessary to create all lanelets in the intersection.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#preferred-vector-map_3","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#incorrect-vector-map_3","title":"Incorrect vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#related-autoware-module_2","title":"Related Autoware module","text":"
    • Intersection - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#vm-03-05-lanelet-division-in-the-intersection","title":"vm-03-05 Lanelet division in the intersection","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#detail-of-requirements_4","title":"Detail of requirements","text":"

    Create the Lanelets in the intersection as a single object without dividing them.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#preferred-vector-map_4","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#incorrect-vector-map_4","title":"Incorrect vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#related-autoware-module_3","title":"Related Autoware module","text":"
    • Intersection - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#vm-03-06-guide-lines-in-the-intersection","title":"vm-03-06 Guide lines in the intersection","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#detail-of-requirements_5","title":"Detail of requirements","text":"

    If there are guide lines in the intersection, draw the Lanelet following them.

    In cases where a Lanelet branches off, begin the branching at the end of the guide line. However, it is not necessary to share points or linestrings between the branching Lanelets.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#preferred-vector-map_5","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#incorrect-vector-map_5","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#related-autoware-module_4","title":"Related Autoware module","text":"
    • Intersection - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#vm-03-07-multiple-lanelets-in-the-intersection","title":"vm-03-07 Multiple lanelets in the intersection","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#detail-of-requirements_6","title":"Detail of requirements","text":"

    When connecting multiple lanes with Lanelets at an intersection, those Lanelets should be made adjacent to each other without crossing.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#preferred-vector-map_6","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#incorrect-vector-map_6","title":"Incorrect vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#related-autoware-module_5","title":"Related Autoware module","text":"
    • Intersection - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#vm-03-08-intersection-area-range","title":"vm-03-08 Intersection area range","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#detail-of-requirements_7","title":"Detail of requirements","text":"

    Encircle the intersection's drivable area with a Polygon (type:intersection_area). The boundary of this intersection's Polygon should be defined by the objects below.

    • Linestrings (subtype:road_border)
    • Straight lines at the connection points of lanelets in the intersection.
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#preferred-vector-map_7","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#incorrect-vector-map_7","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#related-autoware-module_6","title":"Related Autoware module","text":"
    • Intersection - Autoware Universe Documentation
    • Blind Spot design - Autoware Universe Documentation
    • Static Avoidance - Autoware Universe Documentation
    • Dynamic Avoidance - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#vm-03-09-range-of-lanelet-in-the-intersection","title":"vm-03-09 Range of Lanelet in the intersection","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#detail-of-requirements_8","title":"Detail of requirements","text":"

    Determine the start and end positions of lanelets in the intersection (henceforth the boundaries of lanelet connections) based on the stop line's position.

    • For cases with a painted stop line:
      • The stop line's linestring (type:stop_line) position must align with the lanelet's start.
      • Extend the lanelet's end to where the opposing lane's stop line would be.
    • Without a painted stop line:
      • Use a drawn linestring (type:stop_line) to establish positions as if there were a painted stop line.
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#preferred-vector-map_8","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#incorrect-vector-map_8","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#related-autoware-module_7","title":"Related Autoware module","text":"
    • Intersection - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#vm-03-10-right-of-way-with-signal","title":"vm-03-10 Right of way (with signal)","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#detail-of-requirements_9","title":"Detail of requirements","text":"

    Set the regulatory element 'right_of_way' for Lanelets that meet all of the following criteria:

    • Lanelets in the intersection with a turn_direction of right or left.
    • Lanelets that intersect with the vehicle's lanelet.
    • There are traffic lights at the intersection.

    Set as yield the lanelets in the intersection that cross the vehicle's lanelet and whose signals do not change at the same timing as the vehicle's. Also, if the vehicle is turning left, set the opposing vehicle's right-turn lane to yield. There is no need to set yield for lanelets where the vehicle goes straight (turn_direction:straight).
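
    In the Lanelet2 format this becomes a right_of_way Regulatory Element whose members list both the prioritized lanelet and the yielding lanelets. A sketch with placeholder IDs, assuming the standard Lanelet2 member roles right_of_way and yield:

    ```python
    # Placeholder lanelet IDs: ego's right-turn lanelet is 410; crossing lanelets
    # whose signal phase differs from ego's are 420 and 421.
    right_of_way_regulatory_element = {
        "type": "regulatory_element",
        "subtype": "right_of_way",
        "members": [
            {"role": "right_of_way", "ref": 410},  # the lanelet that holds priority
            {"role": "yield", "ref": 420},         # crossing lanelet with a different signal timing
            {"role": "yield", "ref": 421},         # another crossing lanelet to be treated as yielding
        ],
    }
    # Lanelet 410 then references this Regulatory Element.
    ```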

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#preferred-vector-map_9","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#the-vehicle-turns-left","title":"The vehicle turns left","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#the-vehicle-turns-right","title":"The vehicle turns right","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#incorrect-vector-map_9","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#related-autoware-module_8","title":"Related Autoware module","text":"
    • Intersection - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#vm-03-11-right-of-way-without-signal","title":"vm-03-11 Right of way (without signal)","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#detail-of-requirements_10","title":"Detail of requirements","text":"

    Set the regulatory element 'right_of_way' for Lanelets that meet all of the following criteria:

    • Lanelets in the intersection with a turn_direction of right or left.
    • Lanelets that intersect with the vehicle's lanelet.
    • There are no traffic lights at the intersection.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#preferred-vector-map_10","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#1-the-vehicle-on-the-priority-lane","title":"\u2460 The vehicle on the priority lane","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#2-the-vehicle-on-the-non-priority-lane","title":"\u2461 The vehicle on the non-priority lane","text":"

    A regulatory element is not necessary. However, when the vehicle goes straight, it has relative priority over other vehicles turning right from the opposing non-priority road. Therefore, settings for right_of_way and yield are required in this case.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#incorrect-vector-map_10","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#related-autoware-module_9","title":"Related Autoware module","text":"
    • Intersection - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#vm-03-12-right-of-way-supplements","title":"vm-03-12 Right of way supplements","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#detail-of-requirements_11","title":"Detail of requirements","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#why-its-necessary-to-configure-right_of_way","title":"Why it's necessary to configure 'right_of_way'","text":"

    Without the 'right_of_way' setting, Autoware interprets other lanes intersecting its path as having priority. Therefore, as long as there are other vehicles in the crossing lane, Autoware cannot enter the intersection regardless of signal indications.

    Example of a problem: even when the ego vehicle's signal permits proceeding, the vehicle waits before turning right because vehicles in the opposing lane, which intersects the right-turn lane, are stopped at a red light.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#preferred-vector-map_11","title":"Preferred vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#incorrect-vector-map_11","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#related-autoware-module_10","title":"Related Autoware module","text":"
    • Intersection - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#vm-03-13-merging-from-private-area-sidewalk","title":"vm-03-13 Merging from private area, sidewalk","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#detail-of-requirements_12","title":"Detail of requirements","text":"

    Set location=private for Lanelets within private property.

    When a road that enters or exits private property crosses a sidewalk, create a Lanelet for that sidewalk (subtype:walkway).

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#behavior-of-autoware_2","title":"Behavior of Autoware\uff1a","text":"
    • The vehicle stops temporarily before entering the sidewalk.
    • The vehicle comes to a stop before merging onto the public road.
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#preferred-vector-map_12","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#incorrect-vector-map_12","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#related-autoware-module_11","title":"Related Autoware module","text":"
    • Intersection - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#vm-03-14-road-marking","title":"vm-03-14 Road marking","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#detail-of-requirements_13","title":"Detail of requirements","text":"

    If there is a stop line ahead of the guide lines in the intersection, ensure the following:

    • Create a Lanelet for the guide lines.
    • The Lanelet for the guide lines references a Regulatory Element (subtype:road_marking).
    • The Regulatory Element refers to the stop_line's Linestring.

    Refer to Web.Auto Documentation - Creation of Regulatory Element for the method of creation in Vector Map Builder.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#preferred-vector-map_13","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#incorrect-vector-map_13","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#related-autoware-module_12","title":"Related Autoware module","text":"
    • Intersection - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#vm-03-15-exclusive-bicycle-lane","title":"vm-03-15 Exclusive bicycle lane","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#detail-of-requirements_14","title":"Detail of requirements","text":"

    If an exclusive bicycle lane exists, create a Lanelet (subtype:road) for it. The section adjoining the road should share a Linestring. For bicycle lanes at the intersection that cross the vehicle's left-turn lane, register them as yield lanes under the right_of_way regulatory element (refer to vm-03-10 and vm-03-11 for right_of_way).

    In addition, set lane_change = no as an optional tag.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#behavior-of-autoware_3","title":"Behavior of Autoware\uff1a","text":"

    The blind spot (entanglement check) feature checks the Lanelet (subtype:road) and decides whether the vehicle can proceed.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#preferred-vector-map_14","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#incorrect-vector-map_14","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/#related-autoware-module_13","title":"Related Autoware module","text":"
    • Blind Spot design - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/","title":"Category lane","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#categorylane","title":"Category:Lane","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#vm-01-01-lanelet-basics","title":"vm-01-01 Lanelet basics","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#detail-of-requirements","title":"Detail of requirements","text":"

    The road's Lanelets must comply with the following requirements.

    • subtype:road
    • location:urban, for public roads
    • Align the Lanelet's direction with the direction of vehicle travel. (You can visualize lanelet directions as arrows with Vector Map Builder.)
    • Set whether lane changes are allowed or not, according to vm-01-02.
    • Set the Linestring IDs for the Lanelet's left_bound and right_bound respectively. See vm-01-03.
    • tag: one_way=yes. Autoware currently does not support one_way=no.
    • Connect the Lanelet to another Lanelet, except at the start or end.
    • Position the points (x, y, z) within the Lanelet to align with the PCD Map, ensuring accuracy not only laterally but also in elevation. The height of a Point should be based on the ellipsoidal height (WGS84). Refer to vm-07-04.
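
    A typical public-road Lanelet therefore carries tags along the following lines (a sketch; whether lane_change-related Linestring tags and speed_limit apply is governed by vm-01-02 and vm-01-09):

    ```python
    # Sketch of a typical public-road Lanelet's tags (values are examples).
    road_lanelet_tags = {
        "type": "lanelet",
        "subtype": "road",
        "location": "urban",
        "one_way": "yes",   # one_way=no is currently unsupported by Autoware
    }
    # left_bound / right_bound reference the shared Linestring IDs described in vm-01-03.
    ```
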
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#preferred-vector-map","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#vm-01-02-allowance-for-lane-changes","title":"vm-01-02 Allowance for lane changes","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#detail-of-requirements_1","title":"Detail of requirements","text":"

    Add a tag to the Lanelet's Linestring indicating lane change permission or prohibition.

    • Permit lane_change=yes
    • Prohibit lane_change=no

    Set the Linestring subtype according to the type of line.

    • solid
    • dashed
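
    For example (a sketch; line_thin as the Linestring type is an assumption for a painted lane line), the boundary between two same-direction lanes where lane changes are allowed could be tagged as:

    ```python
    # Sketch of a lane-boundary Linestring's tags (vm-01-02).
    boundary_linestring_tags = {
        "type": "line_thin",   # assumed type for a painted lane marking
        "subtype": "dashed",   # matches the painted line: "solid" or "dashed"
        "lane_change": "yes",  # or "no" to prohibit lane changes across this line
    }
    ```
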
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#referenced-from-japans-road-traffic-law","title":"Referenced from Japan's Road Traffic Law","text":"
    • White dashed lines : indicate that lane changes and overtaking are permitted.
    • White solid lines : indicate that changing lanes and overtaking are allowed.
    • Yellow solid lines : mean no lane changes are allowed.
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#related-autoware-module","title":"Related Autoware module","text":"
    • Lane Change design - Autoware Universe Documentation
    • Static Avoidance - Autoware Universe Documentation
    • Dynamic Avoidance - Autoware Universe Documentation
    • Out of lane design - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#vm-01-03-linestring-sharing","title":"vm-01-03 Linestring sharing","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#detail-of-requirements_2","title":"Detail of requirements","text":"

    Share the Linestring when creating Lanelets that are physically adjacent to others.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#behavior-of-autoware","title":"Behavior of Autoware","text":"

    If the Lanelet adjacent to the one the vehicle is driving on shares a Linestring, the following behaviors become possible:

    • The vehicle moves out of its lane to avoid obstacles.
    • The vehicle takes a curve while slightly extending out of the lane.
    • Lane changes

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#preferred-vector-map_1","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#incorrect-vector-map","title":"Incorrect vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#related-autoware-module_1","title":"Related Autoware module","text":"
    • Lane Change design - Autoware Universe Documentation
    • Static Avoidance - Autoware Universe Documentation
    • Dynamic Avoidance - Autoware Universe Documentation
    • Out of lane design - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#vm-01-04-sharing-of-the-centerline-of-lanes-for-opposing-traffic","title":"vm-01-04 Sharing of the centerline of lanes for opposing traffic","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#detail-of-requirements_3","title":"Detail of requirements","text":"

    When the vehicle's lanelet and the opposing lanelet physically touch, the road center line's Linestring ID must be shared between these two Lanelets. For that purpose, the lengths of those two Lanelets must match.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#behavior-of-autoware_1","title":"Behavior of Autoware\uff1a","text":"

    Obstacle avoidance across the opposing lane is possible.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#preferred-vector-map_2","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#incorrect-vector-map_1","title":"Incorrect vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#vm-01-05-lane-geometry","title":"vm-01-05 Lane geometry","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#detail-of-requirements_4","title":"Detail of requirements","text":"

    The geometry of the road lanelet needs to comply with the following:

    • The left and right Linestrings must follow the road's boundary lines.
    • The lines of a Lanelet, which join with lanelets ahead and behind it, must form straight lines.
    • Ensure the outline is smooth and not jagged or bumpy, except for L-shaped cranks.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#preferred-vector-map_3","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#incorrect-vector-map_2","title":"Incorrect vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#vm-01-06-line-position-1","title":"vm-01-06 Line position (1)","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#detail-of-requirements_5","title":"Detail of requirements","text":"

    Ensure the road's center line Linestring is located in the exact middle of the road markings.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#preferred-vector-map_4","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#incorrect-vector-map_3","title":"Incorrect vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#vm-01-07-line-position-2","title":"vm-01-07 Line position (2)","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#detail-of-requirements_6","title":"Detail of requirements","text":"

    When painted lines exist on the outer side of the road, place the Linestring at the center of those markings.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#preferred-vector-map_5","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#incorrect-vector-map_4","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#vm-01-08-line-position-3","title":"vm-01-08 Line position (3)","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#detail-of-requirements_7","title":"Detail of requirements","text":"

    If there are no painted lines on the outer side of the road, position the Linestring 0.5 m from the road's edge.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#caution","title":"Caution","text":"

    This 0.5 m distance depends on the laws of your country.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#preferred-vector-map_6","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#incorrect-vector-map_5","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#vm-01-09-speed-limits","title":"vm-01-09 Speed limits","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#detail-of-requirements_8","title":"Detail of requirements","text":"

    In the following cases, add a speed limit (tag:speed_limit) to the Lanelet (subtype:road) the vehicle is driving on, in km/h.

    • A speed limit road sign exists.
    • A speed limit is desired for other reasons, for example, on narrow roads.

    Note that the following is achieved through Autoware's settings and behavior.

    • Vehicle's maximum velocity
    • Speed adjustment at places requiring deceleration, like curves and downhill areas.
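
    Note that speed_limit is written in km/h, while for example safety_slow_down_speed (vm-05-03) is in m/s; a quick conversion check:

    ```python
    # speed_limit is stored in km/h; divide by 3.6 to compare with m/s values.
    speed_limit_kmh = 40.0
    speed_limit_mps = speed_limit_kmh / 3.6
    print(f"{speed_limit_kmh:.0f} km/h = {speed_limit_mps:.2f} m/s")
    ```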

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#preferred-vector-map_7","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#incorrect-vector-map_6","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#vm-01-10-centerline","title":"vm-01-10 Centerline","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#detail-of-requirements_9","title":"Detail of requirements","text":"

    Autoware is designed to move through the midpoint calculated from a Lanelet's left and right Linestrings.

    Create a centerline for the Lanelet when there is a need to shift the driving position to the left or right due to certain circumstances, ensuring the centerline has a smooth shape for drivability.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#caution_1","title":"Caution","text":"

    The Lanelet 'centerline' here is a distinct concept from the lane dividing line painted at the center of the road.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#preferred-vector-map_8","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#incorrect-vector-map_7","title":"Incorrect vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#vm-01-11-centerline-connection-1","title":"vm-01-11 Centerline connection (1)","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#detail-of-requirements_10","title":"Detail of requirements","text":"

    When center lines have been added to several Lanelets, they should be connected.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#preferred-vector-map_9","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#incorrect-vector-map_8","title":"Incorrect vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#vm-01-12-centerline-connection-2","title":"vm-01-12 Centerline connection (2)","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#detail-of-requirements_11","title":"Detail of requirements","text":"

    If a Lanelet with an added centerline is connected to Lanelets without one, ensure the start and end points of the added centerline are positioned at the Lanelet's center. Ensure the centerline has a smooth shape for drivability.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#preferred-vector-map_10","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#incorrect-vector-map_9","title":"Incorrect vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#vm-01-13-roads-with-no-centerline-1","title":"vm-01-13 Roads with no centerline (1)","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#detail-of-requirements_12","title":"Detail of requirements","text":"

    When a road lacks a central line but is wide enough for one's vehicle and oncoming vehicles to pass each other, Lanelets should be positioned next to each other at the center of the road.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#preferred-vector-map_11","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#incorrect-vector-map_10","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#vm-01-14-roads-with-no-centerline-2","title":"vm-01-14 Roads with no centerline (2)","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#detail-of-requirements_13","title":"Detail of requirements","text":"

    Apply if all the next conditions are satisfied:

    • The road is a single lane without a central line and is too narrow for one's vehicle and an oncoming vehicle to pass each other.
    • It is an environment where no vehicles other than the autonomous vehicle enter this road.
    • The plan involves autonomous vehicles operating back and forth on this road.

    Requirement for Vector Map creation:

    • Stack two Lanelets on top of each other, one for each direction of travel.
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#supplementary-information","title":"Supplementary information","text":"
    • The application of this case depends on local operational policies and vehicle specifications, and should be determined in discussion with the map requestor.
    • The current Autoware does not possess the capability to pass oncoming vehicles in shared lanes.
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#preferred-vector-map_12","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#incorrect-vector-map_11","title":"Incorrect vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#vm-01-15-road-shoulder","title":"vm-01-15 Road Shoulder","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#detail-of-requirements_14","title":"Detail of requirements","text":"

    If there is a shoulder next to the road, place the lanelet for the road shoulder (subtype:road_shoulder). However, it is not necessary to create this within intersections.

    The road shoulder's Lanelet and sidewalk's Lanelet share the Linestring (subtype:road_border).

    There must not be a road shoulder Lanelet next to another road shoulder Lanelet.

    A road Lanelet must be next to the shoulder Lanelet.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#behavior-of-autoware_2","title":"Behavior of Autoware","text":"
    • Autoware can start from the shoulder and also reach the shoulder.
    • The margin for moving to the edge upon arrival is determined by the Autoware parameter margin_from_boundary. It does not need to be considered when creating the Vector Map.
    • Autoware does not park on the road shoulder lanelet if it overlaps with any of the following:
      • A Polygon marked as no_parking_area
      • A Polygon marked as no_stopping_area
      • Areas near intersection and in the intersection
      • Crosswalk

    tag:lane_change=yes is not required on the Linestring marking the boundary of the shoulder.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#preferred-vector-map_13","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#incorrect-vector-map_12","title":"Incorrect vector map","text":"

    Do not create a road shoulder Lanelet for roads without a shoulder.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#related-autoware-module_2","title":"Related Autoware module","text":"
    • Static Avoidance - Autoware Universe Documentation
    • Dynamic Avoidance - Autoware Universe Documentation
    • Goal Planner design - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#vm-01-16-road-shoulder-linestring-sharing","title":"vm-01-16 Road shoulder Linestring sharing","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#detail-of-requirements_15","title":"Detail of requirements","text":"

    The Lanelets for the road shoulder and the adjacent road should have a common Linestring.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#preferred-vector-map_14","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#incorrect-vector-map_13","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#related-autoware-module_3","title":"Related Autoware module","text":"
    • Static Avoidance - Autoware Universe Documentation
    • Dynamic Avoidance - Autoware Universe Documentation
    • Goal Planner design - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#vm-01-17-side-strip","title":"vm-01-17 Side strip","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#detail-of-requirements_16","title":"Detail of requirements","text":"

    Place a Lanelet (subtype:pedestrian_lane) on the side strip. However, it is not necessary to create this within intersections.

    The side strip's Lanelet must have the Linestring (subtype:road_border) on its outer side.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#preferred-vector-map_15","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#incorrect-vector-map_14","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#vm-01-18-side-strip-linestring-sharing","title":"vm-01-18 Side strip Linestring sharing","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#detail-of-requirements_17","title":"Detail of requirements","text":"

    The Lanelet for the side strip and the adjacent road Lanelet should have a common Linestring.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#preferred-vector-map_16","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#incorrect-vector-map_15","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#vm-01-19-sidewalk","title":"vm-01-19 sidewalk","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#detail-of-requirements_18","title":"Detail of requirements","text":"

    Place a sidewalk Lanelet (subtype:walkway) where necessary. However, create one only where a crosswalk crosses the vehicle's lane; if there is no such crossing, do not create it.

    The Lanelet (subtype:walkway) should cover the area intersecting the vehicle's lane, plus an additional 3 meters before and after it.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#preferred-vector-map_17","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#incorrect-vector-map_16","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/#related-autoware-module_4","title":"Related Autoware module","text":"
    • Intersection - Autoware Universe Documentation
    • Walkway design- Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_others/","title":"Category others","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_others/#categoryothers","title":"Category:Others","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_others/#vm-07-01-vector-map-creation-range","title":"vm-07-01 Vector Map creation range","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_others/#detail-of-requirements","title":"Detail of requirements","text":"

    Create all Lanelets within the sensor range of the vehicle, even those on roads not driven by the vehicle, including Lanelets that intersect with the vehicle's Lanelet.

    However, if either of the following conditions is met, it is sufficient to create lanelets over a range of at least 10 meters.

    • The vehicle drives on the priority lane through the intersection without traffic lights.
    • The vehicle drives straight or turns left through the intersection with traffic lights.

    Refer to vm-03-04 for more about intersection requirements.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_others/#behavior-of-autoware","title":"Behavior of Autoware\uff1a","text":"

    Autoware detects approaching vehicles and plans a route to avoid collisions.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_others/#caution","title":"Caution","text":"

    Check the range of sensors on your vehicle.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_others/#preferred-vector-map","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_others/#incorrect-vector-map","title":"Incorrect vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_others/#vm-07-02-range-of-detecting-pedestrians-who-enter-the-road","title":"vm-07-02 Range of detecting pedestrians who enter the road","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_others/#detail-of-requirements_1","title":"Detail of requirements","text":"

    Autoware's feature for detecting sudden entries from the roadside tracks pedestrians and cyclists outside the road boundary and decelerates to prevent collisions when they are likely to enter the road.

    Creating a Linestring of one of the following types tells Autoware to treat objects positioned outside that line as posing no risk of entering the road.

    • guard_rail
    • wall
    • fence
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_others/#preferred-vector-map_1","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_others/#incorrect-vector-map_1","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_others/#related-autoware-module","title":"Related Autoware module","text":"
    • map_based_prediction - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_others/#vm-07-03-guardrails-guard-pipes-fences","title":"vm-07-03 Guardrails, guard pipes, fences","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_others/#detail-of-requirements_2","title":"Detail of requirements","text":"

    When creating a Linestring for guardrails or guard pipes (type: guard_rail), position it at the point where the most protruding part on the roadway side is projected vertically onto the ground.

    Follow the same position guidelines for Linestrings of fences (type:fence).

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_others/#preferred-vector-map_2","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_others/#incorrect-vector-map_2","title":"Incorrect vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_others/#related-autoware-module_1","title":"Related Autoware module","text":"
    • Drivable Area design - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_others/#vm-07-04-ellipsoidal-height","title":"vm-07-04 Ellipsoidal height","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_others/#detail-of-requirements_3","title":"Detail of requirements","text":"

    The height of a Point should be based on the ellipsoidal height (WGS84), in meters.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_others/#preferred-vector-map_3","title":"Preferred vector map","text":"

    The height of a Point is the distance from the ellipsoidal surface to the ground.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_others/#incorrect-vector-map_3","title":"Incorrect vector map","text":"

    The height of a Point is Orthometric height, the distance from the Geoid to the ground.
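
    The two height systems are related by h = H + N (ellipsoidal height = orthometric height + geoid height). A minimal check, assuming an example geoid height for the site:

    ```python
    # h = H + N: ellipsoidal height = orthometric height + geoid height.
    H_orthometric = 25.0  # [m] height above the geoid (what a leveling survey yields)
    N_geoid = 36.7        # [m] assumed example geoid height at the mapping site
    h_ellipsoidal = H_orthometric + N_geoid
    print(f"Point height to store in the map: {h_ellipsoidal:.1f} m (WGS84 ellipsoidal)")
    ```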

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_stop_line/","title":"Category stop line","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_stop_line/#categorystop-line","title":"Category:Stop Line","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_stop_line/#vm-02-01-stop-line-alignment","title":"vm-02-01 Stop line alignment","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_stop_line/#detail-of-requirements","title":"Detail of requirements","text":"

    Place the Linestring (type:stop_line) for the stop line along the near-side edge of the painted white line (the edge the vehicle reaches first).

    Refer to Web.Auto Documentation - Creation and edit of a stop point (StopPoint) for the method of creation in Vector Map Builder.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_stop_line/#preferred-vector-map","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_stop_line/#incorrect-vector-map","title":"Incorrect vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_stop_line/#vm-02-02-stop-sign","title":"vm-02-02 Stop sign","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_stop_line/#detail-of-requirements_1","title":"Detail of requirements","text":"

    Where there is no stop line on the road but a stop sign exists, place a Linestring as the stop line next to the sign.

    Create a reference from a Lanelet (subtype:road) to a Regulatory Element (subtype:traffic_sign), and have this Regulatory Element refer to a Linestring (type:stop_line) and a Linestring (type:traffic_sign, subtype:stop_sign).

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_stop_line/#preferred-vector-map_1","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_stop_line/#incorrect-vector-map_1","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_stop_line/#related-autoware-module","title":"Related Autoware module","text":"
    • Stop Line design - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_traffic_light/","title":"Category traffic light","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_traffic_light/#categorytraffic-light","title":"Category:Traffic Light","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_traffic_light/#vm-04-01-traffic-light-basics","title":"vm-04-01 Traffic light basics","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_traffic_light/#detail-of-requirements","title":"Detail of requirements","text":"

    When creating traffic lights in a vector map, meet the following requirements:

    • Road Lanelet (subtype:road). Quantity: one.
    • Traffic Light. Multiple instances possible.
      • Traffic light Linestring (type:traffic_light).
      • Traffic light bulbs Linestring (type:light_bulbs).
      • Stop line Linestring (type:stop_line).
    • Regulatory element for traffic lights (subtype:traffic_light). Referenced by the road Lanelet and references both the traffic light (traffic_light, light_bulbs) and stop line (stop_line). Quantity: one.

    Refer to Web.Auto Documentation - Creation of a traffic light and a stop line for the method of creation in Vector Map Builder.

    Refer to vm-04-02 and vm-04-03 for the specifications of traffic light and traffic light bulb objects.
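As a quick sanity check of these relationships, the Lanelet2 OSM file can be inspected with standard Python. The sketch below assumes the common Lanelet2/Autoware map conventions (a relation with type=regulatory_element and subtype=traffic_light, member roles refers, ref_line, and light_bulbs, and lanelets referencing the regulatory element through a regulatory_element member); verify these against your own map format, as they are assumptions rather than guaranteed by this page.

```python
# Minimal sketch: verify that each traffic_light regulatory element in a Lanelet2
# OSM map has a traffic light linestring ("refers"), light bulbs ("light_bulbs"),
# and a stop line ("ref_line"), and that at least one lanelet references it.
# The member roles used here are assumptions based on common Lanelet2/Autoware maps.
import xml.etree.ElementTree as ET

def check_traffic_lights(osm_path: str) -> None:
    root = ET.parse(osm_path).getroot()
    relations = root.findall("relation")

    def tags(rel):
        return {t.get("k"): t.get("v") for t in rel.findall("tag")}

    # Regulatory elements of subtype traffic_light
    tl_regs = [r for r in relations
               if tags(r).get("type") == "regulatory_element"
               and tags(r).get("subtype") == "traffic_light"]

    # Regulatory elements referenced by lanelets
    lanelet_refs = set()
    for r in relations:
        if tags(r).get("type") == "lanelet":
            for m in r.findall("member"):
                if m.get("role") == "regulatory_element":
                    lanelet_refs.add(m.get("ref"))

    for reg in tl_regs:
        roles = [m.get("role") for m in reg.findall("member")]
        reg_id = reg.get("id")
        for required in ("refers", "ref_line", "light_bulbs"):
            if required not in roles:
                print(f"regulatory element {reg_id}: missing member role '{required}'")
        if reg_id not in lanelet_refs:
            print(f"regulatory element {reg_id}: not referenced by any lanelet")

# Usage: check_traffic_lights("lanelet2_map.osm")
```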

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_traffic_light/#preferred-vector-map","title":"Preferred vector map","text":"

    If there is a crosswalk at the intersection, arrange for the road's Lanelet and the crosswalk's Lanelet to intersect and overlap.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_traffic_light/#related-autoware-module","title":"Related Autoware module","text":"
    • Traffic Light design - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_traffic_light/#vm-04-02-traffic-light-position-and-size","title":"vm-04-02 Traffic light position and size","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_traffic_light/#detail-of-requirements_1","title":"Detail of requirements","text":"

    Create traffic lights with Linestring.

    • type:traffic_light
    • subtype:red_yellow_green (optional)

Align the Linestring (from its start point to its end point) precisely with the traffic light's bottom edge. Ensure the traffic light's positional height is correctly represented in the Linestring's 3D coordinates.

    Use tag:height for the traffic light's height, e.g., for 50cm, write tag:height=0.5. Note that this height indicates the size of the traffic light, not its position.
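For illustration, the sketch below shows how the bottom-edge Linestring and the height tag together describe the traffic light's physical extent. The coordinates and the 0.5 m height are example values only.

```python
# Minimal sketch: the traffic light body spans from the bottom-edge Linestring
# (its start/end points, including their z coordinates) up to z + height, where
# height comes from the "height" tag. Coordinates below are example values only.
bottom_edge = [(10.0, 5.0, 5.2), (10.0, 6.3, 5.2)]  # start/end of type:traffic_light Linestring [m]
height = 0.5                                         # tag:height=0.5, i.e. a 50 cm tall traffic light

top_edge = [(x, y, z + height) for x, y, z in bottom_edge]
print("bottom edge:", bottom_edge)
print("top edge   :", top_edge)
```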

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_traffic_light/#supplemental-information","title":"Supplemental information","text":"

    Autoware currently ignores subtype red_yellow_green.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_traffic_light/#preferred-vector-map_1","title":"Preferred vector map","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_traffic_light/#incorrect-vector-map","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_traffic_light/#related-autoware-module_1","title":"Related Autoware module","text":"
    • Traffic Light design - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_traffic_light/#vm-04-03-traffic-light-lamps","title":"vm-04-03 Traffic light lamps","text":""},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_traffic_light/#detail-of-requirements_2","title":"Detail of requirements","text":"

    To enable the system to detect the color of traffic lights, the color scheme and arrangement must be accurately created as objects. Indicate the position of the lights with Points. For colored lights, use the color tag to represent the color. For arrow lights, use the arrow tag to indicate the direction.

    • tag: color = red, yellow, green
• tag: arrow = up, right, left, up_right, up_left

    Use the Points of the lights when creating a Linestring.

    • type: light_bulbs
    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_traffic_light/#preferred-vector-map_2","title":"Preferred vector map","text":"

The order of the lights' Points can be either 1→2→3→4 or 4→3→2→1; both are acceptable.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_traffic_light/#incorrect-vector-map_1","title":"Incorrect vector map","text":"

    None in particular.

    "},{"location":"design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_traffic_light/#related-autoware-module_2","title":"Related Autoware module","text":"
    • Traffic Light design - Autoware Universe Documentation
    "},{"location":"design/autoware-architecture/node-diagram/","title":"Node diagram","text":""},{"location":"design/autoware-architecture/node-diagram/#node-diagram","title":"Node diagram","text":"

    This page depicts the node diagram designs for Autoware Core/Universe architecture.

    "},{"location":"design/autoware-architecture/node-diagram/#autoware-core","title":"Autoware Core","text":"

    TBD.

    "},{"location":"design/autoware-architecture/node-diagram/#autoware-universe","title":"Autoware Universe","text":"

    Open in draw.io for fullscreen

Note that the diagram is for reference only. We plan to update it with every release, so it may contain outdated information between releases. If you wish to check the latest node diagram, use rqt_graph after launching Autoware.

    "},{"location":"design/autoware-architecture/perception/","title":"Perception Component Design","text":""},{"location":"design/autoware-architecture/perception/#perception-component-design","title":"Perception Component Design","text":""},{"location":"design/autoware-architecture/perception/#purpose-of-this-document","title":"Purpose of this document","text":"

This document outlines the high-level design strategies, goals, and related rationales behind the development of the Perception Component. It is intended to help all OSS developers understand the design philosophy, goals, and constraints under which the Perception Component is designed, so that they can participate seamlessly in its development.

    "},{"location":"design/autoware-architecture/perception/#overview","title":"Overview","text":"

    The Perception Component receives inputs from Sensing, Localization, and Map components, and adds semantic information (e.g., Object Recognition, Obstacle Segmentation, Traffic Light Recognition, Occupancy Grid Map), which is then passed on to Planning Component. This component design follows the overarching philosophy of Autoware, defined as the microautonomy concept.

    "},{"location":"design/autoware-architecture/perception/#goals-and-non-goals","title":"Goals and non-goals","text":"

    The role of the Perception Component is to recognize the surrounding environment based on the data obtained through Sensing and acquire sufficient information (such as the presence of dynamic objects, stationary obstacles, blind spots, and traffic signal information) to enable autonomous driving.

    In our overall design, we emphasize the concept of microautonomy architecture. This term refers to a design approach that focuses on the proper modularization of functions, clear definition of interfaces between these modules, and as a result, high expandability of the system. Given this context, the goal of the Perception Component is set not to solve every conceivable complex use case (although we do aim to support basic ones), but rather to provide a platform that can be customized to the user's needs and can facilitate the development of additional features.

    To clarify the design concepts, the following points are listed as goals and non-goals.

    Goals:

    • To provide the basic functions so that a simple ODD can be defined.
    • To achieve a design that can provide perception functionality to every autonomous vehicle.
    • To be extensible with the third-party components.
    • To provide a platform that enables Autoware users to develop the complete functionality and capability.
    • To provide a platform that enables Autoware users to develop the autonomous driving system which always outperforms human drivers.
    • To provide a platform that enables Autoware users to develop the autonomous driving system achieving \"100% accuracy\" or \"error-free recognition\".

    Non-goals:

    • To develop the perception component architecture specialized for specific / limited ODDs.
    • To achieve the complete functionality and capability.
    • To outperform the recognition capability of human drivers.
    • To achieve \"100% accuracy\" or \"error-free recognition\".
    "},{"location":"design/autoware-architecture/perception/#high-level-architecture","title":"High-level architecture","text":"

    This diagram describes the high-level architecture of the Perception Component.

    The Perception Component consists of the following sub-components:

• Object Recognition: Recognizes dynamic objects surrounding the ego vehicle in the current frame (i.e., objects that were not present during map creation) and predicts their future trajectories. This includes:
      • Pedestrians
      • Cars
      • Trucks/Buses
      • Bicycles
      • Motorcycles
      • Animals
      • Traffic cones
      • Road debris: Items such as cardboard, oil drums, trash cans, wood, etc., either dropped on the road or floating in the air
• Obstacle Segmentation: Identifies point clouds originating from obstacles, including both dynamic objects and static obstacles that require the ego vehicle either to steer clear of them or to come to a stop in front of them.
      • This includes:
        • All dynamic objects (as listed above)
        • Curbs/Bollards
        • Barriers
        • Trees
        • Walls/Buildings
      • This does not include:
        • Grass
        • Water splashes
        • Smoke/Vapor
        • Newspapers
        • Plastic bags
    • Occupancy Grid Map: Detects blind spots (areas where no information is available and where dynamic objects may jump out).
    • Traffic Light Recognition: Recognizes the colors of traffic lights and the directions of arrow signals.
    "},{"location":"design/autoware-architecture/perception/#component-interface","title":"Component interface","text":"

    The following describes the input/output concept between Perception Component and other components. See the Perception Component Interface page for the current implementation.

    "},{"location":"design/autoware-architecture/perception/#input-to-the-perception-component","title":"Input to the Perception Component","text":"
    • From Sensing: This input should provide real-time information about the environment.
      • Camera Image: Image data obtained from the camera.
      • Point Cloud: Point Cloud data obtained from LiDAR.
      • Radar Object: Object data obtained from radar.
    • From Localization: This input should provide real-time information about the ego vehicle.
      • Vehicle motion information: Includes the ego vehicle's position.
• From Map: This input should provide static information about the environment.
      • Vector Map: Contains all static information about the environment, including lane area information.
      • Point Cloud Map: Contains static point cloud maps, which should not include information about the dynamic objects.
    • From API:
      • V2X information: The information from V2X modules. For example, the information from traffic signals.
    "},{"location":"design/autoware-architecture/perception/#output-from-the-perception-component","title":"Output from the Perception Component","text":"
• To Planning (a minimal subscriber sketch for these outputs follows this list)
      • Dynamic Objects: Provides real-time information about objects that cannot be known in advance, such as pedestrians and other vehicles.
      • Obstacle Segmentation: Supplies real-time information about the location of obstacles, which is more primitive than Detected Object.
  • Occupancy Grid Map: Offers real-time information about occluded areas (areas where no information is available).
      • Traffic Light Recognition result: Provides the current state of each traffic light in real time.
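Below is a minimal subscriber sketch for the dynamic-object output listed above. The topic name and the message package are assumptions based on a typical Autoware Universe configuration; check your own launch files for the actual names.

```python
# Minimal sketch: consume a Perception output from a downstream node (e.g. Planning).
# The topic name and message package are assumptions based on a typical
# Autoware Universe setup; verify them against your own configuration.
import rclpy
from rclpy.node import Node
from autoware_auto_perception_msgs.msg import PredictedObjects  # assumed message package


class PerceptionOutputListener(Node):
    def __init__(self):
        super().__init__("perception_output_listener")
        # Dynamic objects with predicted future paths (assumed topic name)
        self.create_subscription(
            PredictedObjects,
            "/perception/object_recognition/objects",
            self.on_objects,
            1,
        )

    def on_objects(self, msg: PredictedObjects) -> None:
        self.get_logger().info(f"received {len(msg.objects)} dynamic objects")


def main():
    rclpy.init()
    rclpy.spin(PerceptionOutputListener())


if __name__ == "__main__":
    main()
```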
    "},{"location":"design/autoware-architecture/perception/#how-to-add-new-modules-wip","title":"How to add new modules (WIP)","text":"

As mentioned in the Goals section, the Perception Component is designed to be extensible with third-party components. For specific instructions on how to add new modules and expand its functionality, please refer to the provided documentation or guidelines (WIP).

    "},{"location":"design/autoware-architecture/perception/#supported-functions","title":"Supported Functions","text":"Feature Description Requirements LiDAR DNN based 3D detector This module takes point clouds as input and detects objects such as vehicles, trucks, buses, pedestrians, and bicycles. - Point Clouds Camera DNN based 2D detector This module takes camera images as input and detects objects such as vehicles, trucks, buses, pedestrians, and bicycles in the two-dimensional image space. It detects objects within image coordinates and providing 3D coordinate information is not mandatory. - Camera Images LiDAR Clustering This module performs clustering of point clouds and shape estimation to achieve object detection without labels. - Point Clouds Semi-rule based detector This module detects objects using information from both images and point clouds, and it consists of two components: LiDAR Clustering and Camera DNN based 2D detector. - Output from Camera DNN based 2D detector and LiDAR Clustering Radar based 3D detector This module takes radar data as input and detects dynamic 3D objects. In detail, please see this document. - Radar data Object Merger This module integrates results from various detectors. - Detected Objects Interpolator This module stabilizes the object detection results by maintaining long-term detection results using Tracking results. - Detected Objects - Tracked Objects Tracking This module gives ID and estimate velocity to the detection results. - Detected Objects Prediction This module predicts the future paths (and their probabilities) of dynamic objects according to the shape of the map and the surrounding environment. - Tracked Objects - Vector Map Obstacle Segmentation This module identifies point clouds originating from obstacles that the ego vehicle should avoid. - Point Clouds - Point Cloud Map Occupancy Grid Map This module detects blind spots (areas where no information is available and where dynamic objects may jump out). - Point Clouds - Point Cloud Map Traffic Light Recognition This module detects the position and state of traffic signals. - Camera Images - Vector Map"},{"location":"design/autoware-architecture/perception/#reference-implementation","title":"Reference Implementation","text":"

    When Autoware is launched, the default parameters are loaded, and the Reference Implementation is started. For more details, please refer to the Reference Implementation.

    "},{"location":"design/autoware-architecture/perception/reference_implementation/","title":"Perception Component Reference Implementation Design","text":""},{"location":"design/autoware-architecture/perception/reference_implementation/#perception-component-reference-implementation-design","title":"Perception Component Reference Implementation Design","text":""},{"location":"design/autoware-architecture/perception/reference_implementation/#purpose-of-this-document","title":"Purpose of this document","text":"

This document outlines the detailed design of the reference implementations. It allows developers and users to understand what is currently available in the Perception Component and how to utilize, expand, or add to its features.

    "},{"location":"design/autoware-architecture/perception/reference_implementation/#architecture","title":"Architecture","text":"

    This diagram describes the architecture of the reference implementation.

    The Perception component consists of the following sub-components:

• Obstacle Segmentation: Identifies point clouds originating from obstacles (not only dynamic objects but also static obstacles that should be avoided) that the ego vehicle must avoid. For example, construction cones are recognized using this module.
    • Occupancy Grid Map: Detects blind spots (areas where no information is available and where dynamic objects may jump out).
    • Object Recognition: Recognizes dynamic objects surrounding the ego vehicle in the current frame and predicts their future trajectories.
      • Detection: Detects the pose and velocity of dynamic objects such as vehicles and pedestrians.
        • Detector: Triggers object detection processing frame by frame.
        • Interpolator: Maintains stable object detection. Even if the output from Detector suddenly becomes unavailable, Interpolator uses the output from the Tracking module to maintain the detection results without missing any objects.
      • Tracking: Associates detected results across multiple frames.
      • Prediction: Predicts trajectories of dynamic objects.
    • Traffic Light Recognition: Recognizes the colors of traffic lights and the directions of arrow signals.
    "},{"location":"design/autoware-architecture/perception/reference_implementation/#internal-interface-in-the-perception-component","title":"Internal interface in the perception component","text":"
    • Obstacle Segmentation to Object Recognition
      • Point Cloud: A Point Cloud observed in the current frame, where the ground and outliers are removed.
    • Obstacle Segmentation to Occupancy Grid Map
      • Ground filtered Point Cloud: A Point Cloud observed in the current frame, where the ground is removed.
    • Occupancy Grid Map to Obstacle Segmentation
• Occupancy Grid Map: This is used for filtering outliers.
    "},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/faraway-object-detection/","title":"Radar faraway dynamic objects detection with radar objects","text":""},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/faraway-object-detection/#radar-faraway-dynamic-objects-detection-with-radar-objects","title":"Radar faraway dynamic objects detection with radar objects","text":""},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/faraway-object-detection/#overview","title":"Overview","text":"

    This diagram describes the pipeline for radar faraway dynamic object detection.

    "},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/faraway-object-detection/#reference-implementation","title":"Reference implementation","text":""},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/faraway-object-detection/#crossing-filter","title":"Crossing filter","text":"
    • radar_crossing_objects_noise_filter

This package can filter out noise objects that appear to move across the ego vehicle's path, which are most likely ghost objects.

    "},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/faraway-object-detection/#velocity-filter","title":"Velocity filter","text":"
    • object_velocity_splitter

Static objects include a lot of noise, such as reflections from the ground. In many cases, radars can detect dynamic objects stably. To filter out static objects, object_velocity_splitter can be used.
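The idea can be illustrated with a simple speed threshold, as in the sketch below; the 3.0 m/s threshold and the plain-tuple object representation are illustrative only and are not the package's actual parameters or message types.

```python
# Minimal sketch of velocity-based splitting: objects faster than a threshold are
# treated as dynamic, the rest as (noisy) static objects to be dropped or handled
# separately. The 3.0 m/s threshold and the object tuples are illustrative only.
import math

radar_objects = [
    # (x [m], y [m], vx [m/s], vy [m/s]) in the ego frame; example values
    (55.0, 1.2, 12.0, 0.1),
    (30.0, -2.0, 0.2, 0.0),   # likely ground reflection / static noise
    (80.0, 3.5, -8.5, 0.3),
]

velocity_threshold = 3.0  # [m/s], illustrative

dynamic_objects = [o for o in radar_objects if math.hypot(o[2], o[3]) >= velocity_threshold]
static_objects = [o for o in radar_objects if math.hypot(o[2], o[3]) < velocity_threshold]

print("dynamic:", dynamic_objects)
print("static :", static_objects)
```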

    "},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/faraway-object-detection/#range-filter","title":"Range filter","text":"
    • object_range_splitter

For some radars, ghost objects sometimes occur around nearby objects. To filter out such objects, object_range_splitter can be used.

    "},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/faraway-object-detection/#vector-map-filter","title":"Vector map filter","text":"
    • object-lanelet-filter

In most cases, vehicles drive within the drivable area. To filter out objects that are outside the drivable area, object-lanelet-filter can be used. It removes objects that lie outside the drivable area defined by the vector map.

Note that if you use object-lanelet-filter for radar faraway detection, the drivable area defined in the vector map needs to cover more than just the lanes where the autonomous vehicle itself runs.
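The underlying idea is a point-in-polygon test against the drivable area taken from the vector map. The sketch below illustrates this with shapely; the polygon and object positions are example values, and the real object-lanelet-filter operates on Lanelet2 lanelets and DetectedObjects messages in C++.

```python
# Minimal sketch of filtering objects by drivable area: keep only objects whose
# position lies inside polygons taken from the vector map. The polygon and the
# object positions are illustrative; the real object-lanelet-filter operates on
# Lanelet2 lanelets and DetectedObjects messages.
from shapely.geometry import Point, Polygon

# Example drivable-area polygon (e.g. the union of lanelet polygons), x/y in the map frame [m]
drivable_area = Polygon([(0, -5), (120, -5), (120, 5), (0, 5)])

detected_positions = [(60.0, 1.0), (70.0, 12.0), (110.0, -3.0)]  # example object positions

kept = [p for p in detected_positions if drivable_area.contains(Point(p))]
print("objects kept after lanelet filtering:", kept)
```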

    "},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/faraway-object-detection/#radar-object-clustering","title":"Radar object clustering","text":"
    • radar_object_clustering

This package can combine multiple radar detections of a single object into one detection and adjust its class and size. This suppresses object splitting in the tracking module.
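A simple distance-based grouping illustrates the idea, as sketched below; the 3.0 m merge distance and the position-averaging rule are illustrative choices, not the package's actual algorithm or parameters.

```python
# Minimal sketch of radar object clustering: detections closer than a distance
# threshold are merged into a single object (here by averaging positions).
# The 3.0 m threshold and the merging rule are illustrative only.
import math

detections = [(50.0, 1.0), (50.8, 1.4), (51.2, 0.6), (80.0, -2.0)]  # (x, y) [m], example values
merge_distance = 3.0  # [m]

clusters = []  # list of lists of (x, y) detections belonging to the same object
for det in detections:
    for cluster in clusters:
        if any(math.dist(det, member) < merge_distance for member in cluster):
            cluster.append(det)
            break
    else:
        clusters.append([det])

merged = [(sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c)) for c in clusters]
print("merged objects:", merged)
```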

    "},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/faraway-object-detection/#note","title":"Note","text":""},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/faraway-object-detection/#parameter-tuning","title":"Parameter tuning","text":"

Detection performed only by radar relies on strong noise filtering. This creates a trade-off: if the noise filtering is too strong, objects that you actually want to detect disappear; if it is too weak, noise objects constantly appear in front of the vehicle and the self-driving system cannot start. Parameters must be tuned with this trade-off in mind.

    "},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/faraway-object-detection/#limitation","title":"Limitation","text":"
• Elevated railways and vehicles on multi-level intersections

If you use 2D radars (which detect in the xy plane but have no z-axis detection) and the driving area includes elevated railways or vehicles on multi-level intersections, the radar pipeline detects these structures, and they have a negative influence on planning results. In addition, for now, an elevated railway is detected as a vehicle because the radar pipeline does not have a label classification feature, which leads to unintended behavior.

    "},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/radar-based-3d-detector/","title":"Radar based 3D detector","text":""},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/radar-based-3d-detector/#radar-based-3d-detector","title":"Radar based 3D detector","text":""},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/radar-based-3d-detector/#overview","title":"Overview","text":""},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/radar-based-3d-detector/#features","title":"Features","text":"

    Radar based 3D detector aims for the following:

    • Detecting objects farther than the range of LiDAR-based 3D detection.

Since radar can acquire data from a longer distance than LiDAR (> 100 m), the radar-based 3D detector can be applied when the range of LiDAR-based 3D detection is insufficient. The detection distance of radar-based 3D detection depends on the radar device specification.

    • Improving velocity estimation for dynamic objects

Radar provides velocity measurements, and more precise twist information can be estimated by fusing radar information with the objects from LiDAR-based 3D detection. This can improve the performance of object tracking/prediction and of planning features such as adaptive cruise control.
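The core of such fusion can be sketched as a nearest-neighbour association between LiDAR detections and radar objects, attaching the radar-measured velocity to the matched detection. The 2.0 m gating distance and the plain-dictionary data layout below are illustrative only and do not reflect the actual implementation.

```python
# Minimal sketch of radar/LiDAR velocity fusion: for each LiDAR detection, find the
# nearest radar object within a gating distance and attach its velocity.
# The 2.0 m gate and the plain dictionaries are illustrative only.
import math

lidar_detections = [{"pos": (40.0, 2.0)}, {"pos": (75.0, -1.0)}]           # example values
radar_objects = [{"pos": (40.6, 2.3), "vel": (13.0, 0.2)},                  # example values
                 {"pos": (120.0, 5.0), "vel": (-6.0, 0.0)}]

gate = 2.0  # [m] maximum matching distance, illustrative

for det in lidar_detections:
    best = min(radar_objects, key=lambda r: math.dist(det["pos"], r["pos"]))
    if math.dist(det["pos"], best["pos"]) <= gate:
        det["vel"] = best["vel"]  # attach radar-measured velocity (twist)

print(lidar_detections)
```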

    "},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/radar-based-3d-detector/#whole-pipeline","title":"Whole pipeline","text":"

    Radar based 3D detector with radar objects consists of

    • 3D object detection with Radar pointcloud
    • Noise filter
    • Faraway dynamic 3D object detection
    • Radar fusion to LiDAR-based 3D object detection
    • Radar object tracking
    • Merger of tracked object

    "},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/radar-based-3d-detector/#interface","title":"Interface","text":"
    • Input
      • Message type for pointcloud is ros-perception/radar_msgs/msg/RadarScan.msg
      • Message type for radar objects is autoware_auto_perception_msgs/msg/DetectedObject.
        • Input objects need to be concatenated.
        • Input objects need to be compensated with ego motion.
        • Input objects need to be transformed to base_link.
    • Output
      • Tracked objects
    "},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/radar-based-3d-detector/#module","title":"Module","text":""},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/radar-based-3d-detector/#radar-pointcloud-3d-detection","title":"Radar pointcloud 3D detection","text":"

    Warning

    Under Construction

    "},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/radar-based-3d-detector/#noise-filter-and-radar-faraway-dynamic-3d-object-detection","title":"Noise filter and radar faraway dynamic 3D object detection","text":"

This function filters noise objects and detects faraway (> 100 m) dynamic vehicles. The main idea is that, when LiDAR is used, the near range can be detected accurately from the LiDAR pointcloud, and the main role of radar is to detect distant objects that cannot be detected with LiDAR alone. For details, please see this document.

    "},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/radar-based-3d-detector/#radar-fusion-to-lidar-based-3d-object-detection","title":"Radar fusion to LiDAR-based 3D object detection","text":"
    • radar_fusion_to_detected_object

    This package contains a sensor fusion module for radar-detected objects and 3D detected objects. The fusion node can:

    • Attach velocity to 3D detections when successfully matching radar data. The tracking modules use the velocity information to enhance the tracking results while planning modules use it to execute actions like adaptive cruise control.
    • Improve the low confidence 3D detections when corresponding radar detections are found.
    "},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/radar-based-3d-detector/#radar-object-tracking","title":"Radar object tracking","text":"

    Warning

    Under Construction

    "},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/radar-based-3d-detector/#merger-of-tracked-object","title":"Merger of tracked object","text":"

    Warning

    Under Construction

    "},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/radar-based-3d-detector/#appendix","title":"Appendix","text":""},{"location":"design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/radar-based-3d-detector/#customize-own-radar-interface","title":"Customize own radar interface","text":"

The perception interface of Autoware is defined in terms of DetectedObjects, TrackedObjects, and PredictedObjects; however, other messages can be defined for specific use cases. For example, DetectedObjectWithFeature is used as a customized message within the perception module.

In the same way, you can adapt your own radar interface. For example, RadarTrack does not include orientation information, as decided in past discussions, especially this discussion. If you want orientation information, you can adapt the radar ROS driver to publish TrackedObject directly.

    "},{"location":"design/autoware-architecture/planning/","title":"Planning component design","text":""},{"location":"design/autoware-architecture/planning/#planning-component-design","title":"Planning component design","text":""},{"location":"design/autoware-architecture/planning/#purpose","title":"Purpose","text":"

    The Planning Component in autonomous driving systems plays a crucial role in generating a target trajectory (path and speed) for autonomous vehicles. It ensures safety and adherence to traffic rules, fulfilling specific missions.

    This document outlines the planning requirements and design within Autoware, aiding developers in comprehending the design and extendibility of the Planning Component.

    The document is divided into two parts: the first part discusses high-level requirements and design, and the latter part focuses on actual implementations and functionalities provided.

    "},{"location":"design/autoware-architecture/planning/#goals-and-non-goals","title":"Goals and non-goals","text":"

    Our objective extends beyond merely developing an autonomous driving system. We aim to offer an \"autonomous driving platform\" where users can enhance autonomous driving functionalities based on their individual needs.

    In Autoware, we utilize the microautonomy architecture concept, which emphasizes high extensibility, functional modularity, and clearly defined interfaces.

    With this in mind, the design policy for the Planning Component is focused not on addressing every complex autonomous driving scenario (as that is a very challenging problem), but on providing a customizable and easily extendable Planning development platform. We believe this approach will allow the platform to meet a wide range of needs, ultimately solving many complex use cases.

    To clarify this policy, the Goals and Non-Goals are defined as follows:

    Goals:

    • The basic functions are provided so that a simple ODD can be defined
      • Before extending its functionalities, the Planning Component must provide the essential features necessary for autonomous driving. This encompasses basic operations like moving, stopping, and turning, as well as handling lane changes and obstacle avoidance in relatively safe and simple contexts.
    • The functionality is modularized for user-driven extension
      • The system is designed to adapt to various Operational Design Domains (ODDs) with extended functionalities. Modularization, akin to plug-ins, allows for creating systems tailored to diverse needs, such as different levels of autonomous driving and varied vehicular or environmental applications (e.g., Lv4/Lv2 autonomous driving, public/private road driving, large vehicles, small robots).
      • Reducing functionalities for specific ODDs, like obstacle-free private roads, is also a key aspect. This modular approach allows for reductions in power consumption or sensor requirements, aligning with specific user needs.
    • The capability is extensible with the decision of human operators
      • Incorporating operator assistance is a critical aspect of functional expansion. It means that the system can adapt to complex and challenging scenarios with human support. The specific type of operator is not defined here. It might be a person accompanying in the vehicle during the prototype development phase or a remote operator connected in emergencies during autonomous driving services.

    Non-goals:

    The Planning Component is designed to be extended with third-party modules. Consequently, the following are not the goals of Autoware's Planning Component:

    • To provide all user-required functionalities by default.
    • To provide complete functionality and performance characteristic of an autonomous driving system.
    • To provide performance that consistently surpasses human capabilities or ensures absolute safety.

    These aspects are specific to our vision of an autonomous driving \"platform\" and may not apply to a typical autonomous driving Planning Component.

    "},{"location":"design/autoware-architecture/planning/#high-level-design","title":"High level design","text":"

    This diagram illustrates the high-level architecture of the Planning Component. This represents an idealized design, and current implementations might vary. Further details on implementations are provided in the latter sections of this document.

    Following the principles of microautonomy architecture, we have adopted a modular system framework. The functions within the Planning domain are implemented as modules, dynamically or statically adaptable according to specific use cases. This includes modules for lane changes, intersection handling, and pedestrian crossings, among others.

    The Planning Component comprises several sub-components:

    • Mission Planning: This module calculates routes from the current location to the destination, utilizing map data. Its functionality is similar to that of Fleet Management Systems (FMS) or car navigation route planning.
• Planning Modules: These modules plan the vehicle's behavior for the assigned mission, including the target trajectory, blinker signaling, etc. They are divided into Behavior and Motion categories:
  • Behavior: Focuses on calculating safe and rule-compliant routes, managing decisions for lane changes, intersection entries, and stopping at stop lines.
  • Motion: Works in cooperation with Behavior modules to determine the vehicle's trajectory, considering its motion and ride comfort. It includes lateral and longitudinal planning for route shaping and speed calculation.
    • Validation: Ensures the safety and appropriateness of the planned trajectories, with capabilities for emergency response. In cases where a planned trajectory is unsuitable, it triggers emergency protocols or generates alternative paths.
    "},{"location":"design/autoware-architecture/planning/#highlights","title":"Highlights","text":"

    Key aspects of this high-level design include:

    "},{"location":"design/autoware-architecture/planning/#modulation-of-each-function","title":"Modulation of each function","text":"

    Essential Planning functions, such as route generation, lane changes, and intersection management, are modularized. These modules come with standardized interfaces, enabling easy addition or modification. More details on these interfaces will be discussed in subsequent sections. You can see the details about how to enable/disable each module in the implementation documentation of Planning.

    "},{"location":"design/autoware-architecture/planning/#separation-of-mission-planning-sub-component","title":"Separation of Mission Planning sub-component","text":"

    Mission Planning serves as a substitute for functions typically found in existing services like FMS (Fleet Management System). Adherence to defined interfaces in the high-level design facilitates easy integration with third-party services.

    "},{"location":"design/autoware-architecture/planning/#separation-of-validation-sub-component","title":"Separation of Validation sub-component","text":"

    Given the extendable nature of the Planning Component, ensuring consistent safety levels across all functions is challenging. Therefore, the Validation function is managed independently from the core planning modules, maintaining a baseline of safety even for arbitrary changes of the planning modules.

    "},{"location":"design/autoware-architecture/planning/#interface-for-hmi-human-machine-interface","title":"Interface for HMI (Human Machine Interface)","text":"

    The HMI is designed for smooth cooperation with human operators. These interfaces enable coordination between the Planning Component and operators, whether in-vehicle or remote.

    "},{"location":"design/autoware-architecture/planning/#trade-offs-for-the-separation-of-planning-and-other-components","title":"Trade-offs for the separation of planning and other components","text":"

    In Autoware's overarching design, the separation of components like Planning, Perception, Localization, and Control facilitates cooperation with third-party modules. However, this separation entails trade-offs between performance and extensibility. For instance, the Perception component might process unnecessary objects due to its separation from Planning. Similarly, separating planning and control can pose challenges in accounting for vehicle dynamics during planning. To mitigate these issues, we might need to enhance interface information or increase computational efforts.

    "},{"location":"design/autoware-architecture/planning/#customize-features","title":"Customize features","text":"

    A significant feature of the Planning Component design is its ability to integrate with external modules. The diagram below shows various methods for incorporating external functionalities.

    "},{"location":"design/autoware-architecture/planning/#1-adding-new-modules-to-the-planning-component","title":"1. Adding New Modules to the Planning Component","text":"

    Users can augment or replace existing Planning functionalities with new modules. This approach is commonly used for extending features, allowing for the addition of capabilities absent in the desired ODD or simplification of existing features.

    However, adding these functionalities requires well-organized module interfaces. As of November 2023, an ideal modular system is not fully established, presenting some limitations. For more information, please refer to the Reference Implementation section Customize features in the current implementation and the implementation documentation of Planning.

    "},{"location":"design/autoware-architecture/planning/#2-replacing-sub-components-of-planning","title":"2. Replacing Sub-components of Planning","text":"

    Collaboration and extension at the sub-component level may interest some users. This could involve replacing Mission Planning with an existing FMS service or incorporating a third-party trajectory generation module while utilizing the existing Validation functionality.

    Adhering to the Internal interface in the planning component, collaboration and extension at this level are feasible. While complex coordination with existing Planning features may be limited, it allows for integration between certain Planning Component functionalities and external modules.

    "},{"location":"design/autoware-architecture/planning/#3-replacing-the-entire-planning-component","title":"3. Replacing the Entire Planning Component","text":"

    Organizations or research entities developing autonomous driving Planning systems might be interested in integrating their proprietary Planning solutions with Autoware's Perception or Control modules. This can be achieved by replacing the entire Planning system, following the robust and stable interfaces defined between components. It is important to note, however, that direct coordination with existing Planning modules might not be possible.

    "},{"location":"design/autoware-architecture/planning/#component-interface","title":"Component interface","text":"

    This section describes the inputs and outputs of the Planning Component and of its internal modules. See the Planning Component Interface page for the current implementation.

    "},{"location":"design/autoware-architecture/planning/#input-to-the-planning-component","title":"Input to the planning component","text":"
    • From Map
      • Vector map: Contains all static information about the environment, including lane connection information for route planning, lane geometry for generating a reference path, and traffic rule-related information.
    • From Perception
      • Detected object information: Provides real-time information about objects that cannot be known in advance, such as pedestrians and other vehicles. The Planning Component plans maneuvers to avoid collisions with these objects.
      • Detected obstacle information: Supplies real-time information about the location of obstacles, which is more primitive than Detected Object and used for emergency stops and other safety measures.
      • Occupancy map information: Offers real-time information about the presence of pedestrians and other vehicles and occluded area information.
      • Traffic light recognition result: Provides the current state of each traffic light in real time. The Planning Component extracts relevant information for the planned path and determines whether to stop at intersections.
    • From Localization
      • Vehicle motion information: Includes the ego vehicle's position, velocity, acceleration, and other motion-related data.
    • From System
      • Operation mode: Indicates whether the vehicle is operating in Autonomous mode.
    • From Human Machine Interface (HMI)
      • Feature execution: Allows for executing/authorizing autonomous driving operations, such as lane changes or entering intersections, by human operators.
    • From API Layer
      • Destination (Goal): Represents the final position that the Planning Component aims to reach.
      • Checkpoint: Represents a midpoint along the route to the destination. This is used during route calculation.
      • Velocity limit: Sets the maximum speed limit for the vehicle.
    "},{"location":"design/autoware-architecture/planning/#output-from-the-planning-component","title":"Output from the planning component","text":"
    • To Control
      • Trajectory: Provides a smooth sequence of pose, twist, and acceleration that the Control Component must follow. The trajectory is typically 10 seconds long with a 0.1-second resolution.
      • Turn Signals: Controls the vehicle's turn indicators, such as right, left, hazard, etc. based on the planned maneuvers.
    • To System
      • Diagnostics: Reports the state of the Planning Component, indicating whether the processing is running correctly and whether a safe plan is being generated.
    • To Human Machine Interface (HMI)
      • Feature execution availability: Indicates the status of operations that can be executed or are required, such as lane changes or entering intersections.
      • Trajectory candidate: Shows the potential trajectory that will be executed after the user's execution.
    • To API Layer
      • Planning factors: Provides information about the reasoning behind the current planning behavior. This may include the position of target objects to avoid, obstacles that led to the decision to stop, and other relevant information.
    "},{"location":"design/autoware-architecture/planning/#internal-interface-in-the-planning-component","title":"Internal interface in the planning component","text":"
    • Mission Planning to Scenario Planning
      • Route: Offers guidance for the path that needs to be followed from the starting point to the destination. This path is determined based on information such as lane IDs defined on the map. At the route level, it doesn't explicitly indicate which specific lanes to take, and the route can contain multiple lanes.
    • Behavior Planning to Motion Planning
      • Path: Provides a rough position and velocity to be followed by the vehicle. These path points are usually defined with an interval of about 1 meter. Although other interval distances are possible, it may impact the precision or performance of the planning component.
      • Drivable area: Defines regions where the vehicle can drive, such as within lanes or physically drivable areas. It assumes that the motion planner will calculate the final trajectory within this defined area.
    • Scenario Planning to Validation
  • Trajectory: Defines the desired positions, velocities, and accelerations which the Control Component will try to follow. Trajectory points are defined at intervals of approximately 0.1 seconds based on the trajectory velocities (see the spacing sketch after this list).
    • Validation to Control Component
      • Trajectory: Same as above but with some additional safety considerations.
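The relation between trajectory velocities and point spacing mentioned above can be illustrated as follows: with a fixed time resolution dt, consecutive trajectory points are roughly v * dt metres apart. The sketch uses the 0.1 s resolution and 10 s horizon stated above; the velocity profile itself is an example.

```python
# Minimal sketch: spacing of trajectory points for a time-based resolution.
# With dt = 0.1 s, the next point after a point travelled at velocity v lies
# roughly v * dt metres ahead; a 10 s horizon therefore yields about 100 points.
dt = 0.1          # [s] trajectory time resolution (as stated above)
horizon = 10.0    # [s] trajectory length (as stated above)

velocity_profile = [10.0] * 50 + [5.0] * 50  # [m/s] example: decelerate halfway through

s = 0.0
points = []
for v in velocity_profile[: int(horizon / dt)]:
    points.append((s, v))  # (arc length [m], velocity [m/s])
    s += v * dt

print(f"{len(points)} trajectory points covering {s:.1f} m")
```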
    "},{"location":"design/autoware-architecture/planning/#detailed-design","title":"Detailed design","text":""},{"location":"design/autoware-architecture/planning/#supported-features","title":"Supported features","text":"Feature Description Requirements Figure Demonstration Route Planning Plan route from the ego vehicle position to the destination. Reference implementation is in Mission Planner, enabled by launching the mission_planner node. - Lanelet map (driving lanelets) Path Planning from Route Plan path to be followed from the given route. Reference implementation is in Behavior Path Planner. - Lanelet map (driving lanelets) Obstacle Avoidance Plan path to avoid obstacles by steering operation. Reference implementation is in Static Avoidance Module, Path Optimizer. Enable flag in parameter: launch path_optimizer true - objects information Demonstration Video Path Smoothing Plan path to achieve smooth steering. Reference implementation is in Path Optimizer. - Lanelet map (driving lanelet) Demonstration Video Narrow Space Driving Plan path to drive within the drivable area. Furthermore, when it is not possible to drive within the drivable area, stop the vehicle to avoid exiting the drivable area. Reference implementation is in Path Optimizer. - Lanelet map (high-precision lane boundaries) Demonstration Video Lane Change Plan path for lane change to reach the destination. Reference implementation is in Lane Change. - Lanelet map (driving lanelets) Demonstration Video Pull Over Plan path for pull over to park at the road shoulder. Reference implementation is in Goal Planner. - Lanelet map (shoulder lane) Demonstration Videos: Simple Pull Over Arc Forward Pull Over Arc Backward Pull Over Pull Out Plan path for pull over to start from the road shoulder. Reference implementation is in Start Planner. - Lanelet map (shoulder lane) Demonstration Video: Simple Pull Out Backward Pull Out Path Shift Plan path in lateral direction in response to external instructions. Reference implementation is in Side Shift Module. - None Obstacle Stop Plan velocity to stop for an obstacle on the path. Reference implementation is in Obstacle Stop Planner, Obstacle Cruise Planner. launch obstacle_stop_planner and enable flag: TODO, launch obstacle_cruise_planner and enable flag: TODO - objects information Demonstration Video Obstacle Deceleration Plan velocity to decelerate for an obstacle located around the path. Reference implementation is in Obstacle Stop Planner, Obstacle Cruise Planner. - objects information Demonstration Video Adaptive Cruise Control Plan velocity to follow the vehicle driving in front of the ego vehicle. Reference implementation is in Obstacle Stop Planner, Obstacle Cruise Planner. - objects information Decelerate for cut-in vehicles Plan velocity to avoid a risk for cutting-in vehicle to ego lane. Reference implementation is in Obstacle Cruise Planner. - objects information Surround Check at starting Plan velocity to prevent moving when an obstacle exists around the vehicle. Reference implementation is in Surround Obstacle Checker. Enable flag in parameter: use_surround_obstacle_check true in tier4_planning_component.launch.xml < - objects information Demonstration Video Curve Deceleration Plan velocity to decelerate the speed on a curve. Reference implementation is in Motion Velocity Smoother. - None Curve Deceleration for Obstacle Plan velocity to decelerate the speed on a curve for a risk of obstacle collision around the path. Reference implementation is in Obstacle Velocity Limiter. 
- objects information - Lanelet map (static obstacle) Demonstration Video Crosswalk Plan velocity to stop or decelerate for pedestrians approaching or walking on a crosswalk. Reference implementation is in Crosswalk Module. - objects information - Lanelet map (pedestrian crossing) Demonstration Video Intersection Oncoming Vehicle Check Plan velocity for turning right/left at intersection to avoid a risk with oncoming other vehicles. Reference implementation is in Intersection Module. - objects information - Lanelet map (intersection lane and yield lane) Demonstration Video Intersection Blind Spot Check Plan velocity for turning right/left at intersection to avoid a risk with other vehicles or motorcycles coming from behind blind spot. Reference implementation is in Blind Spot Module. - objects information - Lanelet map (intersection lane) Demonstration Video Intersection Occlusion Check Plan velocity for turning right/left at intersection to avoid a risk with the possibility of coming vehicles from occlusion area. Reference implementation is in Intersection Module. - objects information - Lanelet map (intersection lane) Demonstration Video Intersection Traffic Jam Detection Plan velocity for intersection not to enter the intersection when a vehicle is stopped ahead for a traffic jam. Reference implementation is in Intersection Module. - objects information - Lanelet map (intersection lane) Demonstration Video Traffic Light Plan velocity for intersection according to a traffic light signal. Reference implementation is in Traffic Light Module. - Traffic light color information Demonstration Video Run-out Check Plan velocity to decelerate for the possibility of nearby objects running out into the path. Reference implementation is in Run Out Module. - objects information Demonstration Video Stop Line Plan velocity to stop at a stop line. Reference implementation is in Stop Line Module. - Lanelet map (stop line) Demonstration Video Occlusion Spot Check Plan velocity to decelerate for objects running out from occlusion area, for example, from behind a large vehicle. Reference implementation is in Occlusion Spot Module. - objects information - Lanelet map (private/public lane) Demonstration Video No Stop Area Plan velocity not to stop in areas where stopping is prohibited, such as in front of the fire station entrance. Reference implementation is in No Stopping Area Module. - Lanelet map (no stopping area) Merge from Private Area to Public Road Plan velocity for entering the public road from a private driveway to avoid a risk of collision with pedestrians or other vehicles. Reference implementation is in Merge from Private Area Module. - objects information - Lanelet map (private/public lane) WIP Speed Bump Plan velocity to decelerate for speed bumps. Reference implementation is in Speed Bump Module. - Lanelet map (speed bump) Demonstration Video Detection Area Plan velocity to stop at the corresponding stop when an object exist in the designated detection area. Reference implementation is in Detection Area Module. - Lanelet map (detection area) Demonstration Video No Drivable Lane Plan velocity to stop before exiting the area designated by ODD (Operational Design Domain) or stop the vehicle if autonomous mode started in out of ODD lane. Reference implementation is in No Drivable Lane Module. - Lanelet map (no drivable lane) Collision Detection when deviating from lane Plan velocity to avoid conflict with other vehicles driving in the another lane when the ego vehicle is deviating from own lane. 
Reference implementation is in Out of Lane Module. - objects information - Lanelet map (driving lane) WIP Parking Plan path and velocity for given goal in parking area. Reference implementation is in Free Space Planner. - objects information - Lanelet map (parking area) Demonstration Video Autonomous Emergency Braking (AEB) Perform an emergency stop if a collision with an object ahead is anticipated. It is noted that this function is expected as a final safety layer, and this should work even in the event of failures in the Localization or Perception system. Reference implementation is in Out of Lane Module. - Primitive objects Minimum Risk Maneuver (MRM) Provide appropriate MRM (Minimum Risk Maneuver) instructions when a hazardous event occurs. For example, when a sensor trouble found, send an instruction for emergency braking, moderate stop, or pulling over to the shoulder, depending on the severity of the situation. Reference implementation is in TODO - TODO WIP Trajectory Validation Check the planned trajectory is safe. If it is unsafe, take appropriate action, such as modify the trajectory, stop sending the trajectory or report to the autonomous driving system. Reference implementation is in Planning Validator. - None Running Lane Map Generation Generate lane map from localization data recorded in manual driving. Reference implementation is in WIP - None WIP Running Lane Optimization Optimize the centerline (reference path) of the map to make it smooth considering the vehicle kinematics. Reference implementation is in Static Centerline Optimizer. - Lanelet map (driving lanes) WIP"},{"location":"design/autoware-architecture/planning/#reference-implementation","title":"Reference Implementation","text":"

    The following diagram describes the reference implementation of the Planning component. By adding new modules or extending the functionalities, various ODDs can be supported.

Note that some implementations do not adhere to the high-level architecture design due to implementation difficulties and require updating.

    For more details, please refer to the design documents in each package.

    • mission_planner: calculate route from start to goal based on the map information.
    • behavior_path_planner: calculates path and drivable area based on the traffic rules.
      • lane_following
      • lane_change
      • static_obstacle_avoidance
      • pull_over
      • pull_out
      • side_shift
    • behavior_velocity_planner: calculates max speed based on the traffic rules.
      • detection_area
      • blind_spot
      • cross_walk
      • stop_line
      • traffic_light
      • intersection
      • no_stopping_area
      • virtual_traffic_light
      • occlusion_spot
      • run_out
• obstacle_avoidance_planner: calculates the path shape under obstacle and drivable area constraints
    • surround_obstacle_checker: keeps the vehicle being stopped when there are obstacles around the ego-vehicle. It works only when the vehicle is stopped.
    • obstacle_stop_planner: When there are obstacles on or near the trajectory, it calculates the maximum velocity of the trajectory points depending on the situation: stopping, slowing down, or adaptive cruise (following the car).
      • stop
      • slow_down
      • adaptive_cruise
    • costmap_generator: generates a costmap for path generation from dynamic objects and lane information.
    • freespace_planner: calculates trajectory considering the feasibility (e.g. curvature) for the freespace scene. Algorithms are described here.
• scenario_selector: chooses a trajectory according to the current scenario.
    • external_velocity_limit_selector: takes an appropriate velocity limit from multiple candidates.
    • motion_velocity_smoother: calculates final velocity considering velocity, acceleration, and jerk constraints.
    "},{"location":"design/autoware-architecture/planning/#important-information-in-the-current-implementation","title":"Important information in the current implementation","text":"

    An important difference compared to the high-level design is the \"introduction of the scenario layer\" and the \"clear separation of behavior and motion.\" These are introduced due to current performance and implementation challenges. Whether to define these as part of the high-level design or to improve them as part of the implementation is a matter of discussion.

    "},{"location":"design/autoware-architecture/planning/#introducing-the-scenario-planning-layer","title":"Introducing the Scenario Planning layer","text":"

    There are different requirements for interfaces between driving in well-structured lanes and driving in a free-space area like a parking lot. For example, while Lane Driving can handle routes with map IDs, this is not appropriate for planning in free space. The mechanism that switches planning sub-components at the scenario level (Lane Driving, Parking, etc.) enables a flexible interface design; however, it has the drawback of limiting the reuse of modules across different scenarios.

    "},{"location":"design/autoware-architecture/planning/#separation-of-behavior-and-motion","title":"Separation of Behavior and Motion","text":"

    One of the classic approaches to Planning involves dividing it into \"Behavior\", which decides the action, and \"Motion\", which determines the final movement. However, this separation implies a trade-off with performance, as performance tends to degrade with increasing separation of functions. For example, Behavior needs to make decisions without prior knowledge of the computations that Motion will eventually perform, which generally results in conservative decision-making. On the other hand, if behavior and motion are integrated, motion performance and decision-making become interdependent, creating challenges in terms of expandability, such as when you wish to extend only the decision-making function to follow regional traffic rules.

    To understand this background, this previously discussed document may be useful.

    "},{"location":"design/autoware-architecture/planning/#customize-features-in-the-current-implementation","title":"Customize features in the current implementation","text":"

    While it is possible to add module-level functionalities in the current implementation, a unified interface for all functionalities is not provided. Here is a brief explanation of how functionality can be extended at the module level in the current implementation.

    "},{"location":"design/autoware-architecture/planning/#add-new-modules-in-behavior_velocity_planner-or-behavior_path_planner","title":"Add new modules in behavior_velocity_planner or behavior_path_planner","text":"

    ROS nodes such as behavior_path_planner and behavior_velocity_planner have a module interface available through plugins. By adding modules in accordance with the module interfaces defined in these ROS nodes, dynamic loading/unloading of modules becomes possible. For specific methods of adding modules, please refer to the documentation of each package.

    "},{"location":"design/autoware-architecture/planning/#add-a-new-ros-node-in-the-planning-component","title":"Add a new ros node in the planning component","text":"

    When adding modules in Motion Planning, it is necessary to create the module as a ROS Node and integrate it into the planning component. The current configuration involves adding information to the target trajectory calculated upstream, and the introduction of a ROS Node in this process allows for the expansion of functionality.

    "},{"location":"design/autoware-architecture/planning/#add-or-replace-with-scenarios","title":"Add or replace with scenarios","text":"

    The current implementation has introduced a scenario-level switching logic as a method for collectively switching multiple modules. This allows for the addition of new scenarios (e.g., highway driving).

    By creating a scenario as a ROS node and registering it with the scenario_selector ROS node, the integration is complete. The benefit of this is that you can introduce significant new functionalities without affecting the implementation of other scenarios (like Lane Driving). However, it only allows for scenario-level coordination through scenario switching and does not enable coordination at the existing planning module level.

    "},{"location":"design/autoware-architecture/sensing/","title":"Sensing component design","text":""},{"location":"design/autoware-architecture/sensing/#sensing-component-design","title":"Sensing component design","text":""},{"location":"design/autoware-architecture/sensing/#overview","title":"Overview","text":"

    The Sensing component is a collection of modules that apply some primitive pre-processing to the raw sensor data.

    The sensor input formats are defined in this component.

    "},{"location":"design/autoware-architecture/sensing/#role","title":"Role","text":"
    • Abstraction of data formats to enable usage of sensors from various vendors
    • Perform common/primitive sensor data processing required by each component
    "},{"location":"design/autoware-architecture/sensing/#high-level-architecture","title":"High-level architecture","text":"

    This diagram describes the high-level architecture of the Sensing Component.

    "},{"location":"design/autoware-architecture/sensing/#inputs","title":"Inputs","text":""},{"location":"design/autoware-architecture/sensing/#input-types","title":"Input types","text":"Sensor Data Message Type Point cloud (LiDARs, depth cameras, etc.) sensor_msgs/msg/PointCloud2.msg Image (RGB, monochrome, depth, etc. cameras) sensor_msgs/msg/Image.msg Radar scan radar_msgs/msg/RadarScan.msg Radar tracks radar_msgs/msg/RadarTracks.msg GNSS-INS position sensor_msgs/msg/NavSatFix.msg GNSS-INS orientation autoware_sensing_msgs/GnssInsOrientationStamped.msg GNSS-INS velocity geometry_msgs/msg/TwistWithCovarianceStamped.msg GNSS-INS acceleration geometry_msgs/msg/AccelWithCovarianceStamped.msg IMU sensor_msgs/msg/Imu.msg Ultrasonics sensor_msgs/msg/Range.msg"},{"location":"design/autoware-architecture/sensing/#design-by-data-types","title":"Design by data-types","text":"
    • GNSS/INS data pre-processing design
    • Image pre-processing design
    • Point cloud pre-processing design
    • Radar pointcloud data pre-processing design
    • Radar objects data pre-processing design
    • Ultrasonics data pre-processing design
    "},{"location":"design/autoware-architecture/sensing/data-types/gnss-ins-data/","title":"GNSS/INS data pre-processing design","text":""},{"location":"design/autoware-architecture/sensing/data-types/gnss-ins-data/#gnssins-data-pre-processing-design","title":"GNSS/INS data pre-processing design","text":"

    Warning

    Under Construction

    "},{"location":"design/autoware-architecture/sensing/data-types/image/","title":"Image pre-processing design","text":""},{"location":"design/autoware-architecture/sensing/data-types/image/#image-pre-processing-design","title":"Image pre-processing design","text":"

    Warning

    Under Construction

    "},{"location":"design/autoware-architecture/sensing/data-types/point-cloud/","title":"Point cloud pre-processing design","text":""},{"location":"design/autoware-architecture/sensing/data-types/point-cloud/#point-cloud-pre-processing-design","title":"Point cloud pre-processing design","text":""},{"location":"design/autoware-architecture/sensing/data-types/point-cloud/#overview","title":"Overview","text":"

    Point cloud pre-processing is a collection of modules that apply some primitive pre-processing to the raw sensor data.

    This pipeline covers the flow of data from drivers to the perception stack.

    "},{"location":"design/autoware-architecture/sensing/data-types/point-cloud/#recommended-processing-pipeline","title":"Recommended processing pipeline","text":"
    graph TD\n    Driver[\"Lidar Driver\"] -->|\"Cloud XYZIRCAEDT\"| FilterPR[\"Polygon Remover Filter / CropBox Filter\"]\n\n    subgraph \"sensing\"\n    FilterPR -->|\"Cloud XYZIRCAEDT\"| FilterDC[\"Motion Distortion Corrector Filter\"]\n    FilterDC -->|\"Cloud XYZIRCAEDT\"| FilterOF[\"Outlier Remover Filter\"]\n    FilterOF -->|\"Cloud XYZIRC\"| FilterDS[\"Downsampler Filter\"]\n    FilterDS -->|\"Cloud XYZIRC\"| FilterTrans[\"Cloud Transformer\"]\n    FilterTrans -->|\"Cloud XYZIRC\"| FilterC\n\n    FilterX[\"...\"] -->|\"Cloud XYZIRC (i)\"| FilterC[\"Cloud Concatenator\"]\n    end\n\n    FilterC -->|\"Cloud XYZIRC\"| SegGr[\"Ground Segmentation\"]
    "},{"location":"design/autoware-architecture/sensing/data-types/point-cloud/#list-of-modules","title":"List of modules","text":"

    The modules used here are from the pointcloud_preprocessor package.

    For details about the modules, see the following table.

    It is recommended that these modules are used in a single container as components. For details, see ROS 2 Composition.

    "},{"location":"design/autoware-architecture/sensing/data-types/point-cloud/#point-cloud-fields","title":"Point cloud fields","text":"

    The lidar driver is expected to output a point cloud with the PointXYZIRCAEDT point type.

    name datatype derived description X FLOAT32 false X position Y FLOAT32 false Y position Z FLOAT32 false Z position I (intensity) UINT8 false Measured reflectivity, intensity of the point R (return type) UINT8 false Laser return type for dual return lidars C (channel) UINT16 false Channel ID of the laser that measured the point A (azimuth) FLOAT32 true atan2(Y, X), Horizontal angle from the lidar origin to the point E (elevation) FLOAT32 true atan2(Z, D), Vertical angle from the lidar origin to the point D (distance) FLOAT32 true hypot(X, Y, Z), Euclidean distance from the lidar origin to the point T (time) UINT32 false Nanoseconds passed since the time of the header when this point was measured

    Note

    A (azimuth), E (elevation), and D (distance) fields are derived fields. They are provided by the driver to reduce the computational load on some parts of the perception stack.
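
    For illustration, the derived fields can be reproduced from the XYZ position using the definitions in the table above. The following is a minimal Python sketch (function and variable names are illustrative, not part of any Autoware package):

    import math

    def derived_fields(x: float, y: float, z: float):
        # Compute the derived fields as defined in the table above:
        # D = hypot(X, Y, Z), A = atan2(Y, X), E = atan2(Z, D).
        d = math.hypot(x, y, z)   # Euclidean distance from the lidar origin
        a = math.atan2(y, x)      # horizontal angle from the lidar origin
        e = math.atan2(z, d)      # vertical angle from the lidar origin
        return a, e, d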

    Warning

    Autoware supports conversion from PointXYZI to PointXYZIRC (with channel and return type set to 0) for prototyping purposes. However, this conversion is not recommended for production use since it is not efficient.

    "},{"location":"design/autoware-architecture/sensing/data-types/point-cloud/#intensity","title":"Intensity","text":"

    We will use the following ranges for intensity, compatible with the VLP-16 User Manual:

    Quoting from the VLP-16 User Manual:

    For each laser measurement, a reflectivity byte is returned in addition to distance. Reflectivity byte values are segmented into two ranges, allowing software to distinguish diffuse reflectors (e.g. tree trunks, clothing) in the low range from retroreflectors (e.g. road signs, license plates) in the high range. A retroreflector reflects light back to its source with a minimum of scattering. The VLP-16 provides its own light, with negligible separation between transmitting laser and receiving detector, so retroreflecting surfaces pop with reflected IR light compared to diffuse reflectors that tend to scatter reflected energy.

    • Diffuse reflectors report values from 0 to 100 for reflectivities from 0% to 100%.
    • Retroreflectors report values from 101 to 255, where 255 represents an ideal reflection.

    In a typical point cloud without retroreflectors, all intensity points will be between 0 and 100.

    Retroreflective Gradient road sign, Image Source

    But in a point cloud with retroreflectors, the intensity points will be between 0 and 255.

    "},{"location":"design/autoware-architecture/sensing/data-types/point-cloud/#intensity-mapping-for-other-lidar-brands","title":"Intensity mapping for other lidar brands","text":""},{"location":"design/autoware-architecture/sensing/data-types/point-cloud/#hesai-pandarxt16","title":"Hesai PandarXT16","text":"

    Hesai Pandar XT16 User Manual

    This lidar has 2 modes for reporting reflectivity:

    • Linear mapping
    • Non-linear mapping

    If you are using linear mapping mode, you should map from [0, 255] to [0, 100] when constructing the point cloud.

    If you are using non-linear mapping mode, you should map (hesai to autoware)

    • [0, 251] to [0, 100] and
    • [252, 254] to [101, 255]

    when constructing the point cloud.
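
    As a minimal sketch of such a piecewise mapping (the helper name and the assertions are illustrative, not part of any Autoware package):

    def remap_reflectivity(raw, diffuse_in, retro_in):
        # Map a vendor reflectivity value onto the Autoware convention:
        # diffuse reflectors in [0, 100], retroreflectors in [101, 255].
        lo, hi = diffuse_in
        rlo, rhi = retro_in
        if raw <= hi:
            return round((raw - lo) / (hi - lo) * 100)
        return round(101 + (raw - rlo) / (rhi - rlo) * (255 - 101))

    # Hesai PandarXT16, non-linear mapping mode: [0, 251] -> [0, 100], [252, 254] -> [101, 255]
    assert remap_reflectivity(251, (0, 251), (252, 254)) == 100
    assert remap_reflectivity(254, (0, 251), (252, 254)) == 255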

    "},{"location":"design/autoware-architecture/sensing/data-types/point-cloud/#livox-mid-70","title":"Livox Mid-70","text":"

    Livox Mid-70 User Manual

    This lidar has 2 modes for reporting reflectivity, similar to the Velodyne VLP-16; only the ranges are slightly different.

    You should map (livox to autoware)

    • [0, 150] to [0, 100] and
    • [151, 255] to [101, 255]

    when constructing the point cloud.

    "},{"location":"design/autoware-architecture/sensing/data-types/point-cloud/#robosense-rs-lidar-16","title":"RoboSense RS-LiDAR-16","text":"

    RoboSense RS-LiDAR-16 User Manual

    No mapping required, same as Velodyne VLP-16.

    "},{"location":"design/autoware-architecture/sensing/data-types/point-cloud/#ouster-os-1-64","title":"Ouster OS-1-64","text":"

    Software User Manual v2.0.0 for all Ouster sensors

    In the manual it is stated:

    Reflectivity [16 bit unsigned int] - sensor Signal Photons measurements are scaled based on measured range and sensor sensitivity at that range, providing an indication of target reflectivity. Calibration of this measurement has not currently been rigorously implemented, but this will be updated in a future firmware release.

    So it is advised to map the 16-bit reflectivity to the [0, 100] range.

    "},{"location":"design/autoware-architecture/sensing/data-types/point-cloud/#leishen-ch64w","title":"Leishen CH64W","text":"

    An English user manual could not be obtained; see the link of website.

    In a user manual that was found, it says:

    Byte 7 represents echo strength, and the value range is 0-255. (Echo strength can reflect the energy reflection characteristics of the measured object in the actual measurement environment. Therefore, the echo strength can be used to distinguish objects with different reflection characteristics.)

    So it is advised to map the [0, 255] to [0, 100] range.

    "},{"location":"design/autoware-architecture/sensing/data-types/point-cloud/#return-type","title":"Return type","text":"

    Various lidars support multiple return modes. Velodyne lidars support Strongest and Last return modes.

    In the PointXYZIRC and PointXYZIRCAEDT types, the R field represents the return type with a UINT8. The return type is vendor-specific. The following table provides an example of return type definitions.

    R (return type) Description 0 Unknown / Not Marked 1 Strongest 2 Last"},{"location":"design/autoware-architecture/sensing/data-types/point-cloud/#channel","title":"Channel","text":"

    The channel field is used to identify the vertical channel of the laser that measured the point. In various lidar manuals or literature, it can also be called laser id, ring, laser line.

    For the Velodyne VLP-16, there are 16 channels. The default order of channels in drivers generally follows the firing order.

    In the PointXYZIRC and PointXYZIRCAEDT types, the C field represents the vertical channel ID with a UINT16.

    "},{"location":"design/autoware-architecture/sensing/data-types/point-cloud/#azimuth","title":"Azimuth","text":"

    The azimuth field gives the horizontal angle between the optical origin of the lidar and the point. Many lidars measure this with the angle of the rotary encoder at the time the laser was fired, and the driver typically corrects the value based on calibration data.

    In the PointXYZIRCAEDT type, the A field represents the azimuth angle in radians (clockwise) with a FLOAT32.

    "},{"location":"design/autoware-architecture/sensing/data-types/point-cloud/#elevation","title":"Elevation","text":"

    The elevation field gives the vertical angle between the optical origin of the lidar and the point. In the PointXYZIRCAEDT type, the E field represents the elevation angle in radians (clockwise) with a FLOAT32.

    "},{"location":"design/autoware-architecture/sensing/data-types/point-cloud/#solid-state-and-petal-pattern-lidars","title":"Solid state and petal pattern lidars","text":"

    Warning

    This section is subject to change. Following are suggestions and open for discussion.

    For solid state lidars that have lines, assign row number as the channel id.

    For petal pattern lidars, you can keep channel 0.

    "},{"location":"design/autoware-architecture/sensing/data-types/point-cloud/#time-stamp","title":"Time stamp","text":"

    In lidar point clouds, each point measurement can have its individual time stamp. This information can be used to eliminate the motion blur that is caused by the movement of the lidar during the scan.

    "},{"location":"design/autoware-architecture/sensing/data-types/point-cloud/#point-cloud-header-time","title":"Point cloud header time","text":"

    The header contains a Time field. The time field has 2 components:

    Field Type Description sec int32 Unix time (seconds elapsed since January 1, 1970) nanosec uint32 Nanoseconds elapsed since the sec field

    The header of the point cloud message is expected to have the time of the earliest point it has.

    Note

    The sec field is int32 in ROS 2 Humble. The largest value it can represent is 2^31 - 1 seconds, so it is subject to the year 2038 problem. We will wait for action on the ROS 2 community side.

    More info at: https://github.com/ros2/rcl_interfaces/issues/85

    "},{"location":"design/autoware-architecture/sensing/data-types/point-cloud/#individual-point-time-stamp","title":"Individual point time stamp","text":"

    Each PointXYZIRCAEDT point type has the T field for representing the nanoseconds passed since the first-shot point of the point cloud.

    To calculate the exact time each point was shot, the T nanoseconds are added to the header time.
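
    A minimal sketch of this computation, assuming a builtin_interfaces/msg/Time header stamp and the per-point T value (the function name is illustrative):

    def point_stamp_ns(header_stamp, t_ns: int) -> int:
        # Absolute measurement time of one point, in nanoseconds since the Unix epoch.
        return header_stamp.sec * 1_000_000_000 + header_stamp.nanosec + t_ns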

    Note

    The T field is of uint32 type. The largest value it can represent is 2^32 - 1 nanoseconds, which equates to roughly 4.29 seconds. Typical point clouds don't span more than 100 ms for a full cycle, so this field should be sufficient.

    "},{"location":"design/autoware-architecture/sensing/data-types/ultrasonics-data/","title":"Ultrasonics data pre-processing design","text":""},{"location":"design/autoware-architecture/sensing/data-types/ultrasonics-data/#ultrasonics-data-pre-processing-design","title":"Ultrasonics data pre-processing design","text":"

    Warning

    Under Construction

    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-objects-data/","title":"Radar objects data pre-processing design","text":""},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-objects-data/#radar-objects-data-pre-processing-design","title":"Radar objects data pre-processing design","text":""},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-objects-data/#overview","title":"Overview","text":""},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-objects-data/#pipeline","title":"Pipeline","text":"

    This diagram describes the pre-process pipeline for radar objects.

    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-objects-data/#interface","title":"Interface","text":"
    • Input
      • Radar data from device
      • Twist information of ego vehicle motion
    • Output
      • Merged radar objects information
    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-objects-data/#note","title":"Note","text":"
    • The radar pre-process package filters noise through the ros-perception/radar_msgs/msg/RadarTrack.msg message type in the sensor coordinate frame.
    • It is recommended to change the coordinate system from each sensor to base_link with the message converter.
    • If there are objects from multiple radars, the object merger package concatenates these objects.
    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-objects-data/#input","title":"Input","text":"

    Autoware uses radar_msgs/msg/RadarTracks.msg as the radar objects data type. In detail, please see Data message for radars.

    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-objects-data/#reference-implementations","title":"Reference implementations","text":""},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-objects-data/#device-driver-for-radars","title":"Device driver for radars","text":"

    Autoware supports ros-perception/radar_msgs/msg/RadarScan.msg and autoware_auto_perception_msgs/msg/TrackedObjects.msg for Radar drivers.

    In detail, please see Device driver for radars.

    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-objects-data/#noise-filter","title":"Noise filter","text":"
    • radar_tracks_noise_filter

    Radar can detect x-axis velocity as doppler velocity but cannot detect y-axis velocity. Some radars can estimate y-axis velocity inside the device, but it sometimes lacks precision. This package treats such objects as noise by applying a y-axis velocity threshold filter.
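
    A hedged sketch of such a threshold filter (the threshold value and the field access are illustrative assumptions, not the package's actual parameters):

    Y_VELOCITY_THRESHOLD = 5.0  # m/s; illustrative value, tuned per radar device

    def is_noise(track) -> bool:
        # Treat tracks whose lateral (y-axis) velocity magnitude exceeds the
        # threshold as noise, since radars cannot measure it reliably.
        return abs(track.velocity.y) > Y_VELOCITY_THRESHOLD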

    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-objects-data/#message-converter","title":"Message converter","text":"
    • radar_tracks_msgs_converter

    This package converts radar_msgs/msg/RadarTracks into autoware_auto_perception_msgs/msg/DetectedObject with ego vehicle motion compensation and coordinate transformation.

    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-objects-data/#object-merger","title":"Object merger","text":"
    • object_merger

    This package can merge 2 topics of autoware_auto_perception_msgs/msg/DetectedObject.

    • simple_object_merger

    This package can simply merge multiple topics of autoware_auto_perception_msgs/msg/DetectedObject. Unlike object_merger, this package doesn't use an association algorithm and can merge with low computational cost.

    • topic_tools

    If a vehicle has 1 radar, the topic relay tool in topic_tools can be used. An example is shown below.

    <launch>\n<group>\n<push-ros-namespace namespace=\"radar\"/>\n<group>\n<push-ros-namespace namespace=\"front_center\"/>\n<include file=\"$(find-pkg-share example_launch)/launch/ars408.launch.xml\">\n<arg name=\"interface\" value=\"can0\" />\n</include>\n</group>\n<node pkg=\"topic_tools\" exec=\"relay\" name=\"radar_relay\" output=\"log\" args=\"front_center/detected_objects detected_objects\"/>\n</group>\n</launch>\n
    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-objects-data/#appendix","title":"Appendix","text":""},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-objects-data/#discussion","title":"Discussion","text":"

    Radar architecture design is discussed as below.

    • Discussion 2531
    • Discussion 2532.
    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-pointcloud-data/","title":"Radar pointcloud data pre-processing design","text":""},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-pointcloud-data/#radar-pointcloud-data-pre-processing-design","title":"Radar pointcloud data pre-processing design","text":""},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-pointcloud-data/#overview","title":"Overview","text":""},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-pointcloud-data/#pipeline","title":"Pipeline","text":"

    This diagram describes the pre-process pipeline for radar pointcloud.

    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-pointcloud-data/#interface","title":"Interface","text":"
    • Input
      • Radar data from device
      • Twist information of ego vehicle motion
    • Output
      • Dynamic radar pointcloud (ros-perception/radar_msgs/msg/RadarScan.msg)
      • Noise filtered radar pointcloud (sensor_msgs/msg/PointCloud2.msg)
    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-pointcloud-data/#note","title":"Note","text":"
    • In the sensing layer, the radar pre-process packages filter noise through the ros-perception/radar_msgs/msg/RadarScan.msg message type in the sensor coordinate frame.
    • For use of radar pointcloud data by LiDAR packages, we would like to propose a converter for creating sensor_msgs/msg/PointCloud2.msg from ros-perception/radar_msgs/msg/RadarScan.msg.
    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-pointcloud-data/#reference-implementations","title":"Reference implementations","text":""},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-pointcloud-data/#data-message-for-radars","title":"Data message for radars","text":"

    Autoware uses radar_msgs/msg/RadarScan.msg as the radar pointcloud data type. In detail, please see Data message for radars.

    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-pointcloud-data/#device-driver-for-radars","title":"Device driver for radars","text":"

    Autoware supports ros-perception/radar_msgs/msg/RadarScan.msg and autoware_auto_perception_msgs/msg/TrackedObjects.msg for Radar drivers.

    In detail, please see Device driver for radars.

    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-pointcloud-data/#basic-noise-filter","title":"Basic noise filter","text":"
    • radar_threshold_filter

    This package removes pointcloud noise (low amplitude, edge angles, or points that are too near) by thresholding. The noise depends on the radar device and installation location.

    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-pointcloud-data/#filter-to-staticdynamic-pointcloud","title":"Filter to static/dynamic pointcloud","text":"
    • radar_static_pointcloud_filter

    This package extracts static/dynamic radar pointcloud by using doppler velocity and ego motion. The static radar pointcloud can be used for localization like NDT scan matching, and the dynamic radar pointcloud can be used for dynamic object detection.

    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-pointcloud-data/#message-converter-from-radarscan-to-pointcloud2","title":"Message converter from RadarScan to Pointcloud2","text":"
    • radar_scan_to_pointcloud2

    For convenient use of radar pointcloud within existing LiDAR packages, we suggest a radar_scan_to_pointcloud2_convertor package for conversion from ros-perception/radar_msgs/msg/RadarScan.msg to sensor_msgs/msg/PointCloud2.msg.

    LiDAR package Radar package message sensor_msgs/msg/PointCloud2.msg ros-perception/radar_msgs/msg/RadarScan.msg coordinate (x, y, z) (r, \u03b8, \u03c6) value intensity amplitude, doppler velocity
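
    For illustration, one radar return can be converted from spherical to Cartesian coordinates along the following lines (a sketch assuming angles in radians as defined by radar_msgs; the function name is illustrative):

    import math

    def radar_return_to_xyz(r: float, azimuth: float, elevation: float):
        # Spherical (range, azimuth, elevation) to Cartesian (x, y, z).
        x = r * math.cos(elevation) * math.cos(azimuth)
        y = r * math.cos(elevation) * math.sin(azimuth)
        z = r * math.sin(elevation)
        return x, y, z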

    For considered use cases,

    • Use pointcloud_preprocessor for radar scan.
    • Apply obstacle segmentation like ground segmentation to radar points for LiDAR-less (camera + radar) systems.
    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-pointcloud-data/#appendix","title":"Appendix","text":""},{"location":"design/autoware-architecture/sensing/data-types/radar-data/radar-pointcloud-data/#discussion","title":"Discussion","text":"

    Radar architecture design is discussed as below.

    • Discussion 2531
    • Discussion 2532.
    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/reference-implementations/data-message/","title":"Data message for radars","text":""},{"location":"design/autoware-architecture/sensing/data-types/radar-data/reference-implementations/data-message/#data-message-for-radars","title":"Data message for radars","text":""},{"location":"design/autoware-architecture/sensing/data-types/radar-data/reference-implementations/data-message/#summary","title":"Summary","text":"

    To sum up, Autoware uses the following radar data types:

    • radar_msgs/msg/RadarScan.msg for radar pointcloud
    • radar_msgs/msg/RadarTracks.msg for radar objects.
    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/reference-implementations/data-message/#radar-data-message-for-pointcloud","title":"Radar data message for pointcloud","text":""},{"location":"design/autoware-architecture/sensing/data-types/radar-data/reference-implementations/data-message/#message-definition","title":"Message definition","text":"
    • ros2/msg/RadarScan.msg
    std_msgs/Header header\nradar_msgs/RadarReturn[] returns\n
    • ros2/msg/RadarReturn.msg
    # All variables below are relative to the radar's frame of reference.\n# This message is not meant to be used alone but as part of a stamped or array message.\n\nfloat32 range                            # Distance (m) from the sensor to the detected return.\nfloat32 azimuth                          # Angle (in radians) in the azimuth plane between the sensor and the detected return.\n#    Positive angles are anticlockwise from the sensor and negative angles clockwise from the sensor as per REP-0103.\nfloat32 elevation                        # Angle (in radians) in the elevation plane between the sensor and the detected return.\n#    Negative angles are below the sensor. For 2D radar, this will be 0.\nfloat32 doppler_velocity                 # The doppler speeds (m/s) of the return.\nfloat32 amplitude                        # The amplitude of the of the return (dB)\n
    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/reference-implementations/data-message/#radar-data-message-for-tracked-objects","title":"Radar data message for tracked objects","text":""},{"location":"design/autoware-architecture/sensing/data-types/radar-data/reference-implementations/data-message/#message-definition_1","title":"Message definition","text":"
    • radar_msgs/msg/RadarTrack.msg
    # Object classifications (Additional vendor-specific classifications are permitted starting from 32000 eg. Car)\nuint16 NO_CLASSIFICATION=0\nuint16 STATIC=1\nuint16 DYNAMIC=2\n\nunique_identifier_msgs/UUID uuid            # A unique ID of the object generated by the radar.\n# Note: The z component of these fields is ignored for 2D tracking.\ngeometry_msgs/Point position                # x, y, z coordinates of the centroid of the object being tracked.\ngeometry_msgs/Vector3 velocity              # The velocity of the object in each spatial dimension.\ngeometry_msgs/Vector3 acceleration          # The acceleration of the object in each spatial dimension.\ngeometry_msgs/Vector3 size                  # The object size as represented by the radar sensor eg. length, width, height OR the diameter of an ellipsoid in the x, y, z, dimensions\n# and is from the sensor frame's view.\nuint16 classification                       # An optional classification of the object (see above)\nfloat32[6] position_covariance              # Upper-triangle covariance about the x, y, z axes\nfloat32[6] velocity_covariance              # Upper-triangle covariance about the x, y, z axes\nfloat32[6] acceleration_covariance          # Upper-triangle covariance about the x, y, z axes\nfloat32[6] size_covariance                  # Upper-triangle covariance about the x, y, z axes\n
    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/reference-implementations/data-message/#message-usage-for-radartracks","title":"Message usage for RadarTracks","text":"
    • Object classifications

    In the object classifications of radar_msgs/msg/RadarTrack.msg, additional classification labels can be used with numbers starting from 32000.

    To align with the Autoware label definition, Autoware defines object classifications for RadarTracks.msg as below.

    uint16 UNKNOWN = 32000;\nuint16 CAR = 32001;\nuint16 TRUCK = 32002;\nuint16 BUS = 32003;\nuint16 TRAILER = 32004;\nuint16 MOTORCYCLE = 32005;\nuint16 BICYCLE = 32006;\nuint16 PEDESTRIAN = 32007;\n

    For detail implementation, please see radar_tracks_msgs_converter.
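
    For illustration only, the Autoware-defined constants above can be mapped to human-readable labels as in the following sketch (not the converter's actual implementation):

    RADAR_CLASSIFICATION_LABELS = {
        32000: "UNKNOWN", 32001: "CAR", 32002: "TRUCK", 32003: "BUS",
        32004: "TRAILER", 32005: "MOTORCYCLE", 32006: "BICYCLE", 32007: "PEDESTRIAN",
    }

    def label_of(classification: int) -> str:
        # Anything outside the Autoware-defined range falls back to UNKNOWN.
        return RADAR_CLASSIFICATION_LABELS.get(classification, "UNKNOWN")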

    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/reference-implementations/data-message/#note","title":"Note","text":""},{"location":"design/autoware-architecture/sensing/data-types/radar-data/reference-implementations/data-message/#survey-for-radar-message","title":"Survey for radar message","text":"

    Depending on the sensor manufacturer and its purpose, each sensor might exchange raw or post-processed data. This section introduces a survey of previously developed messaging systems in the open-source community. Although there are many kinds of outputs, radars mainly adopt two output types: pointcloud and objects. Related discussions for message definitions in ros-perception are PR #1, PR #2, and PR #3. Existing open-source software for radar is summarized in these PRs.

    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/reference-implementations/device-driver/","title":"Device driver for radars","text":""},{"location":"design/autoware-architecture/sensing/data-types/radar-data/reference-implementations/device-driver/#device-driver-for-radars","title":"Device driver for radars","text":""},{"location":"design/autoware-architecture/sensing/data-types/radar-data/reference-implementations/device-driver/#interface-for-radar-devices","title":"Interface for radar devices","text":"

    The radar driver converts the communication data from radars to ROS 2 topics. The following communication types are considered:

    • CAN
    • CAN-FD
    • Ethernet

    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/reference-implementations/device-driver/#software-interface","title":"Software Interface","text":"

    Autoware supports ros-perception/radar_msgs/msg/RadarScan.msg and autoware_auto_perception_msgs/msg/TrackedObjects.msg for Radar drivers.

    The ROS driver receives the following data from the radar device:

    • Scan (pointcloud)
    • Tracked objects
    • Diagnostics results

    The ROS driver sends the following data to the radar device:

    • Time synchronization
    • Ego vehicle status, which is often needed for the tracked objects calculation
    "},{"location":"design/autoware-architecture/sensing/data-types/radar-data/reference-implementations/device-driver/#radar-driver-example","title":"Radar driver example","text":"
    • ARS408 driver
    "},{"location":"design/autoware-architecture/vehicle/","title":"Vehicle Interface design","text":""},{"location":"design/autoware-architecture/vehicle/#vehicle-interface-design","title":"Vehicle Interface design","text":""},{"location":"design/autoware-architecture/vehicle/#abstract","title":"Abstract","text":"

    The Vehicle Interface component provides an interface between Autoware and a vehicle that passes control signals to the vehicle\u2019s drive-by-wire system and receives vehicle information that is passed back to Autoware.

    "},{"location":"design/autoware-architecture/vehicle/#1-requirements","title":"1. Requirements","text":"

    Goals:

    • The Vehicle Interface component converts Autoware commands to a vehicle-specific format and converts vehicle status in a vehicle-specific format to Autoware messages.
    • The interface between Autoware and the Vehicle component is abstracted and independent of hardware.
    • The interface is extensible such that additional vehicle-specific commands can be easily added. For example, headlight control.

    Non-goals:

    • Accuracy of responses from the vehicle will not be defined, but example accuracy requirements from reference designs are provided.
    • Response speed will not be defined.
    "},{"location":"design/autoware-architecture/vehicle/#2-architecture","title":"2. Architecture","text":"

    The Vehicle Interface component consists of the following components:

    • A Raw Vehicle Command Converter component that will pass through vehicle commands from the Control component if velocity/acceleration control is supported by the drive-by-wire system. Otherwise, the Control commands will be modified according to the control method (eg: converting a target acceleration from the Control component to a vehicle-specific accel/brake pedal value through the use of an acceleration map; a sketch of such a lookup follows this list)
    • A Vehicle Interface component (vehicle-specific) that acts as an interface between Autoware and a vehicle to communicate control signals and to obtain information about the vehicle (steering output, tyre angle, etc.)
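
    As a minimal illustration of the acceleration-map conversion mentioned above, the following Python sketch interpolates a small (velocity, pedal) table and inverts it to obtain a pedal value for a requested acceleration. The grid values and function names are purely illustrative, not an actual Autoware calibration.

    import numpy as np

    velocities = np.array([0.0, 5.0, 10.0])        # m/s (illustrative grid)
    pedals = np.array([0.0, 0.25, 0.5, 1.0])       # normalized accel pedal positions
    accel_map = np.array([                          # resulting acceleration in m/s^2
        [0.0, 1.0, 2.0, 3.0],
        [0.0, 0.8, 1.8, 2.8],
        [0.0, 0.6, 1.5, 2.5],
    ])

    def pedal_for_target_accel(velocity: float, target_accel: float) -> float:
        # Interpolate the map at the current velocity, then invert it to find the
        # pedal position whose mapped acceleration matches the requested one.
        row = np.array([np.interp(velocity, velocities, accel_map[:, j])
                        for j in range(accel_map.shape[1])])
        return float(np.interp(target_accel, row, pedals))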

    Each component contains static nodes of Autoware, while each module can be dynamically loaded and unloaded (corresponding to C++ classes). The mechanism of the Vehicle Interface component is depicted by the following figures:

    "},{"location":"design/autoware-architecture/vehicle/#3-features","title":"3. Features","text":"

    The Vehicle Interface component can provide the following features in functionality and capability:

    • Basic functions

      • Converting Autoware control commands to vehicle specific command
      • Converting vehicle specific status information (velocity, steering) to Autoware status message
    • Diagnostics
      • List available features
      • Provide a warning if the Control component tries to use a feature that is not available in the Vehicle Interface component

    Additional functionality and capability features may be added, depending on the vehicle hardware. Some example features are listed below:

    • Safety features
      • Disengage autonomous driving via manual intervention.
        • This can be done through the use of an emergency disengage button, or by a safety driver manually turning the steering wheel or pressing the brake
    • Optional controls
      • Turn indicator
      • Handbrake
      • Headlights
      • Hazard lights
      • Doors
      • Horn
      • Wipers
    "},{"location":"design/autoware-architecture/vehicle/#4-interface-and-data-structure","title":"4. Interface and Data Structure","text":"

    The interface of the Vehicle Interface component, through which other components running in the same process space access its functionality and capability, is defined as follows.

    From Control

    • Actuation Command
      • target acceleration, braking, and steering angle

    From Planning

    • Vehicle Specific Commands (optional and a separate message for each type)
      • Shift
      • Door
      • Wiper
      • etc

    From the vehicle

    • Vehicle status messages
      • Vehicle-specific format messages for conversion into Autoware-specific format messages
        • Velocity status
        • Steering status (optional)
        • Shift status (optional)
        • Turn signal status (optional)
        • Actuation status (optional)

    The output interface of the Vehicle Interface component:

    • Vehicle control messages to the vehicle
      • Control signals to drive the vehicle
      • Depends on the vehicle type/protocol, but should include steering and velocity commands at a minimum
    • Vehicle status messages to Autoware
    • Actuation Status
      • Acceleration, brake, steering status
    • Vehicle odometry (output to Localization)
      • Vehicle twist information
    • Control mode
      • Information about whether the vehicle is under autonomous control or manual control
    • Shift status (optional)
      • Vehicle shift status
    • Turn signal status (optional)
      • Vehicle turn signal status

    The data structure for the internal representation of semantics for the objects and trajectories used in the Vehicle Interface component is defined as follows:

    "},{"location":"design/autoware-architecture/vehicle/#5-concerns-assumptions-and-limitations","title":"5. Concerns, Assumptions, and Limitations","text":"

    Concerns

    • Architectural trade-offs and scalability

    Assumptions

    -

    Limitations

    "},{"location":"design/autoware-architecture/vehicle/#6-examples-of-accuracy-requirements-by-odd","title":"6. Examples of accuracy requirements by ODD","text":""},{"location":"design/autoware-concepts/","title":"Autoware concepts","text":""},{"location":"design/autoware-concepts/#autoware-concepts","title":"Autoware concepts","text":"

    Autoware is the world\u2019s first open-source software for autonomous driving systems. Autoware provides value for both technology developers and service operators of autonomous driving systems: technology developers can create new components based on Autoware, while service operators can select the appropriate technology components with Autoware. This is enabled by the microautonomy architecture that modularizes its software stack into the core and universe subsystems (modules).

    "},{"location":"design/autoware-concepts/#microautonomy-architecture","title":"Microautonomy architecture","text":"

    Autoware uses a pipeline architecture to enable the development of autonomous driving systems. The pipeline architecture used in Autoware consists of components similar to a three-layer architecture, and they run in parallel. There are two main modules: the Core and the Universe. The components in these modules are designed to be extensible and reusable, and we call this the microautonomy architecture.

    "},{"location":"design/autoware-concepts/#the-core-module","title":"The Core module","text":"

    The Core module contains basic runtimes and technology components that satisfy the basic functionality and capability of sensing, computing, and actuation required for autonomous driving systems. AWF develops and maintains the Core module with their architects and leading members through their working groups. Anyone can contribute to the Core, but the PR (Pull Request) acceptance criteria are stricter compared to the Universe.

    "},{"location":"design/autoware-concepts/#the-universe-module","title":"The Universe module","text":"

    The Universe modules are extensions to the Core module that can be provided by technology developers to enhance the functionality and capability of sensing, computing, and actuation. AWF provides the base Universe module to extend from. A key feature of the microautonomy architecture is that the Universe modules can be contributed by any organization or individual. That is, you can even create your own Universe module and make it available to the Autoware community and ecosystem. AWF is responsible for quality control of the Universe modules through their development process. As a result, there are multiple types of Universe modules - some are verified and validated by AWF and others are not. It is up to the users of Autoware which Universe modules are selected and integrated to build their end applications.

    "},{"location":"design/autoware-concepts/#interface-design","title":"Interface design","text":"

    The interface design is the most essential piece of the microautonomy architecture, which is classified into internal and external interfaces. The component interface is designed for the components in a Universe module to communicate with those in other modules, including the Core module, within Autoware internally. The AD (Autonomous Driving) API, on the other hand, is designed for the applications of Autoware to access the technology components in the Core and Universe modules of Autoware externally. By designing solid interfaces, the microautonomy architecture is made possible with AWF's partners and, at the same time, is made feasible for the partners.

    "},{"location":"design/autoware-concepts/#challenges","title":"Challenges","text":"

    A grand challenge of the microautonomy architecture is to achieve real-time capability, which guarantees that all the technology components activated in the system predictably meet timing constraints (given deadlines). In general, it is difficult, if not impossible, to tightly estimate the worst-case execution times (WCETs) of components.

    In addition, it is also difficult, if not impossible, to tightly estimate the end-to-end latency of components connected by a DAG. Autonomous driving systems based on the microautonomy architecture, therefore, must be designed to be fail-safe but not never-fail. We accept that the timing constraints may be violated (the given deadlines may be missed) as far as the overrun is taken into account. The overrun handlers are two-fold: (i) platform-defined and (ii) user-defined. The platform-defined handler is implemented as part of the platform by default, while the user-defined handler can overwrite it or add a new handler to the system. This is what we call \u201cfail-safe\u201d on a timely basis.

    "},{"location":"design/autoware-concepts/#requirements-and-roadmap","title":"Requirements and roadmap","text":"

    Goals:

    • All open-source
    • Use case driven
    • Real-time (predictable) framework with overrun handling
    • Code quality
    "},{"location":"design/autoware-concepts/difference-from-ai-and-auto/","title":"How is Autoware Core/Universe different from Autoware.AI and Autoware.Auto?","text":""},{"location":"design/autoware-concepts/difference-from-ai-and-auto/#how-is-autoware-coreuniverse-different-from-autowareai-and-autowareauto","title":"How is Autoware Core/Universe different from Autoware.AI and Autoware.Auto?","text":"

    Autoware is the world's first \"all-in-one\" open-source software for self-driving vehicles. Since it was first released in 2015, there have been multiple releases made with differing underlying concepts, each one aimed at improving the software.

    "},{"location":"design/autoware-concepts/difference-from-ai-and-auto/#autowareai","title":"Autoware.AI","text":"

    Autoware.AI is the first distribution of Autoware that was released based on ROS 1. The repository contains a variety of packages covering different aspects of autonomous driving technologies - sensing, actuation, localization, mapping, perception and planning.

    While it was successful in attracting many developers and contributions, it was difficult to improve Autoware.AI's capabilities for a number of reasons:

    • A lack of concrete architecture design leading to a lot of built-up technical debt, such as tight coupling between modules and unclear module responsibility.
    • Differing coding standards for each package, with very low test coverage.

    Furthermore, there was no clear definition of the conditions under which an Autoware-enabled autonomous vehicle could operate, nor of the use cases or situations supported (eg: the ability to overtake a stationary vehicle).

    From the lessons learned from Autoware.AI development, a different development process was taken for Autoware.Auto to develop a ROS 2 version of Autoware.

    Warning

    Autoware.AI is currently in maintenance mode and will reach end-of-life at the end of 2022.

    "},{"location":"design/autoware-concepts/difference-from-ai-and-auto/#autowareauto","title":"Autoware.Auto","text":"

    Autoware.Auto is the second distribution of Autoware that was released based on ROS 2. As part of the transition to ROS 2, it was decided to avoid simply porting Autoware.AI from ROS 1 to ROS 2. Instead, the codebase was rewritten from scratch with proper engineering practices, including defining target use cases and ODDs (eg: Autonomous Valet Parking [AVP], Cargo Delivery, etc.), designing a proper architecture, writing design documents and test code.

    Autoware.Auto development seemed to work fine initially, but after completing the AVP and Cargo Delivery ODD projects, we started to see the following issues:

    • The barrier to new engineers was too high.
      • A lot of work was required to merge new features into Autoware.Auto, and so it was difficult for researchers and students to contribute to development.
      • As a consequence, most Autoware.Auto developers were from companies in the Autoware Foundation and so there were very few people who were able to add state-of-the-art features from research papers.
    • Making large-scale architecture changes was too difficult.
      • To try out experimental architecture, there was a very large overhead involved in keeping the main branch stable whilst also making sure that every change satisfied the continuous integration requirements.
    "},{"location":"design/autoware-concepts/difference-from-ai-and-auto/#autoware-coreuniverse","title":"Autoware Core/Universe","text":"

    In order to address the issues with Autoware.Auto development, the Autoware Foundation decided to create a new architecture called Autoware Core/Universe.

    Autoware Core carries over the original policy of Autoware.Auto to be a stable and well-tested codebase. Alongside Autoware Core is a new concept called Autoware Universe, which acts as an extension of Autoware Core with the following benefits:

    • Users can easily replace a Core component with a Universe equivalent in order to use more advanced features, such as a new Localization or Perception algorithm.
    • Code quality requirements for Universe are more relaxed to make it easier for new developers, students and researchers to contribute, but will still be stricter than the requirements for Autoware.AI.
    • Any advanced features added to Universe that are useful to the wider Autoware community will be reviewed and considered for potential inclusion in the main Autoware Core codebase.

    This way, the primary requirement of having a stable and safe autonomous driving system can be achieved, whilst simultaneously enabling access to state-of-the-art features created by third-party contributors. For more details about the design of Autoware Core/Universe, refer to the Autoware concepts documentation page.

    "},{"location":"design/autoware-interfaces/","title":"Autoware interface design","text":""},{"location":"design/autoware-interfaces/#autoware-interface-design","title":"Autoware interface design","text":""},{"location":"design/autoware-interfaces/#abstract","title":"Abstract","text":"

    Autoware defines three categories of interfaces. The first one is Autoware AD API for operating the vehicle from outside the autonomous driving system such as the Fleet Management System (FMS) and Human Machine Interface (HMI) for operators or passengers. The second one is Autoware component interface for components to communicate with each other. The last one is the local interface used inside the component.

    "},{"location":"design/autoware-interfaces/#concept","title":"Concept","text":"
    • Applications can operate multiple and various vehicles in a common way.

    • Applications are not affected by version updates and implementation changes.

    • Developers only need to know the interface to add new features and hardware.

    "},{"location":"design/autoware-interfaces/#requirements","title":"Requirements","text":"

    Goals:

    • AD API provides functionality to create the following applications:
      • Drive the vehicle on the route or drive to the requested positions in order.
      • Operate vehicle behavior such as starting and stopping.
      • Display or announce the vehicle status to operators, passengers, and people around.
      • Control vehicle devices such as doors.
      • Monitor the vehicle or drive it manually.
    • AD API provides stable and long-term specifications. This enables unified access to all vehicles.
    • AD API hides differences in version and implementation and absorbs the impact of changes.
    • AD API has a default implementation and can be applied to some simple ODDs with options.
    • The AD API implementation is extensible with the third-party components as long as it meets the specifications.
    • The component interface provides stable and medium-term specifications. This makes it easier to add components.
    • The component interface clarifies the public and private parts of a component and improves maintainability.
    • The component interface is extensible with the third-party design to improve the sub-components' reusability.

    Non-goals:

    • AD API does not cover security. Use it with other reliable methods.
    • The component interface is just a specification, it does not include an implementation.
    "},{"location":"design/autoware-interfaces/#architecture","title":"Architecture","text":"

    The components of Autoware are connected via the component interface. Each component uses the interface to provide functionality and to access other components. AD API implementation is also a component. Since the functional elements required for AD API are defined as the component interface, other components do not need to consider AD API directly. Tools for evaluation and debugging, such as simulators, access both AD API and the component interface.

    The component interface has a hierarchical specification. The top-level architecture consists of some components. Each component has some options of the next-level architecture. Developers select one of them when implementing the component. The simplest next-level architecture is monolithic. This is an all-in-one and black box implementation, and is suitable for small group development, prototyping, and very complex functions. Other options are arbitrary architectures consisting of sub-components, which have advantages for large group development. A sub-component can be combined with others that adopt the same architecture. Third parties can define and publish their own architecture and interface for open source development. It is desirable to propose them for standardization if they are sufficiently evaluated.

    "},{"location":"design/autoware-interfaces/#features","title":"Features","text":""},{"location":"design/autoware-interfaces/#communication-methods","title":"Communication methods","text":"

    As shown in the table below, interfaces are classified into four communication methods to define their behavior. Function Call is a request-response communication and is used for processing that requires immediate results. The others are publish-subscribe communication. Notification is used to process data that changes with some event, typically a callback. Streams handle continuously changing data. Reliable Stream expects all data to arrive without loss, while Realtime Stream expects the latest data to arrive with low delay.

    Communication Method ROS Implementation Optional Implementation Function Call Service HTTP Notification Topic (reliable, transient_local) MQTT (QoS=2, retain) Reliable Stream Topic (reliable, volatile) MQTT (QoS=2) Realtime Stream Topic (best_effort, volatile) MQTT (QoS=0)
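
    For reference, the ROS implementations in the table correspond to QoS settings along the following lines (an rclpy sketch; the depth values are illustrative assumptions, not part of the specification):

    from rclpy.qos import QoSProfile, ReliabilityPolicy, DurabilityPolicy

    # Notification: reliable + transient_local so late subscribers still get the last event.
    NOTIFICATION = QoSProfile(depth=1,
                              reliability=ReliabilityPolicy.RELIABLE,
                              durability=DurabilityPolicy.TRANSIENT_LOCAL)
    # Reliable Stream: all samples are expected to arrive without loss.
    RELIABLE_STREAM = QoSProfile(depth=10,
                                 reliability=ReliabilityPolicy.RELIABLE,
                                 durability=DurabilityPolicy.VOLATILE)
    # Realtime Stream: only the latest sample matters; prefer low delay over completeness.
    REALTIME_STREAM = QoSProfile(depth=1,
                                 reliability=ReliabilityPolicy.BEST_EFFORT,
                                 durability=DurabilityPolicy.VOLATILE)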

    These methods are provided as services or topics of ROS since Autoware is developed using ROS and mainly communicates with its packages. On the other hand, FMS and HMI are often implemented without ROS, and Autoware is also expected to communicate with applications that do not use ROS. It is wasteful for each of these applications to have an adapter for Autoware, and a more suitable means of communication is required. HTTP and MQTT are suggested as additional options because these protocols are widely used and can substitute the behavior of services and topics. In that case, text formats such as JSON, where field names are repeated in an array of objects, are inefficient, and it is necessary to consider serialization.

    "},{"location":"design/autoware-interfaces/#naming-convention","title":"Naming convention","text":"

    The name of the interface must be /<component name>/api/<interface name>, where <component name> is the name of the component. For an AD API component, omit this part and start with /api. The <interface name> is an arbitrary string separated by slashes. Note that this rule causes a restriction that the namespace api must not be used as a name other than AD API and the component interface.

    The following are examples of correct interface names for AD API and the component interface:

    • /api/autoware/state
    • /api/autoware/engage
    • /planning/api/route/set
    • /vehicle/api/status

    The following are examples of incorrect interface names for AD API and the component interface:

    • /ad_api/autoware/state
    • /autoware/engage
    • /planning/route/set/api
    • /vehicle/my_api/status
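
    A hypothetical helper that checks names against this convention could look like the following sketch (the regular expressions and function name are illustrative, not an Autoware utility):

    import re

    _AD_API = re.compile(r"^/api(/[a-z0-9_]+)+$")
    _COMPONENT_API = re.compile(r"^/[a-z0-9_]+/api(/[a-z0-9_]+)+$")

    def is_valid_interface_name(name: str) -> bool:
        # Valid names are either /api/<interface name> or /<component name>/api/<interface name>.
        return bool(_AD_API.match(name) or _COMPONENT_API.match(name))

    assert is_valid_interface_name("/api/autoware/state")
    assert is_valid_interface_name("/planning/api/route/set")
    assert not is_valid_interface_name("/autoware/engage")
    assert not is_valid_interface_name("/vehicle/my_api/status")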
    "},{"location":"design/autoware-interfaces/#logging","title":"Logging","text":"

It is recommended to log the interfaces for later analysis of vehicle behavior. For topics, rosbag is available; for services, use the logger in rclcpp or rclpy. Typically, create a wrapper around the service and client classes that logs whenever a service is called.
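A minimal rclpy sketch of such a wrapper is shown below; the class name and the way it is used are illustrative assumptions, not an existing Autoware utility.

```python
# Minimal sketch of a logging service-client wrapper (names are illustrative).
import rclpy
from rclpy.node import Node

class LoggingClient:
    """Wraps an rclpy client and logs every call and its response."""

    def __init__(self, node: Node, srv_type, srv_name: str):
        self._node = node
        self._name = srv_name
        self._client = node.create_client(srv_type, srv_name)

    def call(self, request):
        self._node.get_logger().info(f'call {self._name}: {request}')
        self._client.wait_for_service()
        future = self._client.call_async(request)
        rclpy.spin_until_future_complete(self._node, future)
        response = future.result()
        self._node.get_logger().info(f'done {self._name}: {response}')
        return response
```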

    "},{"location":"design/autoware-interfaces/#restrictions","title":"Restrictions","text":"

For each API, consider restrictions such as the following and describe them if necessary.

    Services:

    • response time
    • pre-condition
    • post-condition
    • execution order
    • concurrent execution

    Topics:

    • recommended delay range
    • maximum delay
    • recommended frequency range
    • minimum frequency
    • default frequency
    "},{"location":"design/autoware-interfaces/#data-structure","title":"Data structure","text":""},{"location":"design/autoware-interfaces/#data-type-definition","title":"Data type definition","text":"

Do not share types between AD APIs unless they are obviously the same, to avoid changes in one API affecting another. For the same reason, implementation-dependent types, including those of the component interface, should not be used in AD API. In the implementation, either use the AD API type directly, or define an equivalent type and copy the data to convert between them.

    "},{"location":"design/autoware-interfaces/#constants-and-enumeration","title":"Constants and enumeration","text":"

Since ROS does not support enumerations, use constants instead. Default values such as zero or an empty string should not be used to detect that a variable is unassigned; instead, assign a dedicated named constant to indicate that it is undefined. If one type has multiple enumerations, comment on the correspondence between the constants and the fields they apply to. Do not use the numeric values of the constants directly, as the assignments are subject to change when the version is updated.
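The sketch below illustrates the intent with a hypothetical message definition (shown as comments) and Python code that compares against the named constants rather than raw values; none of the names belong to an existing package.

```python
# Hypothetical message definition (not an existing type), for illustration only:
#
#   # ExampleState.msg
#   uint16 UNKNOWN = 0   # dedicated "undefined" value instead of reusing the default
#   uint16 STOPPED = 1
#   uint16 MOVING  = 2
#   uint16 state         # takes one of the constants above
#
# Application code should compare against the named constants, never the raw numbers,
# because the numeric assignments may change between versions.

def describe(msg) -> str:
    if msg.state == msg.UNKNOWN:
        return 'state is not assigned yet'
    if msg.state == msg.STOPPED:
        return 'vehicle is stopped'
    if msg.state == msg.MOVING:
        return 'vehicle is moving'
    return 'unexpected value'
```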

    "},{"location":"design/autoware-interfaces/#time-stamp","title":"Time stamp","text":"

Clarify what the timestamp indicates, for example send time, measurement time, or update time. Consider having multiple timestamps if necessary. Use std_msgs/msg/Header when using ROS transforms. Also consider whether the header is common to all data, independent for each data item, or whether an additional timestamp is required.

    "},{"location":"design/autoware-interfaces/#request-header","title":"Request header","text":"

    Currently, there is no required header.

    "},{"location":"design/autoware-interfaces/#response-status","title":"Response status","text":"

The interfaces whose communication method is Function Call use a common response status to unify the error format. These interfaces should include a ResponseStatus field named status in the response. See autoware_adapi_v1_msgs/msg/ResponseStatus for details.
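Below is a minimal sketch of a service callback filling in such a status field. The callback and its wrapped implementation are illustrative, and the ResponseStatus field names used here (success, message) are an assumption; check autoware_adapi_v1_msgs for the exact definition.

```python
# Sketch: a Function Call interface returning the common response status.
# The ResponseStatus field names (success, message) are assumptions.

def clear_route_impl():
    """Hypothetical implementation; raises RuntimeError on failure."""

def on_clear_route(request, response):
    # Fill the common "status" field that every Function Call response should carry.
    try:
        clear_route_impl()
        response.status.success = True
    except RuntimeError as error:
        response.status.success = False
        response.status.message = str(error)
    return response
```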

    "},{"location":"design/autoware-interfaces/#concerns-assumptions-and-limitations","title":"Concerns, assumptions and limitations","text":"
    • The applications use the version information provided by AD API to check compatibility. Unknown versions are also treated as available as long as the major versions match (excluding major version 0). Compatibility between AD API and the component interface is assumed to be maintained by the version management system.
    • If an unintended behavior of AD API is detected, the application should take appropriate action. Autoware tries to keep working as long as possible, but it is not guaranteed to be safe. Safety should be considered for the entire system, including the applications.
    "},{"location":"design/autoware-interfaces/ad-api/","title":"Autoware AD API","text":""},{"location":"design/autoware-interfaces/ad-api/#autoware-ad-api","title":"Autoware AD API","text":""},{"location":"design/autoware-interfaces/ad-api/#overview","title":"Overview","text":"

    Autoware AD API is the interface for operating the vehicle from outside the autonomous driving system. See here for the overall interface design of Autoware.

    "},{"location":"design/autoware-interfaces/ad-api/#user-stories","title":"User stories","text":"

    The user stories are service scenarios that AD API assumes. AD API is designed based on these scenarios. Each scenario is realized by a combination of use cases described later. If there are scenarios that cannot be covered, please discuss adding a user story.

    • Bus service
    • Taxi service
    "},{"location":"design/autoware-interfaces/ad-api/#use-cases","title":"Use cases","text":"

    Use cases are partial scenarios derived from the user story and generically designed. Service providers can combine these use cases to define user stories and check if AD API can be applied to their own scenarios.

    • Launch and terminate
    • Initialize the pose
    • Change the operation mode
    • Drive to the designated position
    • Get on and get off
    • Vehicle monitoring
    • Vehicle operation
    • System monitoring
    • Manual control
    "},{"location":"design/autoware-interfaces/ad-api/#features","title":"Features","text":"
    • Interface
    • Operation Mode
    • Routing
    • Localization
    • Motion
    • Planning
    • Perception
    • Fail-safe
    • Vehicle status
    • Vehicle doors
    • Cooperation
    • Heartbeat
    • Diagnostics
    • Manual control
    "},{"location":"design/autoware-interfaces/ad-api/release/","title":"Release notes","text":""},{"location":"design/autoware-interfaces/ad-api/release/#release-notes","title":"Release notes","text":""},{"location":"design/autoware-interfaces/ad-api/release/#v160-not-released","title":"v1.6.0 (Not released)","text":"
    • [Change] Fix communication method of /api/vehicle/status
    • [Change] Add options to the routing API
    • [Change] Add restrictions to /api/routing/clear_route
    • [Change] Add restrictions to /api/vehicle/doors/command
    "},{"location":"design/autoware-interfaces/ad-api/release/#v150","title":"v1.5.0","text":"
    • [New] Add /api/routing/change_route_points
    • [New] Add /api/routing/change_route
    "},{"location":"design/autoware-interfaces/ad-api/release/#v140","title":"v1.4.0","text":"
    • [New] Add /api/vehicle/status
    "},{"location":"design/autoware-interfaces/ad-api/release/#v130","title":"v1.3.0","text":"
    • [New] Add heartbeat API
    • [New] Add diagnostics API
    "},{"location":"design/autoware-interfaces/ad-api/release/#v120","title":"v1.2.0","text":"
    • [New] Add vehicle doors API
    • [Change] Add pull-over constants to /api/fail_safe/mrm_state
    "},{"location":"design/autoware-interfaces/ad-api/release/#v110","title":"v1.1.0","text":"
    • [New] Add /api/fail_safe/mrm_state
    • [New] Add /api/vehicle/dimensions
    • [New] Add /api/vehicle/kinematics
    • [Change] Add options to the routing API
    "},{"location":"design/autoware-interfaces/ad-api/release/#v100","title":"v1.0.0","text":"
    • [New] Add interface API
    • [New] Add localization API
    • [New] Add routing API
    • [New] Add operation mode API
    "},{"location":"design/autoware-interfaces/ad-api/features/cooperation/","title":"Cooperation","text":""},{"location":"design/autoware-interfaces/ad-api/features/cooperation/#cooperation","title":"Cooperation","text":""},{"location":"design/autoware-interfaces/ad-api/features/cooperation/#related-api","title":"Related API","text":"
    • /api/planning/velocity_factors
    • /api/planning/steering_factors
    • /api/planning/cooperation/set_commands
    • /api/planning/cooperation/set_policies
    • /api/planning/cooperation/get_policies
    "},{"location":"design/autoware-interfaces/ad-api/features/cooperation/#description","title":"Description","text":"

Request to cooperate (RTC) is a feature that enables a human operator to support decisions in autonomous driving mode. Autoware usually drives the vehicle using its own decisions, but the operator may prefer to make decisions themselves during experiments and in complex situations.

The planning component manages each situation that requires a decision as a scene. Each scene has an ID that does not change until the scene is completed or canceled. The operator can override the decision of the target scene using this ID. In practice, the user interface application can hide the handling of the ID and provide an abstracted interface to the operator.

For example, in the situation in the diagram below, the vehicle is expected to make two lane changes and turn left at the intersection. The planning component therefore generates three scene instances, one for each required action, and each scene instance waits for a decision to be made, in this case \"change or keep the lane\" and \"turn left or wait at the intersection\". Here Autoware decides not to change lanes a second time due to the obstacle, so the vehicle would stop there. However, the operator can overwrite that decision through the RTC function and force the lane change so that the vehicle can reach its goal. Using RTC, the operator can override these decisions to continue driving the vehicle to the goal.

    "},{"location":"design/autoware-interfaces/ad-api/features/cooperation/#architecture","title":"Architecture","text":"

Modules that support RTC have the operator decision and the cooperation policy in addition to the module decision, as shown below. These modules use the merged decision determined from these values when planning vehicle behavior. See the decisions section for details of these values. The cooperation policy is used when there is no operator decision and has a default value set by the system settings. If a module supports RTC, this information is available in the velocity factors or steering factors as the cooperation status.

    "},{"location":"design/autoware-interfaces/ad-api/features/cooperation/#sequence","title":"Sequence","text":"

This is an example sequence that overrides the scene decision to force a lane change. It is for the second scene in the diagram in the architecture section. Here, let's assume the cooperation policy is set to optional; see the decisions section below for details.

    1. A planning module creates a scene instance with unique ID when approaching a place where a lane change is needed.
    2. The scene instance generates the module decision from the current situation. In this case, the module decision is not to do a lane change due to the obstacle.
    3. The scene instance generates the merged decision. At this point, there is no operator decision yet, so it is based on the module decision.
    4. The scene instance plans the vehicle to keep the lane according to the merged decision.
    5. The scene instance sends a cooperation status.
    6. The operator receives the cooperation status.
    7. The operator sends a cooperation command to override the module decision and to do a lane change.
8. The scene instance receives the cooperation command and updates the operator decision.
    9. The scene instance updates the module decision from the current situation.
    10. The scene instance updates the merged decision. It is based on the operator decision received.
    11. The scene instance plans the vehicle to change the lane according to the merged decision.
    "},{"location":"design/autoware-interfaces/ad-api/features/cooperation/#decisions","title":"Decisions","text":"

    The merged decision is determined by the module decision, operator decision, and cooperation policy, each of which takes the value shown in the table below.

| Status | Values |
|---|---|
| merged decision | deactivate, activate |
| module decision | deactivate, activate |
| operator decision | deactivate, activate, autonomous, none |
| cooperation policy | required, optional |

    The meanings of these values are as follows. Note that the cooperation policy is common per module, so changing it will affect all scenes in the same module.

| Value | Description |
|---|---|
| deactivate | An operator/module decision to plan vehicle behavior with priority on safety. |
| activate | An operator/module decision to plan vehicle behavior with priority on driving. |
| autonomous | An operator decision that follows the module decision. |
| none | An initial value for the operator decision, indicating that there is no operator decision yet. |
| required | A policy that requires the operator decision to continue driving. |
| optional | A policy that does not require the operator decision to continue driving. |

    The following flow is how the merged decision is determined.
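The sketch below expresses that flow as plain Python, derived from the value definitions above and the examples table in the next section. It is an illustration of the documented behavior, not the actual Autoware implementation.

```python
# Sketch of the merged-decision flow, derived from the tables in this section.
# Not the actual Autoware implementation.

def merge_decision(module_decision: str, operator_decision: str, policy: str) -> str:
    """Return the merged decision: 'deactivate' or 'activate'."""
    if operator_decision in ('deactivate', 'activate'):
        # An explicit operator decision always wins.
        return operator_decision
    if operator_decision == 'autonomous':
        # The operator chose to follow the module decision.
        return module_decision
    # operator_decision == 'none': fall back to the cooperation policy.
    if policy == 'required':
        # The operator decision is required to continue driving, so stay on the safe side.
        return 'deactivate'
    # policy == 'optional': the module decision is enough.
    return module_decision

assert merge_decision('activate', 'none', 'required') == 'deactivate'
assert merge_decision('activate', 'none', 'optional') == 'activate'
assert merge_decision('deactivate', 'activate', 'optional') == 'activate'
```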

    "},{"location":"design/autoware-interfaces/ad-api/features/cooperation/#examples","title":"Examples","text":"

This is an example of cooperation for the lane change module. The behaviors resulting from each combination of decisions are as follows.

| Operator decision | Policy | Module decision | Description |
|---|---|---|---|
| deactivate | - | - | The operator instructs to keep the lane regardless of the module decision, so the vehicle keeps the lane by the operator decision. |
| activate | - | - | The operator instructs to change the lane regardless of the module decision, so the vehicle changes the lane by the operator decision. |
| autonomous | - | deactivate | The operator instructs to follow the module decision, so the vehicle keeps the lane by the module decision. |
| autonomous | - | activate | The operator instructs to follow the module decision, so the vehicle changes the lane by the module decision. |
| none | required | - | The required policy is used because there is no operator instruction, so the vehicle keeps the lane by the cooperation policy. |
| none | optional | deactivate | The optional policy is used because there is no operator instruction, so the vehicle keeps the lane by the module decision. |
| none | optional | activate | The optional policy is used because there is no operator instruction, so the vehicle changes the lane by the module decision. |

"},{"location":"design/autoware-interfaces/ad-api/features/diagnostics/","title":"Diagnostics","text":""},{"location":"design/autoware-interfaces/ad-api/features/diagnostics/#diagnostics","title":"Diagnostics","text":""},{"location":"design/autoware-interfaces/ad-api/features/diagnostics/#related-api","title":"Related API","text":"
    • /api/system/diagnostics/struct
    • /api/system/diagnostics/status
    "},{"location":"design/autoware-interfaces/ad-api/features/diagnostics/#description","title":"Description","text":"

    This API provides a diagnostic graph consisting of error levels for functional units of Autoware. The system groups functions into arbitrary units depending on configuration and diagnoses their error levels. Each functional unit has dependencies, so the whole looks like a fault tree analysis (FTA). In practice, it becomes a directed acyclic graph (DAG) because multiple parents may share the same child. Below is an example of the diagnostics provided by this API. The path in the diagram is an arbitrary string that describes the functional unit, and the level is its error level. For error level, the same value as diagnostic_msgs/msg/DiagnosticStatus is used.

The diagnostics data has static and dynamic parts, so the API provides them separately for efficiency. Below is an example of a message that corresponds to the above diagram. The static part is published only once as DiagGraphStruct, which contains nodes and links. The links specify dependencies between nodes by index into the array of nodes. The dynamic part is published periodically as DiagGraphStatus. The status has an array of nodes of the same length as the struct, with the same index representing the same functional unit.
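The rclpy sketch below shows the intended usage: cache the struct once and resolve each status entry by index. The field names used here (nodes, path, level) are assumptions based on the description above; check autoware_adapi_v1_msgs for the exact definitions.

```python
# Sketch: correlate the static DiagGraphStruct with the periodic DiagGraphStatus by index.
# Field names (nodes, path, level) are assumptions based on the description above.
from rclpy.node import Node
from rclpy.qos import QoSProfile, ReliabilityPolicy, DurabilityPolicy
from autoware_adapi_v1_msgs.msg import DiagGraphStruct, DiagGraphStatus

class DiagnosticsMonitor(Node):
    def __init__(self):
        super().__init__('diagnostics_monitor')
        self._struct = None
        # struct: notification (reliable, transient_local); status: realtime stream (best_effort).
        struct_qos = QoSProfile(depth=1, reliability=ReliabilityPolicy.RELIABLE,
                                durability=DurabilityPolicy.TRANSIENT_LOCAL)
        status_qos = QoSProfile(depth=1, reliability=ReliabilityPolicy.BEST_EFFORT)
        self.create_subscription(DiagGraphStruct, '/api/system/diagnostics/struct',
                                 self.on_struct, struct_qos)
        self.create_subscription(DiagGraphStatus, '/api/system/diagnostics/status',
                                 self.on_status, status_qos)

    def on_struct(self, msg):
        self._struct = msg  # published only once, so keep it

    def on_status(self, msg):
        if self._struct is None:
            return
        for node_struct, node_status in zip(self._struct.nodes, msg.nodes):
            # The same index in both arrays refers to the same functional unit.
            self.get_logger().info(f'{node_struct.path}: level={node_status.level}')
```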

    "},{"location":"design/autoware-interfaces/ad-api/features/fail-safe/","title":"Fail-safe","text":""},{"location":"design/autoware-interfaces/ad-api/features/fail-safe/#fail-safe","title":"Fail-safe","text":""},{"location":"design/autoware-interfaces/ad-api/features/fail-safe/#related-api","title":"Related API","text":"
    • /api/fail_safe/mrm_state
    "},{"location":"design/autoware-interfaces/ad-api/features/fail-safe/#description","title":"Description","text":"

This API manages the behavior related to vehicle abnormalities. It provides the state of Request to Intervene (RTI), Minimal Risk Maneuver (MRM) and Minimal Risk Condition (MRC). As shown below, Autoware has a gate to switch between the command used during normal operation and the command used during abnormal operation. For safety, Autoware switches the operation to MRM when an abnormality is detected. Since the required behavior differs depending on the situation, MRM is implemented in various places, either as a specific mode in a normal module or as an independent module. The fail-safe module selects the MRM behavior according to the abnormality and switches the gate output to that command.

    "},{"location":"design/autoware-interfaces/ad-api/features/fail-safe/#states","title":"States","text":"

    The MRM state indicates whether MRM is operating. This state also provides success or failure. Generally, MRM will switch to another behavior if it fails.

| State | Description |
|---|---|
| NONE | MRM is not operating. |
| OPERATING | MRM is operating because an abnormality has been detected. |
| SUCCEEDED | MRM succeeded. The vehicle is in a safe condition. |
| FAILED | MRM failed. The vehicle is still in an unsafe condition. |

"},{"location":"design/autoware-interfaces/ad-api/features/fail-safe/#behavior","title":"Behavior","text":"

There is a dependency between MRM behaviors. For example, it switches from a comfortable stop to an emergency stop, but not the other way around. This is service dependent. Autoware supports the following transitions by default.

| State | Description |
|---|---|
| NONE | MRM is not operating, or is operating but no special behavior is required. |
| COMFORTABLE_STOP | The vehicle will stop quickly with a comfortable deceleration. |
| EMERGENCY_STOP | The vehicle will stop immediately with as much deceleration as possible. |
| PULL_OVER | The vehicle will stop after moving to the side of the road. |

"},{"location":"design/autoware-interfaces/ad-api/features/heartbeat/","title":"Heartbeat","text":""},{"location":"design/autoware-interfaces/ad-api/features/heartbeat/#heartbeat","title":"Heartbeat","text":""},{"location":"design/autoware-interfaces/ad-api/features/heartbeat/#related-api","title":"Related API","text":"
    • /api/system/heartbeat
    "},{"location":"design/autoware-interfaces/ad-api/features/heartbeat/#description","title":"Description","text":"

This API is used to check whether applications and Autoware are communicating properly. The message contains a timestamp and a sequence number to check for communication delays, ordering, and losses.
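A minimal rclpy sketch of such a check is shown below. The message type path and the field names (stamp, seq) are assumptions based on the description above; check the AD API message definitions for the exact names.

```python
# Sketch: detect losses and delay on /api/system/heartbeat.
# Message type and field names (stamp, seq) are assumptions based on the description above.
from rclpy.node import Node
from rclpy.qos import QoSProfile, ReliabilityPolicy
from rclpy.time import Time
from autoware_adapi_v1_msgs.msg import Heartbeat

class HeartbeatMonitor(Node):
    def __init__(self):
        super().__init__('heartbeat_monitor')
        self._last_seq = None
        qos = QoSProfile(depth=1, reliability=ReliabilityPolicy.BEST_EFFORT)  # realtime stream
        self.create_subscription(Heartbeat, '/api/system/heartbeat', self.on_heartbeat, qos)

    def on_heartbeat(self, msg):
        # Delay: difference between the local receive time and the sender's timestamp.
        delay = self.get_clock().now() - Time.from_msg(msg.stamp)
        # Loss/order: sequence numbers should increase by exactly one.
        if self._last_seq is not None and msg.seq != self._last_seq + 1:
            self.get_logger().warn(f'heartbeat gap: {self._last_seq} -> {msg.seq}')
        self._last_seq = msg.seq
        self.get_logger().info(f'delay: {delay.nanoseconds / 1e9:.3f} s')
```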

    "},{"location":"design/autoware-interfaces/ad-api/features/interface/","title":"Interface","text":""},{"location":"design/autoware-interfaces/ad-api/features/interface/#interface","title":"Interface","text":""},{"location":"design/autoware-interfaces/ad-api/features/interface/#related-api","title":"Related API","text":"
    • /api/interface/version
    "},{"location":"design/autoware-interfaces/ad-api/features/interface/#description","title":"Description","text":"

    This API provides the interface version of the set of AD APIs. It follows Semantic Versioning in order to provide an intuitive understanding of the changes between versions.
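The rclpy sketch below queries this API and applies the compatibility rule described in the concerns section (an unknown version is usable as long as the major versions match, excluding major version 0). The expected major version is an application-specific assumption.

```python
# Sketch: query /api/interface/version and apply the documented compatibility rule.
import rclpy
from rclpy.node import Node
from autoware_adapi_version_msgs.srv import InterfaceVersion

EXPECTED_MAJOR = 1  # the AD API major version this hypothetical application was built against

def check_compatibility(node: Node) -> bool:
    client = node.create_client(InterfaceVersion, '/api/interface/version')
    client.wait_for_service()
    future = client.call_async(InterfaceVersion.Request())
    rclpy.spin_until_future_complete(node, future)
    version = future.result()
    node.get_logger().info(f'AD API version: {version.major}.{version.minor}.{version.patch}')
    # Same major version means compatible, except for major version 0.
    return version.major == EXPECTED_MAJOR and version.major != 0
```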

    "},{"location":"design/autoware-interfaces/ad-api/features/localization/","title":"Localization","text":""},{"location":"design/autoware-interfaces/ad-api/features/localization/#localization","title":"Localization","text":""},{"location":"design/autoware-interfaces/ad-api/features/localization/#related-api","title":"Related API","text":"
    • /api/localization/initialization_state
    • /api/localization/initialize
    "},{"location":"design/autoware-interfaces/ad-api/features/localization/#description","title":"Description","text":"

    This API manages the initialization of localization. Autoware requires a global pose as the initial guess for localization.

    "},{"location":"design/autoware-interfaces/ad-api/features/localization/#states","title":"States","text":"State Description UNINITIALIZED Localization is not initialized. Waiting for a global pose as the initial guess. INITIALIZING Localization is initializing. INITIALIZED Localization is initialized. Initialization can be requested again if necessary."},{"location":"design/autoware-interfaces/ad-api/features/manual-control/","title":"Manual control","text":""},{"location":"design/autoware-interfaces/ad-api/features/manual-control/#manual-control","title":"Manual control","text":""},{"location":"design/autoware-interfaces/ad-api/features/manual-control/#related-api","title":"Related API","text":"
    • /api/remote/control_mode/list
    • /api/remote/control_mode/select
    • /api/remote/control_mode/status
    • /api/remote/operator/status
    • /api/remote/command/pedal
    • /api/remote/command/acceleration
    • /api/remote/command/steering
    • /api/remote/command/gear
    • /api/remote/command/turn_indicators
    • /api/remote/command/hazard_lights
    • /api/local/control_mode/list
    • /api/local/control_mode/select
    • /api/local/control_mode/status
    • /api/local/operator/status
    • /api/local/command/pedal
    • /api/local/command/acceleration
    • /api/local/command/steering
    • /api/local/command/gear
    • /api/local/command/turn_indicators
    • /api/local/command/hazard_lights
    "},{"location":"design/autoware-interfaces/ad-api/features/manual-control/#description","title":"Description","text":"

    This API is used to manually control the vehicle, and provides the same interface for different operators: remote and local. For example, the local operator controls a vehicle without a driver's seat using a joystick, while the remote operator provides remote support when problems occur with autonomous driving. The command sent will be used when operation mode is remote or local.

    "},{"location":"design/autoware-interfaces/ad-api/features/manual-control/#operator-status","title":"Operator status","text":"

    The application needs to determine whether the operator is able to drive and send that information via the operator status API. If the operator is unable to continue driving during manual operation, Autoware will perform MRM to bring the vehicle to a safe state. For level 3 and below, the operator status is referenced even during autonomous driving.
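A minimal rclpy sketch of an application periodically reporting the local operator status is shown below. How the readiness is determined is application specific and only stubbed here; the publishing frequency is also an assumption.

```python
# Sketch: periodically publish whether the local operator is able to drive.
from rclpy.node import Node
from rclpy.qos import QoSProfile, ReliabilityPolicy
from autoware_adapi_v1_msgs.msg import ManualOperatorStatus

def operator_is_ready() -> bool:
    """Hypothetical check, e.g. a dead man's switch on the joystick."""
    return True

class OperatorStatusReporter(Node):
    def __init__(self):
        super().__init__('operator_status_reporter')
        qos = QoSProfile(depth=1, reliability=ReliabilityPolicy.BEST_EFFORT)  # realtime stream
        self._pub = self.create_publisher(ManualOperatorStatus, '/api/local/operator/status', qos)
        self.create_timer(0.1, self.publish_status)  # 10 Hz is an assumption

    def publish_status(self):
        msg = ManualOperatorStatus()
        msg.stamp = self.get_clock().now().to_msg()
        msg.ready = operator_is_ready()
        self._pub.publish(msg)
```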

    "},{"location":"design/autoware-interfaces/ad-api/features/manual-control/#control-mode","title":"Control mode","text":"

    Since there are multiple ways to control a vehicle, such as pedals or acceleration, the application must first select a control mode.

| Mode | Description |
|---|---|
| disabled | This is the initial mode. When selected, all command APIs are unavailable. |
| pedal | This mode provides longitudinal control using the pedals. |
| acceleration | This mode provides longitudinal control using the target acceleration. |

"},{"location":"design/autoware-interfaces/ad-api/features/manual-control/#commands","title":"Commands","text":"

    The commands available in each mode are as follows.

| Command | disabled | pedal | acceleration |
|---|---|---|---|
| pedal | - | ✓ | - |
| acceleration | - | - | ✓ |
| steering | - | ✓ | ✓ |
| gear | - | ✓ | ✓ |
| turn_indicators | - | ✓ | ✓ |
| hazard_lights | - | ✓ | ✓ |

"},{"location":"design/autoware-interfaces/ad-api/features/motion/","title":"Motion","text":""},{"location":"design/autoware-interfaces/ad-api/features/motion/#motion","title":"Motion","text":""},{"location":"design/autoware-interfaces/ad-api/features/motion/#related-api","title":"Related API","text":"
    • /api/motion/state
    • /api/motion/accept_start
    "},{"location":"design/autoware-interfaces/ad-api/features/motion/#description","title":"Description","text":"

This API manages the current behavior of the vehicle. Applications can use it to notify people around the vehicle of its behavior and to visualize it for the operator and passengers.

    "},{"location":"design/autoware-interfaces/ad-api/features/motion/#states","title":"States","text":"

The motion state manages the stop and start of the vehicle. Once the vehicle has stopped, the state will be STOPPED. After this, when the vehicle tries to start (while still stopped), the state becomes STARTING. In this state, calling the accept start API changes the state to MOVING and the vehicle starts. This mechanism allows processing such as announcements to be added before the vehicle starts. Depending on the configuration, the state may transition directly from STOPPED to MOVING.

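The rclpy sketch below illustrates this flow: watch the motion state and, after any application-specific processing such as an announcement, call the accept start API when the state is STARTING. The constant name MotionState.STARTING is an assumption based on the state names listed below.

```python
# Sketch: accept the start after the vehicle enters the STARTING state.
# The constant name MotionState.STARTING is an assumption based on the state names below.
from rclpy.node import Node
from rclpy.qos import QoSProfile, ReliabilityPolicy, DurabilityPolicy
from autoware_adapi_v1_msgs.msg import MotionState
from autoware_adapi_v1_msgs.srv import AcceptStart

class StartAnnouncer(Node):
    def __init__(self):
        super().__init__('start_announcer')
        qos = QoSProfile(depth=1, reliability=ReliabilityPolicy.RELIABLE,
                         durability=DurabilityPolicy.TRANSIENT_LOCAL)  # notification
        self.create_subscription(MotionState, '/api/motion/state', self.on_state, qos)
        self._client = self.create_client(AcceptStart, '/api/motion/accept_start')

    def on_state(self, msg):
        if msg.state == MotionState.STARTING:
            self.announce_departure()                       # application-specific processing
            self._client.call_async(AcceptStart.Request())  # then allow the vehicle to move

    def announce_departure(self):
        self.get_logger().info('vehicle is about to move')
```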
| State | Description |
|---|---|
| STOPPED | The vehicle is stopped. |
| STARTING | The vehicle is stopped, but is trying to start. |
| MOVING | The vehicle is moving. |
| BRAKING (T.B.D.) | The vehicle is decelerating strongly. |

"},{"location":"design/autoware-interfaces/ad-api/features/operation_mode/","title":"Operation mode","text":""},{"location":"design/autoware-interfaces/ad-api/features/operation_mode/#operation-mode","title":"Operation mode","text":""},{"location":"design/autoware-interfaces/ad-api/features/operation_mode/#related-api","title":"Related API","text":"
    • /api/operation_mode/state
    • /api/operation_mode/change_to_autonomous
    • /api/operation_mode/change_to_stop
    • /api/operation_mode/change_to_local
    • /api/operation_mode/change_to_remote
    • /api/operation_mode/enable_autoware_control
    • /api/operation_mode/disable_autoware_control
    "},{"location":"design/autoware-interfaces/ad-api/features/operation_mode/#description","title":"Description","text":"

    As shown below, Autoware assumes that the vehicle interface has two modes, Autoware control and direct control. In direct control mode, the vehicle is operated using devices such as steering and pedals. If the vehicle does not support direct control mode, it is always treated as Autoware control mode. Autoware control mode has four operation modes.

| Mode | Description |
|---|---|
| Stop | Keep the vehicle stopped. |
| Autonomous | Autonomously control the vehicle. |
| Local | Manually control the vehicle from nearby with some device such as a joystick. |
| Remote | Manually control the vehicle from a web application on the cloud. |

    "},{"location":"design/autoware-interfaces/ad-api/features/operation_mode/#states","title":"States","text":""},{"location":"design/autoware-interfaces/ad-api/features/operation_mode/#autoware-control-flag","title":"Autoware control flag","text":"

    The flag is_autoware_control_enabled indicates if the vehicle is controlled by Autoware. The enable and disable APIs can be used if the control can be switched by software. These APIs will always fail if the vehicle does not support mode switching or is switched by hardware.

    "},{"location":"design/autoware-interfaces/ad-api/features/operation_mode/#operation-mode-and-change-flags","title":"Operation mode and change flags","text":"

    The state operation_mode indicates what command is used when Autoware control is enabled. The flags change_to_* can be used to check if it is possible to transition to each mode.

    "},{"location":"design/autoware-interfaces/ad-api/features/operation_mode/#transition-flag","title":"Transition flag","text":"

Autoware may not be able to guarantee safety immediately after a mode change, for example when switching to autonomous mode while overspeeding. The flag is_in_transition exists for this situation and is true while the mode is changing. The operator who changed the mode should ensure safety while this flag is true. The flag becomes false when the mode change is complete.

    "},{"location":"design/autoware-interfaces/ad-api/features/perception/","title":"Perception","text":""},{"location":"design/autoware-interfaces/ad-api/features/perception/#perception","title":"Perception","text":""},{"location":"design/autoware-interfaces/ad-api/features/perception/#related-api","title":"Related API","text":"
    • /api/perception/objects
    "},{"location":"design/autoware-interfaces/ad-api/features/perception/#description","title":"Description","text":"

API for perception-related topics.

    "},{"location":"design/autoware-interfaces/ad-api/features/planning-factors/","title":"Planning factors","text":""},{"location":"design/autoware-interfaces/ad-api/features/planning-factors/#planning-factors","title":"Planning factors","text":""},{"location":"design/autoware-interfaces/ad-api/features/planning-factors/#related-api","title":"Related API","text":"
    • /api/planning/velocity_factors
    • /api/planning/steering_factors
    "},{"location":"design/autoware-interfaces/ad-api/features/planning-factors/#description","title":"Description","text":"

This API manages the planned behavior of the vehicle. Applications can use it to notify people around the vehicle of its behavior and to visualize it for the operator and passengers.

    "},{"location":"design/autoware-interfaces/ad-api/features/planning-factors/#velocity-factors","title":"Velocity factors","text":"

The velocity factors are an array of information about behaviors that cause the vehicle to stop or slow down. Each factor has a behavior type, described below. Some behavior types have a sequence and details as additional information.

| Behavior | Description |
|---|---|
| surrounding-obstacle | There are obstacles immediately around the vehicle. |
| route-obstacle | There are obstacles along the route ahead. |
| intersection | There are obstacles in other lanes in the path. |
| crosswalk | There are obstacles on the crosswalk. |
| rear-check | There are obstacles behind that would be in a human driver's blind spot. |
| user-defined-attention-area | There are obstacles in the predefined attention area. |
| no-stopping-area | There is not enough space beyond the no stopping area. |
| stop-sign | A stop by a stop sign. |
| traffic-signal | A stop by a traffic signal. |
| v2x-gate-area | A stop by a gate area. It has enter and leave as sequences and v2x type as details. |
| merge | A stop before merging lanes. |
| sidewalk | A stop before crossing the sidewalk. |
| lane-change | A lane change. |
| avoidance | A path change to avoid an obstacle in the current lane. |
| emergency-operation | A stop by emergency instruction from the operator. |

Each factor also provides a status, a pose in the base link frame, and the distance to that pose. As the vehicle approaches the stop position, the factor appears with a status of APPROACHING, and when the vehicle reaches that position and stops, the status becomes STOPPED. The pose indicates the stop position, or the base link if the stop position cannot be calculated.

    "},{"location":"design/autoware-interfaces/ad-api/features/planning-factors/#steering-factors","title":"Steering factors","text":"

The steering factors are an array of information about maneuvers that require the use of turn indicators, such as turning left or right. Each factor has a behavior type, described below, and a steering direction. Some behavior types have a sequence and details as additional information.

| Behavior | Description |
|---|---|
| intersection | A left or right turn at an intersection. |
| lane-change | A lane change. |
| avoidance | A path change to avoid an obstacle. It has a sequence of change and return. |
| start-planner | T.B.D. |
| goal-planner | T.B.D. |
| emergency-operation | A path change by emergency instruction from the operator. |

Each factor also provides a status, poses in the base link frame, and the distances to those poses. As the vehicle approaches the position to start steering, the factor appears with a status of APPROACHING, and when the vehicle reaches that position, the status becomes TURNING. The poses indicate the start and end positions of the section where the status is TURNING.

In cases such as lane change and avoidance, the vehicle will start steering at some position within that range depending on the situation. For these types, the section where the status is TURNING is updated dynamically and the poses follow it.

    "},{"location":"design/autoware-interfaces/ad-api/features/routing/","title":"Routing","text":""},{"location":"design/autoware-interfaces/ad-api/features/routing/#routing","title":"Routing","text":""},{"location":"design/autoware-interfaces/ad-api/features/routing/#related-api","title":"Related API","text":"
    • /api/routing/state
    • /api/routing/route
    • /api/routing/clear_route
    • /api/routing/set_route_points
    • /api/routing/set_route
    • /api/routing/change_route_points
    • /api/routing/change_route
    "},{"location":"design/autoware-interfaces/ad-api/features/routing/#description","title":"Description","text":"

This API manages the destination and waypoints. Note that waypoints are not stops, just points to pass through. In other words, Autoware does not support a route with multiple stops; the application needs to split such a route and switch between the parts. There are two ways to set a route: one is a generic method that uses poses, the other is map-dependent.

    "},{"location":"design/autoware-interfaces/ad-api/features/routing/#states","title":"States","text":"State Description UNSET The route is not set. Waiting for a route request. SET The route is set. ARRIVED The vehicle has arrived at the destination. CHANGING Trying to change the route."},{"location":"design/autoware-interfaces/ad-api/features/routing/#options","title":"Options","text":"

    The route set and change APIs have route options that allow applications to choose several behaviors regarding route planning. See the sections below for supported options and details.

    "},{"location":"design/autoware-interfaces/ad-api/features/routing/#allow_goal_modification","title":"allow_goal_modification","text":"

[v1.1.0] Autoware tries to look for an alternate goal when the goal is unreachable (e.g., when there is an obstacle on the given goal). When setting a route from the API, applications can choose whether to allow Autoware to adjust the goal pose in such a situation. When set to false, Autoware may get stuck until the given goal becomes reachable.
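For illustration, the rclpy sketch below sets a route with this option enabled. The SetRoutePoints field layout used here (header, goal, option.allow_goal_modification) is an assumption; check autoware_adapi_v1_msgs for the exact definition.

```python
# Sketch: set a route while allowing Autoware to adjust an unreachable goal.
# The SetRoutePoints field layout (header, goal, option) is an assumption.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Pose
from autoware_adapi_v1_msgs.srv import SetRoutePoints

def set_route_with_goal_modification(node: Node, goal: Pose):
    client = node.create_client(SetRoutePoints, '/api/routing/set_route_points')
    client.wait_for_service()
    request = SetRoutePoints.Request()
    request.header.frame_id = 'map'
    request.header.stamp = node.get_clock().now().to_msg()
    request.goal = goal
    request.option.allow_goal_modification = True  # let Autoware search for an alternate goal
    future = client.call_async(request)
    rclpy.spin_until_future_complete(node, future)
    return future.result()
```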

    "},{"location":"design/autoware-interfaces/ad-api/features/routing/#allow_while_using_route","title":"allow_while_using_route","text":"

[v1.6.0] This option only affects the route change APIs. When enabled, Autoware accepts a new route even while the vehicle is using the current route; the APIs fail if it cannot safely transition to the new route. When set to false, the APIs always fail while the vehicle is using the route.

    "},{"location":"design/autoware-interfaces/ad-api/features/vehicle-doors/","title":"Vehicle doors","text":""},{"location":"design/autoware-interfaces/ad-api/features/vehicle-doors/#vehicle-doors","title":"Vehicle doors","text":""},{"location":"design/autoware-interfaces/ad-api/features/vehicle-doors/#related-api","title":"Related API","text":"
    • /api/vehicle/doors/layout
    • /api/vehicle/doors/status
    • /api/vehicle/doors/command
    "},{"location":"design/autoware-interfaces/ad-api/features/vehicle-doors/#description","title":"Description","text":"

    This feature is available if the vehicle provides a software interface for the doors. It can be used to create user interfaces for passengers or to control sequences at bus stops.

    "},{"location":"design/autoware-interfaces/ad-api/features/vehicle-doors/#layout","title":"Layout","text":"

Each door in a vehicle is assigned an array index. This assignment is vehicle dependent. The layout API returns this information. The description field is a string to display in the user interface, etc. This is an arbitrary string and is not recommended for use in application processing. Use the roles field to identify the doors used for getting on and off. Below is an example of the information returned by the layout API.

| Index | Description | Roles |
|---|---|---|
| 0 | front right | - |
| 1 | front left | GET_ON |
| 2 | rear right | GET_OFF |
| 3 | rear left | GET_ON, GET_OFF |

"},{"location":"design/autoware-interfaces/ad-api/features/vehicle-doors/#status","title":"Status","text":"

    The status API provides an array of door status. This array order is consistent with the layout API.

    "},{"location":"design/autoware-interfaces/ad-api/features/vehicle-doors/#control","title":"Control","text":"

Use the command API to control the doors. Unlike the status and layout APIs, the array index does not correspond to a door; the command has a field to specify the target door index.

    "},{"location":"design/autoware-interfaces/ad-api/features/vehicle-status/","title":"Vehicle status","text":""},{"location":"design/autoware-interfaces/ad-api/features/vehicle-status/#vehicle-status","title":"Vehicle status","text":""},{"location":"design/autoware-interfaces/ad-api/features/vehicle-status/#related-api","title":"Related API","text":"
    • /api/vehicle/kinematics
    • /api/vehicle/status
    • /api/vehicle/dimensions
    "},{"location":"design/autoware-interfaces/ad-api/features/vehicle-status/#kinematics","title":"Kinematics","text":"

This is an estimate of the vehicle kinematics. The vehicle position is necessary for applications to schedule dispatches. Also, using velocity and acceleration, applications can find vehicles that need operator assistance, such as those that are stuck or braking suddenly.

    "},{"location":"design/autoware-interfaces/ad-api/features/vehicle-status/#status","title":"Status","text":"

This is the status provided by the vehicle. The indicators and steering are mainly used for visualization and remote control. The remaining energy can also be used for vehicle scheduling.

    "},{"location":"design/autoware-interfaces/ad-api/features/vehicle-status/#dimensions","title":"Dimensions","text":"

    The vehicle dimensions are used to know the actual distance between the vehicle and objects because the vehicle position in kinematics is the coordinates of the base link. This is necessary for visualization when supporting vehicles remotely.

    "},{"location":"design/autoware-interfaces/ad-api/list/","title":"List of Autoware AD API","text":""},{"location":"design/autoware-interfaces/ad-api/list/#list-of-autoware-ad-api","title":"List of Autoware AD API","text":"API Release Method /api/fail_safe/mrm_state v1.1.0 notification /api/interface/version v1.0.0 function call /api/local/command/acceleration not released realtime stream /api/local/command/gear not released notification /api/local/command/hazard_lights not released notification /api/local/command/pedal not released realtime stream /api/local/command/steering not released realtime stream /api/local/command/turn_indicators not released notification /api/local/control_mode/list not released function call /api/local/control_mode/select not released function call /api/local/control_mode/status not released notification /api/local/operator/status not released realtime stream /api/localization/initialization_state v1.0.0 notification /api/localization/initialize v1.0.0 function call /api/motion/accept_start not released function call /api/motion/state not released notification /api/operation_mode/change_to_autonomous v1.0.0 function call /api/operation_mode/change_to_local v1.0.0 function call /api/operation_mode/change_to_remote v1.0.0 function call /api/operation_mode/change_to_stop v1.0.0 function call /api/operation_mode/disable_autoware_control v1.0.0 function call /api/operation_mode/enable_autoware_control v1.0.0 function call /api/operation_mode/state v1.0.0 notification /api/perception/objects not released realtime stream /api/planning/cooperation/get_policies not released function call /api/planning/cooperation/set_commands not released function call /api/planning/cooperation/set_policies not released function call /api/planning/steering_factors not released realtime stream /api/planning/velocity_factors not released realtime stream /api/remote/command/acceleration not released realtime stream /api/remote/command/gear not released notification /api/remote/command/hazard_lights not released notification /api/remote/command/pedal not released realtime stream /api/remote/command/steering not released realtime stream /api/remote/command/turn_indicators not released notification /api/remote/control_mode/list not released function call /api/remote/control_mode/select not released function call /api/remote/control_mode/status not released notification /api/remote/operator/status not released realtime stream /api/routing/change_route v1.5.0 function call /api/routing/change_route_points v1.5.0 function call /api/routing/clear_route v1.0.0 function call /api/routing/route v1.0.0 notification /api/routing/set_route v1.0.0 function call /api/routing/set_route_points v1.0.0 function call /api/routing/state v1.0.0 notification /api/system/diagnostics/status v1.3.0 realtime stream /api/system/diagnostics/struct v1.3.0 notification /api/system/heartbeat v1.3.0 realtime stream /api/vehicle/dimensions v1.1.0 function call /api/vehicle/doors/command v1.2.0 function call /api/vehicle/doors/layout v1.2.0 function call /api/vehicle/doors/status v1.2.0 notification /api/vehicle/kinematics v1.1.0 realtime stream /api/vehicle/status v1.4.0 realtime 
stream"},{"location":"design/autoware-interfaces/ad-api/list/api/fail_safe/mrm_state/","title":"/api/fail_safe/mrm_state","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/fail_safe/mrm_state/#apifail_safemrm_state","title":"/api/fail_safe/mrm_state","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/fail_safe/mrm_state/#status","title":"Status","text":"
    • Latest Version: v1.1.0
    • Method: notification
    • Data Type: autoware_adapi_v1_msgs/msg/MrmState
    "},{"location":"design/autoware-interfaces/ad-api/list/api/fail_safe/mrm_state/#description","title":"Description","text":"

    Get the MRM state. For details, see the fail-safe.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/fail_safe/mrm_state/#message","title":"Message","text":"Name Type Description state uint16 The state of MRM operation. behavior uint16 The currently selected behavior of MRM."},{"location":"design/autoware-interfaces/ad-api/list/api/interface/version/","title":"/api/interface/version","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/interface/version/#apiinterfaceversion","title":"/api/interface/version","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/interface/version/#status","title":"Status","text":"
    • Latest Version: v1.0.0
    • Method: function call
    • Data Type: autoware_adapi_version_msgs/srv/InterfaceVersion
    "},{"location":"design/autoware-interfaces/ad-api/list/api/interface/version/#description","title":"Description","text":"

    Get the interface version. The version follows Semantic Versioning.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/interface/version/#request","title":"Request","text":"

    None

    "},{"location":"design/autoware-interfaces/ad-api/list/api/interface/version/#response","title":"Response","text":"Name Type Description major uint16 major version minor uint16 minor version patch uint16 patch version"},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/acceleration/","title":"/api/local/command/acceleration","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/acceleration/#apilocalcommandacceleration","title":"/api/local/command/acceleration","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/acceleration/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: realtime stream
    • Data Type: autoware_adapi_v1_msgs/msg/AccelerationCommand
    "},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/acceleration/#description","title":"Description","text":"

    Sends acceleration command used in local operation mode. To use this API, select the corresponding mode as described in manual control.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/acceleration/#message","title":"Message","text":"Name Type Description stamp builtin_interfaces/msg/Time Timestamp when this message was sent. acceleration float32 Target acceleration [m/s^2]."},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/gear/","title":"/api/local/command/gear","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/gear/#apilocalcommandgear","title":"/api/local/command/gear","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/gear/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: notification
    • Data Type: autoware_adapi_v1_msgs/msg/GearCommand
    "},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/gear/#description","title":"Description","text":"

    Sends gear command used in local operation mode. To use this API, select the corresponding mode as described in manual control.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/gear/#message","title":"Message","text":"Name Type Description stamp builtin_interfaces/msg/Time Timestamp when this message was sent. command autoware_adapi_v1_msgs/msg/Gear Target gear status."},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/hazard_lights/","title":"/api/local/command/hazard_lights","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/hazard_lights/#apilocalcommandhazard_lights","title":"/api/local/command/hazard_lights","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/hazard_lights/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: notification
    • Data Type: autoware_adapi_v1_msgs/msg/HazardLightsCommand
    "},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/hazard_lights/#description","title":"Description","text":"

    Sends hazard lights command used in local operation mode. To use this API, select the corresponding mode as described in manual control.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/hazard_lights/#message","title":"Message","text":"Name Type Description stamp builtin_interfaces/msg/Time Timestamp when this message was sent. command autoware_adapi_v1_msgs/msg/HazardLights Target hazard lights status."},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/pedal/","title":"/api/local/command/pedal","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/pedal/#apilocalcommandpedal","title":"/api/local/command/pedal","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/pedal/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: realtime stream
    • Data Type: autoware_adapi_v1_msgs/msg/PedalCommand
    "},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/pedal/#description","title":"Description","text":"

    Sends pedal command used in local operation mode. The pedal value is the ratio with the maximum pedal depression being 1.0. To use this API, select the corresponding mode as described in manual control.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/pedal/#message","title":"Message","text":"Name Type Description stamp builtin_interfaces/msg/Time Timestamp when this message was sent. accelerator float32 Target accelerator pedal ratio. brake float32 Target brake pedal ratio."},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/steering/","title":"/api/local/command/steering","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/steering/#apilocalcommandsteering","title":"/api/local/command/steering","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/steering/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: realtime stream
    • Data Type: autoware_adapi_v1_msgs/msg/SteeringCommand
    "},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/steering/#description","title":"Description","text":"

    Send steering command to this API. To use this API, select the corresponding mode as described in manual control.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/steering/#message","title":"Message","text":"Name Type Description stamp builtin_interfaces/msg/Time Timestamp when this message was sent. steering_tire_angle float32 Target steering tire angle [rad]."},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/turn_indicators/","title":"/api/local/command/turn_indicators","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/turn_indicators/#apilocalcommandturn_indicators","title":"/api/local/command/turn_indicators","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/turn_indicators/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: notification
    • Data Type: autoware_adapi_v1_msgs/msg/TurnIndicatorsCommand
    "},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/turn_indicators/#description","title":"Description","text":"

    Sends turn indicators command used in local operation mode. To use this API, select the corresponding mode as described in manual control.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/local/command/turn_indicators/#message","title":"Message","text":"Name Type Description stamp builtin_interfaces/msg/Time Timestamp when this message was sent. command autoware_adapi_v1_msgs/msg/TurnIndicators Target turn indicators status."},{"location":"design/autoware-interfaces/ad-api/list/api/local/control_mode/list/","title":"/api/local/control_mode/list","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/local/control_mode/list/#apilocalcontrol_modelist","title":"/api/local/control_mode/list","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/local/control_mode/list/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/ListManualControlMode
    "},{"location":"design/autoware-interfaces/ad-api/list/api/local/control_mode/list/#description","title":"Description","text":"

    List the available manual control modes as described in manual control. The disabled mode is not included in the available modes.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/local/control_mode/list/#request","title":"Request","text":"

    None

    "},{"location":"design/autoware-interfaces/ad-api/list/api/local/control_mode/list/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status modes autoware_adapi_v1_msgs/msg/ManualControlMode[] List of available modes."},{"location":"design/autoware-interfaces/ad-api/list/api/local/control_mode/select/","title":"/api/local/control_mode/select","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/local/control_mode/select/#apilocalcontrol_modeselect","title":"/api/local/control_mode/select","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/local/control_mode/select/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/SelectManualControlMode
    "},{"location":"design/autoware-interfaces/ad-api/list/api/local/control_mode/select/#description","title":"Description","text":"

    Selects the manual control mode as described in manual control. This API fails while operation mode is local.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/local/control_mode/select/#request","title":"Request","text":"Name Type Description mode autoware_adapi_v1_msgs/msg/ManualControlMode The manual control mode to be used."},{"location":"design/autoware-interfaces/ad-api/list/api/local/control_mode/select/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status"},{"location":"design/autoware-interfaces/ad-api/list/api/local/control_mode/status/","title":"/api/local/control_mode/status","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/local/control_mode/status/#apilocalcontrol_modestatus","title":"/api/local/control_mode/status","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/local/control_mode/status/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: notification
    • Data Type: autoware_adapi_v1_msgs/msg/ManualControlModeStatus
    "},{"location":"design/autoware-interfaces/ad-api/list/api/local/control_mode/status/#description","title":"Description","text":"

    Get the current manual operation mode.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/local/control_mode/status/#message","title":"Message","text":"Name Type Description stamp builtin_interfaces/msg/Time Timestamp when this message was sent."},{"location":"design/autoware-interfaces/ad-api/list/api/local/operator/status/","title":"/api/local/operator/status","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/local/operator/status/#apilocaloperatorstatus","title":"/api/local/operator/status","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/local/operator/status/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: realtime stream
    • Data Type: autoware_adapi_v1_msgs/msg/ManualOperatorStatus
    "},{"location":"design/autoware-interfaces/ad-api/list/api/local/operator/status/#description","title":"Description","text":"

    The application needs to determine whether the operator is able to drive and send that information via this API. For details, see the manual control.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/local/operator/status/#message","title":"Message","text":"Name Type Description stamp builtin_interfaces/msg/Time Timestamp when this message was sent. ready bool Whether the operator is able to continue driving."},{"location":"design/autoware-interfaces/ad-api/list/api/localization/initialization_state/","title":"/api/localization/initialization_state","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/localization/initialization_state/#apilocalizationinitialization_state","title":"/api/localization/initialization_state","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/localization/initialization_state/#status","title":"Status","text":"
    • Latest Version: v1.0.0
    • Method: notification
    • Data Type: autoware_adapi_v1_msgs/msg/LocalizationInitializationState
    "},{"location":"design/autoware-interfaces/ad-api/list/api/localization/initialization_state/#description","title":"Description","text":"

    Get the initialization state of localization. For details, see the localization.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/localization/initialization_state/#message","title":"Message","text":"Name Type Description state uint16 A value of the localization initialization state."},{"location":"design/autoware-interfaces/ad-api/list/api/localization/initialize/","title":"/api/localization/initialize","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/localization/initialize/#apilocalizationinitialize","title":"/api/localization/initialize","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/localization/initialize/#status","title":"Status","text":"
    • Latest Version: v1.0.0
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/InitializeLocalization
    "},{"location":"design/autoware-interfaces/ad-api/list/api/localization/initialize/#description","title":"Description","text":"

    Request to initialize localization. For details, see the localization.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/localization/initialize/#request","title":"Request","text":"Name Type Description pose geometry_msgs/msg/PoseWithCovarianceStamped[<=1] A global pose as the initial guess. If omitted, the GNSS pose will be used."},{"location":"design/autoware-interfaces/ad-api/list/api/localization/initialize/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status"},{"location":"design/autoware-interfaces/ad-api/list/api/motion/accept_start/","title":"/api/motion/accept_start","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/motion/accept_start/#apimotionaccept_start","title":"/api/motion/accept_start","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/motion/accept_start/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/AcceptStart
    "},{"location":"design/autoware-interfaces/ad-api/list/api/motion/accept_start/#description","title":"Description","text":"

    Accept the vehicle to start. This API can be used when the motion state is STARTING.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/motion/accept_start/#request","title":"Request","text":"

    None

    "},{"location":"design/autoware-interfaces/ad-api/list/api/motion/accept_start/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status"},{"location":"design/autoware-interfaces/ad-api/list/api/motion/state/","title":"/api/motion/state","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/motion/state/#apimotionstate","title":"/api/motion/state","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/motion/state/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: notification
    • Data Type: autoware_adapi_v1_msgs/msg/MotionState
    "},{"location":"design/autoware-interfaces/ad-api/list/api/motion/state/#description","title":"Description","text":"

    Get the motion state. For details, see the motion state.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/motion/state/#message","title":"Message","text":"Name Type Description state uint16 A value of the motion state."},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_autonomous/","title":"/api/operation_mode/change_to_autonomous","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_autonomous/#apioperation_modechange_to_autonomous","title":"/api/operation_mode/change_to_autonomous","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_autonomous/#status","title":"Status","text":"
    • Latest Version: v1.0.0
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/ChangeOperationMode
    "},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_autonomous/#description","title":"Description","text":"

    Change the operation mode to autonomous. For details, see the operation mode.
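
    A minimal rclpy sketch of requesting this mode change, assuming the autoware_adapi_v1_msgs package is installed; the same pattern applies to the other change_to_* services:

    import rclpy
    from autoware_adapi_v1_msgs.srv import ChangeOperationMode

    def main():
        rclpy.init()
        node = rclpy.create_node('mode_changer')
        client = node.create_client(ChangeOperationMode, '/api/operation_mode/change_to_autonomous')
        client.wait_for_service()
        future = client.call_async(ChangeOperationMode.Request())  # the request has no fields
        rclpy.spin_until_future_complete(node, future)
        node.get_logger().info(str(future.result().status))
        rclpy.shutdown()

    if __name__ == '__main__':
        main()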

    "},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_autonomous/#request","title":"Request","text":"

    None

    "},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_autonomous/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status"},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_local/","title":"/api/operation_mode/change_to_local","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_local/#apioperation_modechange_to_local","title":"/api/operation_mode/change_to_local","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_local/#status","title":"Status","text":"
    • Latest Version: v1.0.0
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/ChangeOperationMode
    "},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_local/#description","title":"Description","text":"

    Change the operation mode to local. For details, see the operation mode.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_local/#request","title":"Request","text":"

    None

    "},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_local/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status"},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_remote/","title":"/api/operation_mode/change_to_remote","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_remote/#apioperation_modechange_to_remote","title":"/api/operation_mode/change_to_remote","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_remote/#status","title":"Status","text":"
    • Latest Version: v1.0.0
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/ChangeOperationMode
    "},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_remote/#description","title":"Description","text":"

    Change the operation mode to remote. For details, see the operation mode.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_remote/#request","title":"Request","text":"

    None

    "},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_remote/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status"},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_stop/","title":"/api/operation_mode/change_to_stop","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_stop/#apioperation_modechange_to_stop","title":"/api/operation_mode/change_to_stop","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_stop/#status","title":"Status","text":"
    • Latest Version: v1.0.0
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/ChangeOperationMode
    "},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_stop/#description","title":"Description","text":"

    Change the operation mode to stop. For details, see the operation mode.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_stop/#request","title":"Request","text":"

    None

    "},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_stop/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status"},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/disable_autoware_control/","title":"/api/operation_mode/disable_autoware_control","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/disable_autoware_control/#apioperation_modedisable_autoware_control","title":"/api/operation_mode/disable_autoware_control","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/disable_autoware_control/#status","title":"Status","text":"
    • Latest Version: v1.0.0
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/ChangeOperationMode
    "},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/disable_autoware_control/#description","title":"Description","text":"

    Disable vehicle control by Autoware. For details, see the operation mode. This API fails if the vehicle does not support mode change by software.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/disable_autoware_control/#request","title":"Request","text":"

    None

    "},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/disable_autoware_control/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status"},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/enable_autoware_control/","title":"/api/operation_mode/enable_autoware_control","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/enable_autoware_control/#apioperation_modeenable_autoware_control","title":"/api/operation_mode/enable_autoware_control","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/enable_autoware_control/#status","title":"Status","text":"
    • Latest Version: v1.0.0
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/ChangeOperationMode
    "},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/enable_autoware_control/#description","title":"Description","text":"

    Enable vehicle control by Autoware. For details, see the operation mode. This API fails if the vehicle does not support mode change by software.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/enable_autoware_control/#request","title":"Request","text":"

    None

    "},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/enable_autoware_control/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status"},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/state/","title":"/api/operation_mode/state","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/state/#apioperation_modestate","title":"/api/operation_mode/state","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/state/#status","title":"Status","text":"
    • Latest Version: v1.0.0
    • Method: notification
    • Data Type: autoware_adapi_v1_msgs/msg/OperationModeState
    "},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/state/#description","title":"Description","text":"

    Get the operation mode state. For details, see the operation mode.
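
    A subscriber sketch, assuming the autoware_adapi_v1_msgs package is installed; the transient-local QoS is an assumption based on this being a notification topic:

    import rclpy
    from rclpy.qos import QoSProfile, DurabilityPolicy
    from autoware_adapi_v1_msgs.msg import OperationModeState

    def main():
        rclpy.init()
        node = rclpy.create_node('operation_mode_monitor')
        qos = QoSProfile(depth=1, durability=DurabilityPolicy.TRANSIENT_LOCAL)

        def on_state(msg):
            node.get_logger().info(
                f'mode={msg.mode} in_transition={msg.is_in_transition} '
                f'autonomous_available={msg.is_autonomous_mode_available}')

        node.create_subscription(OperationModeState, '/api/operation_mode/state', on_state, qos)
        rclpy.spin(node)

    if __name__ == '__main__':
        main()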

    "},{"location":"design/autoware-interfaces/ad-api/list/api/operation_mode/state/#message","title":"Message","text":"Name Type Description mode uint8 The selected command for Autoware control. is_autoware_control_enabled bool True if vehicle control by Autoware is enabled. is_in_transition bool True if the operation mode is in transition. is_stop_mode_available bool True if the operation mode can be changed to stop. is_autonomous_mode_available bool True if the operation mode can be changed to autonomous. is_local_mode_available bool True if the operation mode can be changed to local. is_remote_mode_available bool True if the operation mode can be changed to remote."},{"location":"design/autoware-interfaces/ad-api/list/api/perception/objects/","title":"/api/perception/objects","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/perception/objects/#apiperceptionobjects","title":"/api/perception/objects","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/perception/objects/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: realtime stream
    • Data Type: autoware_adapi_v1_msgs/msg/DynamicObjectArray
    "},{"location":"design/autoware-interfaces/ad-api/list/api/perception/objects/#description","title":"Description","text":"

    Get the recognized objects array with label, shape, current position and predicted path. For details, see the perception.
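
    A minimal subscriber sketch, assuming the autoware_adapi_v1_msgs package is installed:

    import rclpy
    from autoware_adapi_v1_msgs.msg import DynamicObjectArray

    def main():
        rclpy.init()
        node = rclpy.create_node('object_monitor')

        def on_objects(msg):
            for obj in msg.objects:
                node.get_logger().info(
                    f'object prob={obj.existence_probability:.2f} classes={len(obj.classification)}')

        node.create_subscription(DynamicObjectArray, '/api/perception/objects', on_objects, 1)
        rclpy.spin(node)

    if __name__ == '__main__':
        main()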

    "},{"location":"design/autoware-interfaces/ad-api/list/api/perception/objects/#message","title":"Message","text":"Name Type Description objects.id unique_identifier_msgs/msg/UUID The UUID of each object objects.existence_probability float64 The probability of the object exits objects.classification autoware_adapi_v1_msgs/msg/ObjectClassification[] The type of the object recognized and the confidence level objects.kinematics autoware_adapi_v1_msgs/msg/DynamicObjectKinematics Consist of the object pose, twist, acceleration and the predicted_paths objects.shape shape_msgs/msg/SolidPrimitive escribe the shape of the object with dimension, and polygon"},{"location":"design/autoware-interfaces/ad-api/list/api/planning/steering_factors/","title":"/api/planning/steering_factors","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/planning/steering_factors/#apiplanningsteering_factors","title":"/api/planning/steering_factors","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/planning/steering_factors/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: realtime stream
    • Data Type: autoware_adapi_v1_msgs/msg/SteeringFactorArray
    "},{"location":"design/autoware-interfaces/ad-api/list/api/planning/steering_factors/#description","title":"Description","text":"

    Get the steering factors, sorted in ascending order of distance. For details, see the planning factors.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/planning/steering_factors/#message","title":"Message","text":"Name Type Description factors.pose geometry_msgs/msg/Pose[2] The base link pose related to the steering factor. factors.distance float32[2] The distance from the base link to the above pose. factors.direction uint16 The direction of the steering factor. factors.status uint16 The status of the steering factor. factors.behavior string The behavior type of the steering factor. factors.sequence string The sequence type of the steering factor. factors.detail string The additional information of the steering factor. factors.cooperation autoware_adapi_v1_msgs/msg/CooperationStatus[<=1] The cooperation status if the module supports."},{"location":"design/autoware-interfaces/ad-api/list/api/planning/velocity_factors/","title":"/api/planning/velocity_factors","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/planning/velocity_factors/#apiplanningvelocity_factors","title":"/api/planning/velocity_factors","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/planning/velocity_factors/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: realtime stream
    • Data Type: autoware_adapi_v1_msgs/msg/VelocityFactorArray
    "},{"location":"design/autoware-interfaces/ad-api/list/api/planning/velocity_factors/#description","title":"Description","text":"

    Get the velocity factors, sorted in ascending order of distance. For details, see the planning factors.
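
    A subscriber sketch that reports the nearest factor, assuming the autoware_adapi_v1_msgs package is installed; because the array is sorted by distance, the first element is the closest one:

    import rclpy
    from autoware_adapi_v1_msgs.msg import VelocityFactorArray

    def main():
        rclpy.init()
        node = rclpy.create_node('velocity_factor_monitor')

        def on_factors(msg):
            if msg.factors:
                nearest = msg.factors[0]  # factors are sorted in ascending order of distance
                node.get_logger().info(
                    f'{nearest.behavior} in {nearest.distance:.1f} m ({nearest.detail})')

        node.create_subscription(VelocityFactorArray, '/api/planning/velocity_factors', on_factors, 1)
        rclpy.spin(node)

    if __name__ == '__main__':
        main()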

    "},{"location":"design/autoware-interfaces/ad-api/list/api/planning/velocity_factors/#message","title":"Message","text":"Name Type Description factors.pose geometry_msgs/msg/Pose The base link pose related to the velocity factor. factors.distance float32 The distance from the base link to the above pose. factors.status uint16 The status of the velocity factor. factors.behavior string The behavior type of the velocity factor. factors.sequence string The sequence type of the velocity factor. factors.detail string The additional information of the velocity factor. factors.cooperation autoware_adapi_v1_msgs/msg/CooperationStatus[<=1] The cooperation status if the module supports."},{"location":"design/autoware-interfaces/ad-api/list/api/planning/cooperation/get_policies/","title":"/api/planning/cooperation/get_policies","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/planning/cooperation/get_policies/#apiplanningcooperationget_policies","title":"/api/planning/cooperation/get_policies","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/planning/cooperation/get_policies/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/GetCooperationPolicies
    "},{"location":"design/autoware-interfaces/ad-api/list/api/planning/cooperation/get_policies/#description","title":"Description","text":"

    Get the default decision that is used instead when the operator's decision is undecided. For details, see the cooperation.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/planning/cooperation/get_policies/#request","title":"Request","text":"

    None

    "},{"location":"design/autoware-interfaces/ad-api/list/api/planning/cooperation/get_policies/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status policies.behavior string The type of the target behavior. policies.sequence string The type of the target sequence. policies.policy uint8 The type of the cooperation policy."},{"location":"design/autoware-interfaces/ad-api/list/api/planning/cooperation/set_commands/","title":"/api/planning/cooperation/set_commands","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/planning/cooperation/set_commands/#apiplanningcooperationset_commands","title":"/api/planning/cooperation/set_commands","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/planning/cooperation/set_commands/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/SetCooperationCommands
    "},{"location":"design/autoware-interfaces/ad-api/list/api/planning/cooperation/set_commands/#description","title":"Description","text":"

    Set the operator's decision for cooperation. For details, see the cooperation.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/planning/cooperation/set_commands/#request","title":"Request","text":"Name Type Description commands.uuid unique_identifier_msgs/msg/UUID The ID in the cooperation status. commands.cooperator autoware_adapi_v1_msgs/msg/CooperationDecision The operator's decision."},{"location":"design/autoware-interfaces/ad-api/list/api/planning/cooperation/set_commands/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status"},{"location":"design/autoware-interfaces/ad-api/list/api/planning/cooperation/set_policies/","title":"/api/planning/cooperation/set_policies","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/planning/cooperation/set_policies/#apiplanningcooperationset_policies","title":"/api/planning/cooperation/set_policies","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/planning/cooperation/set_policies/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/SetCooperationPolicies
    "},{"location":"design/autoware-interfaces/ad-api/list/api/planning/cooperation/set_policies/#description","title":"Description","text":"

    Set the default decision that is used instead when the operator's decision is undecided. For details, see the cooperation.
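
    A sketch of setting one policy with rclpy, assuming the autoware_adapi_v1_msgs package is installed; the behavior and sequence strings below are hypothetical placeholders, and the policy value is one of the constants listed in the CooperationPolicy definition:

    import rclpy
    from autoware_adapi_v1_msgs.msg import CooperationPolicy
    from autoware_adapi_v1_msgs.srv import SetCooperationPolicies

    def main():
        rclpy.init()
        node = rclpy.create_node('policy_setter')
        client = node.create_client(SetCooperationPolicies, '/api/planning/cooperation/set_policies')
        client.wait_for_service()

        policy = CooperationPolicy()
        policy.behavior = 'lane_change'  # hypothetical behavior name
        policy.sequence = 'left'         # hypothetical sequence name
        policy.policy = CooperationPolicy.REQUIRED

        request = SetCooperationPolicies.Request()
        request.policies = [policy]
        future = client.call_async(request)
        rclpy.spin_until_future_complete(node, future)
        node.get_logger().info(str(future.result().status))
        rclpy.shutdown()

    if __name__ == '__main__':
        main()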

    "},{"location":"design/autoware-interfaces/ad-api/list/api/planning/cooperation/set_policies/#request","title":"Request","text":"Name Type Description policies.behavior string The type of the target behavior. policies.sequence string The type of the target sequence. policies.policy uint8 The type of the cooperation policy."},{"location":"design/autoware-interfaces/ad-api/list/api/planning/cooperation/set_policies/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status"},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/acceleration/","title":"/api/remote/command/acceleration","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/acceleration/#apiremotecommandacceleration","title":"/api/remote/command/acceleration","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/acceleration/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: realtime stream
    • Data Type: autoware_adapi_v1_msgs/msg/AccelerationCommand
    "},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/acceleration/#description","title":"Description","text":"

    Sends acceleration command used in remote operation mode. To use this API, select the corresponding mode as described in manual control.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/acceleration/#message","title":"Message","text":"Name Type Description stamp builtin_interfaces/msg/Time Timestamp when this message was sent. acceleration float32 Target acceleration [m/s^2]."},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/gear/","title":"/api/remote/command/gear","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/gear/#apiremotecommandgear","title":"/api/remote/command/gear","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/gear/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: notification
    • Data Type: autoware_adapi_v1_msgs/msg/GearCommand
    "},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/gear/#description","title":"Description","text":"

    Sends gear command used in remote operation mode. To use this API, select the corresponding mode as described in manual control.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/gear/#message","title":"Message","text":"Name Type Description stamp builtin_interfaces/msg/Time Timestamp when this message was sent. command autoware_adapi_v1_msgs/msg/Gear Target gear status."},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/hazard_lights/","title":"/api/remote/command/hazard_lights","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/hazard_lights/#apiremotecommandhazard_lights","title":"/api/remote/command/hazard_lights","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/hazard_lights/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: notification
    • Data Type: autoware_adapi_v1_msgs/msg/HazardLightsCommand
    "},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/hazard_lights/#description","title":"Description","text":"

    Sends hazard lights command used in remote operation mode. To use this API, select the corresponding mode as described in manual control.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/hazard_lights/#message","title":"Message","text":"Name Type Description stamp builtin_interfaces/msg/Time Timestamp when this message was sent. command autoware_adapi_v1_msgs/msg/HazardLights Target hazard lights status."},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/pedal/","title":"/api/remote/command/pedal","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/pedal/#apiremotecommandpedal","title":"/api/remote/command/pedal","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/pedal/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: realtime stream
    • Data Type: autoware_adapi_v1_msgs/msg/PedalCommand
    "},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/pedal/#description","title":"Description","text":"

    Sends pedal command used in remote operation mode. The pedal value is the ratio with the maximum pedal depression being 1.0. To use this API, select the corresponding mode as described in manual control.
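
    A publisher sketch, assuming the autoware_adapi_v1_msgs package is installed; the 30 Hz rate and the pedal ratios are placeholder values:

    import rclpy
    from autoware_adapi_v1_msgs.msg import PedalCommand

    def main():
        rclpy.init()
        node = rclpy.create_node('remote_pedal_sender')
        pub = node.create_publisher(PedalCommand, '/api/remote/command/pedal', 1)

        def send():
            msg = PedalCommand()
            msg.stamp = node.get_clock().now().to_msg()
            msg.accelerator = 0.2  # ratio of maximum pedal depression (1.0 = fully pressed)
            msg.brake = 0.0
            pub.publish(msg)

        node.create_timer(1.0 / 30.0, send)  # stream continuously while in remote operation mode
        rclpy.spin(node)

    if __name__ == '__main__':
        main()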

    "},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/pedal/#message","title":"Message","text":"Name Type Description stamp builtin_interfaces/msg/Time Timestamp when this message was sent. accelerator float32 Target accelerator pedal ratio. brake float32 Target brake pedal ratio."},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/steering/","title":"/api/remote/command/steering","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/steering/#apiremotecommandsteering","title":"/api/remote/command/steering","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/steering/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: realtime stream
    • Data Type: autoware_adapi_v1_msgs/msg/SteeringCommand
    "},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/steering/#description","title":"Description","text":"

    Sends steering command used in remote operation mode. To use this API, select the corresponding mode as described in manual control.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/steering/#message","title":"Message","text":"Name Type Description stamp builtin_interfaces/msg/Time Timestamp when this message was sent. steering_tire_angle float32 Target steering tire angle [rad]."},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/turn_indicators/","title":"/api/remote/command/turn_indicators","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/turn_indicators/#apiremotecommandturn_indicators","title":"/api/remote/command/turn_indicators","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/turn_indicators/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: notification
    • Data Type: autoware_adapi_v1_msgs/msg/TurnIndicatorsCommand
    "},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/turn_indicators/#description","title":"Description","text":"

    Sends turn indicators command used in remote operation mode. To use this API, select the corresponding mode as described in manual control.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/remote/command/turn_indicators/#message","title":"Message","text":"Name Type Description stamp builtin_interfaces/msg/Time Timestamp when this message was sent. command autoware_adapi_v1_msgs/msg/TurnIndicators Target turn indicators status."},{"location":"design/autoware-interfaces/ad-api/list/api/remote/control_mode/list/","title":"/api/remote/control_mode/list","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/remote/control_mode/list/#apiremotecontrol_modelist","title":"/api/remote/control_mode/list","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/remote/control_mode/list/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/ListManualControlMode
    "},{"location":"design/autoware-interfaces/ad-api/list/api/remote/control_mode/list/#description","title":"Description","text":"

    List the available manual control modes as described in manual control. The disabled mode is not included in the available modes.
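
    A minimal rclpy sketch of listing the modes, assuming the autoware_adapi_v1_msgs package is installed; each entry is printed as a whole message since only the mode list itself is documented here:

    import rclpy
    from autoware_adapi_v1_msgs.srv import ListManualControlMode

    def main():
        rclpy.init()
        node = rclpy.create_node('manual_mode_lister')
        client = node.create_client(ListManualControlMode, '/api/remote/control_mode/list')
        client.wait_for_service()
        future = client.call_async(ListManualControlMode.Request())  # the request has no fields
        rclpy.spin_until_future_complete(node, future)
        for mode in future.result().modes:
            node.get_logger().info(f'available mode: {mode}')
        rclpy.shutdown()

    if __name__ == '__main__':
        main()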

    "},{"location":"design/autoware-interfaces/ad-api/list/api/remote/control_mode/list/#request","title":"Request","text":"

    None

    "},{"location":"design/autoware-interfaces/ad-api/list/api/remote/control_mode/list/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status modes autoware_adapi_v1_msgs/msg/ManualControlMode[] List of available modes."},{"location":"design/autoware-interfaces/ad-api/list/api/remote/control_mode/select/","title":"/api/remote/control_mode/select","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/remote/control_mode/select/#apiremotecontrol_modeselect","title":"/api/remote/control_mode/select","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/remote/control_mode/select/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/SelectManualControlMode
    "},{"location":"design/autoware-interfaces/ad-api/list/api/remote/control_mode/select/#description","title":"Description","text":"

    Selects the manual control mode as described in manual control. This API fails while the operation mode is remote.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/remote/control_mode/select/#request","title":"Request","text":"Name Type Description mode autoware_adapi_v1_msgs/msg/ManualControlMode The manual control mode to be used."},{"location":"design/autoware-interfaces/ad-api/list/api/remote/control_mode/select/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status"},{"location":"design/autoware-interfaces/ad-api/list/api/remote/control_mode/status/","title":"/api/remote/control_mode/status","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/remote/control_mode/status/#apiremotecontrol_modestatus","title":"/api/remote/control_mode/status","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/remote/control_mode/status/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: notification
    • Data Type: autoware_adapi_v1_msgs/msg/ManualControlModeStatus
    "},{"location":"design/autoware-interfaces/ad-api/list/api/remote/control_mode/status/#description","title":"Description","text":"

    Get the currently selected manual control mode.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/remote/control_mode/status/#message","title":"Message","text":"Name Type Description stamp builtin_interfaces/msg/Time Timestamp when this message was sent."},{"location":"design/autoware-interfaces/ad-api/list/api/remote/operator/status/","title":"/api/remote/operator/status","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/remote/operator/status/#apiremoteoperatorstatus","title":"/api/remote/operator/status","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/remote/operator/status/#status","title":"Status","text":"
    • Latest Version: not released
    • Method: realtime stream
    • Data Type: autoware_adapi_v1_msgs/msg/ManualOperatorStatus
    "},{"location":"design/autoware-interfaces/ad-api/list/api/remote/operator/status/#description","title":"Description","text":"

    The application needs to determine whether the operator is able to drive and send that information via this API. For details, see the manual control.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/remote/operator/status/#message","title":"Message","text":"Name Type Description stamp builtin_interfaces/msg/Time Timestamp when this message was sent. ready bool Whether the operator is able to continue driving."},{"location":"design/autoware-interfaces/ad-api/list/api/routing/change_route/","title":"/api/routing/change_route","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/routing/change_route/#apiroutingchange_route","title":"/api/routing/change_route","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/routing/change_route/#status","title":"Status","text":"
    • Latest Version: v1.5.0
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/SetRoute
    "},{"location":"design/autoware-interfaces/ad-api/list/api/routing/change_route/#description","title":"Description","text":"

    Same as /api/routing/set_route, but changes the route while driving. This API only accepts the route when the route state is SET. In any other state, set the route first or wait for the route change to complete.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/routing/change_route/#request","title":"Request","text":"Name Type Description header std_msgs/msg/Header header for pose transformation goal geometry_msgs/msg/Pose goal pose segments autoware_adapi_v1_msgs/msg/RouteSegment[] waypoint segments in lanelet format"},{"location":"design/autoware-interfaces/ad-api/list/api/routing/change_route/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status"},{"location":"design/autoware-interfaces/ad-api/list/api/routing/change_route_points/","title":"/api/routing/change_route_points","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/routing/change_route_points/#apiroutingchange_route_points","title":"/api/routing/change_route_points","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/routing/change_route_points/#status","title":"Status","text":"
    • Latest Version: v1.5.0
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/SetRoutePoints
    "},{"location":"design/autoware-interfaces/ad-api/list/api/routing/change_route_points/#description","title":"Description","text":"

    Same as /api/routing/set_route_points, but changes the route while driving. This API only accepts the route when the route state is SET. In any other state, set the route first or wait for the route change to complete.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/routing/change_route_points/#request","title":"Request","text":"Name Type Description header std_msgs/msg/Header header for pose transformation goal geometry_msgs/msg/Pose goal pose waypoints geometry_msgs/msg/Pose[] waypoint poses"},{"location":"design/autoware-interfaces/ad-api/list/api/routing/change_route_points/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status"},{"location":"design/autoware-interfaces/ad-api/list/api/routing/clear_route/","title":"/api/routing/clear_route","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/routing/clear_route/#apiroutingclear_route","title":"/api/routing/clear_route","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/routing/clear_route/#status","title":"Status","text":"
    • Latest Version: v1.0.0
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/ClearRoute
    "},{"location":"design/autoware-interfaces/ad-api/list/api/routing/clear_route/#description","title":"Description","text":"

    Clear the route. This API fails when the vehicle is using the route.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/routing/clear_route/#request","title":"Request","text":"

    None

    "},{"location":"design/autoware-interfaces/ad-api/list/api/routing/clear_route/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status"},{"location":"design/autoware-interfaces/ad-api/list/api/routing/route/","title":"/api/routing/route","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/routing/route/#apiroutingroute","title":"/api/routing/route","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/routing/route/#status","title":"Status","text":"
    • Latest Version: v1.0.0
    • Method: notification
    • Data Type: autoware_adapi_v1_msgs/msg/Route
    "},{"location":"design/autoware-interfaces/ad-api/list/api/routing/route/#description","title":"Description","text":"

    Get the route with the waypoint segments in lanelet format. It is empty if the route is not set.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/routing/route/#message","title":"Message","text":"Name Type Description header std_msgs/msg/Header header for pose transformation data autoware_adapi_v1_msgs/msg/RouteData[<=1] The route in lanelet format"},{"location":"design/autoware-interfaces/ad-api/list/api/routing/set_route/","title":"/api/routing/set_route","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/routing/set_route/#apiroutingset_route","title":"/api/routing/set_route","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/routing/set_route/#status","title":"Status","text":"
    • Latest Version: v1.0.0
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/SetRoute
    "},{"location":"design/autoware-interfaces/ad-api/list/api/routing/set_route/#description","title":"Description","text":"

    Set the route with the waypoint segments in lanelet format. If the start pose is not specified, the current pose will be used. This API only accepts the route when the route state is UNSET. In any other state, clear the route first.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/routing/set_route/#request","title":"Request","text":"Name Type Description header std_msgs/msg/Header header for pose transformation goal geometry_msgs/msg/Pose goal pose segments autoware_adapi_v1_msgs/msg/RouteSegment[] waypoint segments in lanelet format"},{"location":"design/autoware-interfaces/ad-api/list/api/routing/set_route/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status"},{"location":"design/autoware-interfaces/ad-api/list/api/routing/set_route_points/","title":"/api/routing/set_route_points","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/routing/set_route_points/#apiroutingset_route_points","title":"/api/routing/set_route_points","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/routing/set_route_points/#status","title":"Status","text":"
    • Latest Version: v1.0.0
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/SetRoutePoints
    "},{"location":"design/autoware-interfaces/ad-api/list/api/routing/set_route_points/#description","title":"Description","text":"

    Set the route with the waypoint poses. If the start pose is not specified, the current pose will be used. This API only accepts the route when the route state is UNSET. In any other state, clear the route first.
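
    A minimal sketch of setting a route by goal pose with rclpy, assuming the autoware_adapi_v1_msgs package is installed; the coordinates are placeholders and the call succeeds only while the route state is UNSET:

    import rclpy
    from autoware_adapi_v1_msgs.srv import SetRoutePoints
    from geometry_msgs.msg import Pose

    def main():
        rclpy.init()
        node = rclpy.create_node('route_setter')
        client = node.create_client(SetRoutePoints, '/api/routing/set_route_points')
        client.wait_for_service()

        request = SetRoutePoints.Request()
        request.header.frame_id = 'map'
        request.header.stamp = node.get_clock().now().to_msg()
        goal = Pose()
        goal.position.x = 300.0  # example goal in map coordinates
        goal.position.y = 150.0
        goal.orientation.w = 1.0
        request.goal = goal
        request.waypoints = []  # optional intermediate poses

        future = client.call_async(request)
        rclpy.spin_until_future_complete(node, future)
        node.get_logger().info(str(future.result().status))
        rclpy.shutdown()

    if __name__ == '__main__':
        main()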

    "},{"location":"design/autoware-interfaces/ad-api/list/api/routing/set_route_points/#request","title":"Request","text":"Name Type Description header std_msgs/msg/Header header for pose transformation goal geometry_msgs/msg/Pose goal pose waypoints geometry_msgs/msg/Pose[] waypoint poses"},{"location":"design/autoware-interfaces/ad-api/list/api/routing/set_route_points/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status"},{"location":"design/autoware-interfaces/ad-api/list/api/routing/state/","title":"/api/routing/state","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/routing/state/#apiroutingstate","title":"/api/routing/state","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/routing/state/#status","title":"Status","text":"
    • Latest Version: v1.0.0
    • Method: notification
    • Data Type: autoware_adapi_v1_msgs/msg/RouteState
    "},{"location":"design/autoware-interfaces/ad-api/list/api/routing/state/#description","title":"Description","text":"

    Get the route state. For details, see the routing.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/routing/state/#message","title":"Message","text":"Name Type Description state uint16 A value of the route state."},{"location":"design/autoware-interfaces/ad-api/list/api/system/heartbeat/","title":"/api/system/heartbeat","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/system/heartbeat/#apisystemheartbeat","title":"/api/system/heartbeat","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/system/heartbeat/#status","title":"Status","text":"
    • Latest Version: v1.3.0
    • Method: realtime stream
    • Data Type: autoware_adapi_v1_msgs/msg/Heartbeat
    "},{"location":"design/autoware-interfaces/ad-api/list/api/system/heartbeat/#description","title":"Description","text":"

    The heartbeat frequency is 10 Hz.
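
    A monitoring sketch with rclpy, assuming the autoware_adapi_v1_msgs package is installed; it checks the transport delay against the stamp and detects sequence gaps, taking the wrap at 65535 into account:

    import rclpy
    from rclpy.time import Time
    from autoware_adapi_v1_msgs.msg import Heartbeat

    def main():
        rclpy.init()
        node = rclpy.create_node('heartbeat_monitor')
        last_seq = None

        def on_heartbeat(msg):
            nonlocal last_seq
            delay = (node.get_clock().now() - Time.from_msg(msg.stamp)).nanoseconds * 1e-9
            if last_seq is not None and msg.seq != (last_seq + 1) % 65536:
                node.get_logger().warn(f'heartbeat sequence gap: {last_seq} -> {msg.seq}')
            last_seq = msg.seq
            node.get_logger().info(f'heartbeat delay: {delay * 1000.0:.1f} ms')

        node.create_subscription(Heartbeat, '/api/system/heartbeat', on_heartbeat, 10)
        rclpy.spin(node)

    if __name__ == '__main__':
        main()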

    "},{"location":"design/autoware-interfaces/ad-api/list/api/system/heartbeat/#message","title":"Message","text":"Name Type Description stamp builtin_interfaces/msg/Time Timestamp in Autoware for delay checking. seq uint16 Sequence number for order verification, wraps at 65535."},{"location":"design/autoware-interfaces/ad-api/list/api/system/diagnostics/status/","title":"/api/system/diagnostics/status","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/system/diagnostics/status/#apisystemdiagnosticsstatus","title":"/api/system/diagnostics/status","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/system/diagnostics/status/#status","title":"Status","text":"
    • Latest Version: v1.3.0
    • Method: realtime stream
    • Data Type: autoware_adapi_v1_msgs/msg/DiagGraphStatus
    "},{"location":"design/autoware-interfaces/ad-api/list/api/system/diagnostics/status/#description","title":"Description","text":"

    This is the dynamic part of the diagnostics. If static data is published with a new ID, ignore dynamic data with the old ID. See diagnostics for details.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/system/diagnostics/status/#message","title":"Message","text":"Name Type Description stamp builtin_interfaces/msg/Time Timestamp when this message was sent id string ID to check correspondence between struct and status. nodes autoware_adapi_v1_msgs/msg/DiagNodeStatus[] Dynamic data for nodes in diagnostic graph."},{"location":"design/autoware-interfaces/ad-api/list/api/system/diagnostics/struct/","title":"/api/system/diagnostics/struct","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/system/diagnostics/struct/#apisystemdiagnosticsstruct","title":"/api/system/diagnostics/struct","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/system/diagnostics/struct/#status","title":"Status","text":"
    • Latest Version: v1.3.0
    • Method: notification
    • Data Type: autoware_adapi_v1_msgs/msg/DiagGraphStruct
    "},{"location":"design/autoware-interfaces/ad-api/list/api/system/diagnostics/struct/#description","title":"Description","text":"

    This is the static part of the diagnostics. If static data is published with a new ID, ignore dynamic data with the old ID. See diagnostics for details.
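
    A subscriber sketch showing the intended pairing of the struct and status topics with rclpy, assuming the autoware_adapi_v1_msgs package is installed: cache the latest structure and process a status only when its id matches. The transient-local QoS on the struct topic is an assumption based on it being a notification topic:

    import rclpy
    from rclpy.qos import QoSProfile, DurabilityPolicy
    from autoware_adapi_v1_msgs.msg import DiagGraphStruct, DiagGraphStatus

    def main():
        rclpy.init()
        node = rclpy.create_node('diag_monitor')
        graph = {'id': None, 'nodes': []}

        def on_struct(msg):
            graph['id'] = msg.id
            graph['nodes'] = msg.nodes  # static part: node definitions of the diagnostic graph

        def on_status(msg):
            if msg.id != graph['id']:
                return  # stale status or structure not received yet, so ignore it
            node.get_logger().info(
                f'status for {len(msg.nodes)} diagnostic nodes (graph id {msg.id})')

        struct_qos = QoSProfile(depth=1, durability=DurabilityPolicy.TRANSIENT_LOCAL)
        node.create_subscription(DiagGraphStruct, '/api/system/diagnostics/struct', on_struct, struct_qos)
        node.create_subscription(DiagGraphStatus, '/api/system/diagnostics/status', on_status, 10)
        rclpy.spin(node)

    if __name__ == '__main__':
        main()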

    "},{"location":"design/autoware-interfaces/ad-api/list/api/system/diagnostics/struct/#message","title":"Message","text":"Name Type Description stamp builtin_interfaces/msg/Time Timestamp when this message was sent. id string ID to check correspondence between struct and status. nodes autoware_adapi_v1_msgs/msg/DiagNodeStruct[] Static data for nodes in diagnostic graph. links autoware_adapi_v1_msgs/msg/DiagLinkStruct[] Static data for links in diagnostic graph."},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/dimensions/","title":"/api/vehicle/dimensions","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/dimensions/#apivehicledimensions","title":"/api/vehicle/dimensions","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/dimensions/#status","title":"Status","text":"
    • Latest Version: v1.1.0
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/GetVehicleDimensions
    "},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/dimensions/#description","title":"Description","text":"

    Get the vehicle dimensions. See here for the definition of each value.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/dimensions/#request","title":"Request","text":"

    None

    "},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/dimensions/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status dimensions autoware_adapi_v1_msgs/msg/VehicleDimensions vehicle dimensions"},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/kinematics/","title":"/api/vehicle/kinematics","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/kinematics/#apivehiclekinematics","title":"/api/vehicle/kinematics","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/kinematics/#status","title":"Status","text":"
    • Latest Version: v1.1.0
    • Method: realtime stream
    • Data Type: autoware_adapi_v1_msgs/msg/VehicleKinematics
    "},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/kinematics/#description","title":"Description","text":"

    Publish vehicle kinematics.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/kinematics/#message","title":"Message","text":"Name Type Description geographic_pose geographic_msgs/msg/GeoPointStamped The longitude and latitude of the vehicle. If the map uses local coordinates, it will not be available. pose geometry_msgs/msg/PoseWithCovarianceStamped The pose with covariance from the base link. twist geometry_msgs/msg/TwistWithCovarianceStamped Vehicle current twist with covariance. accel geometry_msgs/msg/AccelWithCovarianceStamped Vehicle current acceleration with covariance."},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/status/","title":"/api/vehicle/status","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/status/#apivehiclestatus","title":"/api/vehicle/status","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/status/#status","title":"Status","text":"
    • Latest Version: v1.4.0
    • Method: realtime stream
    • Data Type: autoware_adapi_v1_msgs/msg/VehicleStatus
    "},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/status/#description","title":"Description","text":"

    Publish vehicle state information.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/status/#message","title":"Message","text":"Name Type Description gear autoware_adapi_v1_msgs/msg/Gear Gear status. turn_indicators autoware_adapi_v1_msgs/msg/TurnIndicators Turn indicators status, only either left or right will be enabled. hazard_lights autoware_adapi_v1_msgs/msg/HazardLights Hazard lights status. steering_tire_angle float64 Vehicle current tire angle in radian."},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/doors/command/","title":"/api/vehicle/doors/command","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/doors/command/#apivehicledoorscommand","title":"/api/vehicle/doors/command","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/doors/command/#status","title":"Status","text":"
    • Latest Version: v1.2.0
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/SetDoorCommand
    "},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/doors/command/#description","title":"Description","text":"

    Set the door command. This API is only available if the vehicle supports software door control. This API fails if the doors cannot be opened or closed safely.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/doors/command/#request","title":"Request","text":"Name Type Description doors.index uint32 The index of the target door. doors.command uint8 The command for the target door."},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/doors/command/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status"},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/doors/layout/","title":"/api/vehicle/doors/layout","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/doors/layout/#apivehicledoorslayout","title":"/api/vehicle/doors/layout","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/doors/layout/#status","title":"Status","text":"
    • Latest Version: v1.2.0
    • Method: function call
    • Data Type: autoware_adapi_v1_msgs/srv/GetDoorLayout
    "},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/doors/layout/#description","title":"Description","text":"

    Get the door layout. It is an array of roles and descriptions for each door.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/doors/layout/#request","title":"Request","text":"

    None

    "},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/doors/layout/#response","title":"Response","text":"Name Type Description status autoware_adapi_v1_msgs/msg/ResponseStatus response status doors.roles uint8[] The roles of the door in the service the vehicle provides. doors.description string The description of the door for display in the interface."},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/doors/status/","title":"/api/vehicle/doors/status","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/doors/status/#apivehicledoorsstatus","title":"/api/vehicle/doors/status","text":""},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/doors/status/#status","title":"Status","text":"
    • Latest Version: v1.2.0
    • Method: notification
    • Data Type: autoware_adapi_v1_msgs/msg/DoorStatusArray
    "},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/doors/status/#description","title":"Description","text":"

    The status of each door, such as opened or closed.

    "},{"location":"design/autoware-interfaces/ad-api/list/api/vehicle/doors/status/#message","title":"Message","text":"Name Type Description doors.status uint8 current door status"},{"location":"design/autoware-interfaces/ad-api/stories/bus-service/","title":"User story of bus service","text":""},{"location":"design/autoware-interfaces/ad-api/stories/bus-service/#user-story-of-bus-service","title":"User story of bus service","text":""},{"location":"design/autoware-interfaces/ad-api/stories/bus-service/#overview","title":"Overview","text":"

    This user story is a bus service that goes around the designated stops.

    "},{"location":"design/autoware-interfaces/ad-api/stories/bus-service/#scenario","title":"Scenario","text":"Step Operation Use Case 1 Startup the autonomous driving system. Launch and terminate 2 Drive the vehicle from the garage to the waiting position. Change the operation mode 3 Enable autonomous control. Change the operation mode 4 Drive the vehicle to the next bus stop. Drive to the designated position 5 Get on and off the vehicle. Get on and get off 6 Return to step 4 unless it's the last bus stop. 7 Drive the vehicle to the waiting position. Drive to the designated position 8 Drive the vehicle from the waiting position to the garage. Change the operation mode 9 Shutdown the autonomous driving system. Launch and terminate"},{"location":"design/autoware-interfaces/ad-api/stories/taxi-service/","title":"User story of bus service","text":""},{"location":"design/autoware-interfaces/ad-api/stories/taxi-service/#user-story-of-bus-service","title":"User story of bus service","text":""},{"location":"design/autoware-interfaces/ad-api/stories/taxi-service/#overview","title":"Overview","text":"

    This user story is a taxi service that picks up passengers and drives them to their destination.

    "},{"location":"design/autoware-interfaces/ad-api/stories/taxi-service/#scenario","title":"Scenario","text":"Step Operation Use Case 1 Startup the autonomous driving system. Launch and terminate 2 Drive the vehicle from the garage to the waiting position. Change the operation mode 3 Enable autonomous control. Change the operation mode 4 Drive the vehicle to the position to pick up. Drive to the designated position 5 Get on the vehicle. Get on and get off 6 Drive the vehicle to the destination. Drive to the designated position 7 Get off the vehicle. Get on and get off 8 Drive the vehicle to the waiting position. Drive to the designated position 9 Return to step 4 if there is another request. 10 Drive the vehicle from the waiting position to the garage. Change the operation mode 11 Shutdown the autonomous driving system. Launch and terminate"},{"location":"design/autoware-interfaces/ad-api/types/","title":"Types of Autoware AD API","text":""},{"location":"design/autoware-interfaces/ad-api/types/#types-of-autoware-ad-api","title":"Types of Autoware AD API","text":"
    • autoware_adapi_v1_msgs/msg/AccelerationCommand
    • autoware_adapi_v1_msgs/msg/CooperationCommand
    • autoware_adapi_v1_msgs/msg/CooperationDecision
    • autoware_adapi_v1_msgs/msg/CooperationPolicy
    • autoware_adapi_v1_msgs/msg/CooperationStatus
    • autoware_adapi_v1_msgs/msg/DiagGraphStatus
    • autoware_adapi_v1_msgs/msg/DiagGraphStruct
    • autoware_adapi_v1_msgs/msg/DiagLinkStruct
    • autoware_adapi_v1_msgs/msg/DiagNodeStatus
    • autoware_adapi_v1_msgs/msg/DiagNodeStruct
    • autoware_adapi_v1_msgs/msg/DoorCommand
    • autoware_adapi_v1_msgs/msg/DoorLayout
    • autoware_adapi_v1_msgs/msg/DoorStatus
    • autoware_adapi_v1_msgs/msg/DoorStatusArray
    • autoware_adapi_v1_msgs/msg/DynamicObject
    • autoware_adapi_v1_msgs/msg/DynamicObjectArray
    • autoware_adapi_v1_msgs/msg/DynamicObjectKinematics
    • autoware_adapi_v1_msgs/msg/DynamicObjectPath
    • autoware_adapi_v1_msgs/msg/Gear
    • autoware_adapi_v1_msgs/msg/GearCommand
    • autoware_adapi_v1_msgs/msg/HazardLights
    • autoware_adapi_v1_msgs/msg/HazardLightsCommand
    • autoware_adapi_v1_msgs/msg/Heartbeat
    • autoware_adapi_v1_msgs/msg/LocalizationInitializationState
    • autoware_adapi_v1_msgs/msg/ManualControlMode
    • autoware_adapi_v1_msgs/msg/ManualControlModeStatus
    • autoware_adapi_v1_msgs/msg/ManualOperatorStatus
    • autoware_adapi_v1_msgs/msg/MotionState
    • autoware_adapi_v1_msgs/msg/MrmState
    • autoware_adapi_v1_msgs/msg/ObjectClassification
    • autoware_adapi_v1_msgs/msg/OperationModeState
    • autoware_adapi_v1_msgs/msg/PedalCommand
    • autoware_adapi_v1_msgs/msg/ResponseStatus
    • autoware_adapi_v1_msgs/msg/Route
    • autoware_adapi_v1_msgs/msg/RouteData
    • autoware_adapi_v1_msgs/msg/RouteOption
    • autoware_adapi_v1_msgs/msg/RoutePrimitive
    • autoware_adapi_v1_msgs/msg/RouteSegment
    • autoware_adapi_v1_msgs/msg/RouteState
    • autoware_adapi_v1_msgs/msg/SteeringCommand
    • autoware_adapi_v1_msgs/msg/SteeringFactor
    • autoware_adapi_v1_msgs/msg/SteeringFactorArray
    • autoware_adapi_v1_msgs/msg/TurnIndicators
    • autoware_adapi_v1_msgs/msg/TurnIndicatorsCommand
    • autoware_adapi_v1_msgs/msg/VehicleDimensions
    • autoware_adapi_v1_msgs/msg/VehicleKinematics
    • autoware_adapi_v1_msgs/msg/VehicleStatus
    • autoware_adapi_v1_msgs/msg/VelocityFactor
    • autoware_adapi_v1_msgs/msg/VelocityFactorArray
    • autoware_adapi_v1_msgs/srv/AcceptStart
    • autoware_adapi_v1_msgs/srv/ChangeOperationMode
    • autoware_adapi_v1_msgs/srv/ClearRoute
    • autoware_adapi_v1_msgs/srv/GetCooperationPolicies
    • autoware_adapi_v1_msgs/srv/GetDoorLayout
    • autoware_adapi_v1_msgs/srv/GetVehicleDimensions
    • autoware_adapi_v1_msgs/srv/InitializeLocalization
    • autoware_adapi_v1_msgs/srv/ListManualControlMode
    • autoware_adapi_v1_msgs/srv/SelectManualControlMode
    • autoware_adapi_v1_msgs/srv/SetCooperationCommands
    • autoware_adapi_v1_msgs/srv/SetCooperationPolicies
    • autoware_adapi_v1_msgs/srv/SetDoorCommand
    • autoware_adapi_v1_msgs/srv/SetRoute
    • autoware_adapi_v1_msgs/srv/SetRoutePoints
    • autoware_adapi_version_msgs/srv/InterfaceVersion
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/AccelerationCommand/","title":"autoware_adapi_v1_msgs/msg/AccelerationCommand","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/AccelerationCommand/#autoware_adapi_v1_msgsmsgaccelerationcommand","title":"autoware_adapi_v1_msgs/msg/AccelerationCommand","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/AccelerationCommand/#definition","title":"Definition","text":"
    builtin_interfaces/Time stamp
    float32 acceleration
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/AccelerationCommand/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/AccelerationCommand/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/CooperationCommand/","title":"autoware_adapi_v1_msgs/msg/CooperationCommand","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/CooperationCommand/#autoware_adapi_v1_msgsmsgcooperationcommand","title":"autoware_adapi_v1_msgs/msg/CooperationCommand","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/CooperationCommand/#definition","title":"Definition","text":"
    unique_identifier_msgs/UUID uuid
    autoware_adapi_v1_msgs/CooperationDecision cooperator
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/CooperationCommand/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/CooperationDecision
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/CooperationCommand/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/srv/SetCooperationCommands
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/CooperationDecision/","title":"autoware_adapi_v1_msgs/msg/CooperationDecision","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/CooperationDecision/#autoware_adapi_v1_msgsmsgcooperationdecision","title":"autoware_adapi_v1_msgs/msg/CooperationDecision","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/CooperationDecision/#definition","title":"Definition","text":"
    uint8 UNKNOWN = 0
    uint8 DEACTIVATE = 1
    uint8 ACTIVATE = 2
    uint8 AUTONOMOUS = 3
    uint8 UNDECIDED = 4

    uint8 decision
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/CooperationDecision/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/CooperationDecision/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/msg/CooperationCommand
    • autoware_adapi_v1_msgs/msg/CooperationStatus
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/CooperationPolicy/","title":"autoware_adapi_v1_msgs/msg/CooperationPolicy","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/CooperationPolicy/#autoware_adapi_v1_msgsmsgcooperationpolicy","title":"autoware_adapi_v1_msgs/msg/CooperationPolicy","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/CooperationPolicy/#definition","title":"Definition","text":"
    uint8 OPTIONAL = 1\nuint8 REQUIRED = 2\n\nstring behavior\nstring sequence\nuint8 policy\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/CooperationPolicy/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/CooperationPolicy/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/srv/GetCooperationPolicies
    • autoware_adapi_v1_msgs/srv/SetCooperationPolicies
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/CooperationStatus/","title":"autoware_adapi_v1_msgs/msg/CooperationStatus","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/CooperationStatus/#autoware_adapi_v1_msgsmsgcooperationstatus","title":"autoware_adapi_v1_msgs/msg/CooperationStatus","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/CooperationStatus/#definition","title":"Definition","text":"
    unique_identifier_msgs/UUID uuid\nautoware_adapi_v1_msgs/CooperationDecision autonomous\nautoware_adapi_v1_msgs/CooperationDecision cooperator\nbool cancellable\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/CooperationStatus/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/CooperationDecision
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/CooperationStatus/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/msg/SteeringFactor
    • autoware_adapi_v1_msgs/msg/VelocityFactor
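    The cooperation messages above are exchanged through the SetCooperationCommands service listed later in this index. As a hedged illustration only (the helper name and the idea of echoing back a decision for a reported scene are assumptions, not taken from this page), a Python sketch might look like:
    from autoware_adapi_v1_msgs.msg import CooperationCommand, CooperationDecision
    from autoware_adapi_v1_msgs.srv import SetCooperationCommands

    def build_activate_request(status):
        # status: an autoware_adapi_v1_msgs/msg/CooperationStatus received from the API
        command = CooperationCommand()
        command.uuid = status.uuid  # target the same cooperation scene
        command.cooperator.decision = CooperationDecision.ACTIVATE
        request = SetCooperationCommands.Request()
        request.commands = [command]
        return request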
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagGraphStatus/","title":"autoware_adapi_v1_msgs/msg/DiagGraphStatus","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagGraphStatus/#autoware_adapi_v1_msgsmsgdiaggraphstatus","title":"autoware_adapi_v1_msgs/msg/DiagGraphStatus","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagGraphStatus/#definition","title":"Definition","text":"
    builtin_interfaces/Time stamp\nstring id\nautoware_adapi_v1_msgs/DiagNodeStatus[] nodes\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagGraphStatus/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/DiagNodeStatus
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagGraphStatus/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagGraphStruct/","title":"autoware_adapi_v1_msgs/msg/DiagGraphStruct","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagGraphStruct/#autoware_adapi_v1_msgsmsgdiaggraphstruct","title":"autoware_adapi_v1_msgs/msg/DiagGraphStruct","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagGraphStruct/#definition","title":"Definition","text":"
    builtin_interfaces/Time stamp\nstring id\nautoware_adapi_v1_msgs/DiagNodeStruct[] nodes\nautoware_adapi_v1_msgs/DiagLinkStruct[] links\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagGraphStruct/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/DiagLinkStruct
    • autoware_adapi_v1_msgs/msg/DiagNodeStruct
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagGraphStruct/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagLinkStruct/","title":"autoware_adapi_v1_msgs/msg/DiagLinkStruct","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagLinkStruct/#autoware_adapi_v1_msgsmsgdiaglinkstruct","title":"autoware_adapi_v1_msgs/msg/DiagLinkStruct","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagLinkStruct/#definition","title":"Definition","text":"
    # The index of nodes in the graph struct message.\nuint32 parent\nuint32 child\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagLinkStruct/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagLinkStruct/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/msg/DiagGraphStruct
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagNodeStatus/","title":"autoware_adapi_v1_msgs/msg/DiagNodeStatus","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagNodeStatus/#autoware_adapi_v1_msgsmsgdiagnodestatus","title":"autoware_adapi_v1_msgs/msg/DiagNodeStatus","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagNodeStatus/#definition","title":"Definition","text":"
    # The level of diagnostic_msgs/msg/DiagnosticStatus.\nbyte level\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagNodeStatus/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagNodeStatus/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/msg/DiagGraphStatus
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagNodeStruct/","title":"autoware_adapi_v1_msgs/msg/DiagNodeStruct","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagNodeStruct/#autoware_adapi_v1_msgsmsgdiagnodestruct","title":"autoware_adapi_v1_msgs/msg/DiagNodeStruct","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagNodeStruct/#definition","title":"Definition","text":"
    string path\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagNodeStruct/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagNodeStruct/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/msg/DiagGraphStruct
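    Taken together, DiagGraphStruct carries the static node paths and index-based links, while DiagGraphStatus carries the per-node levels in the same node order. A minimal sketch of joining the two (assuming both messages were received and share the same id; the function name is illustrative):
    from autoware_adapi_v1_msgs.msg import DiagGraphStatus, DiagGraphStruct

    def describe_links(struct: DiagGraphStruct, status: DiagGraphStatus) -> list:
        if struct.id != status.id:
            return []  # struct and status describe different graph versions
        lines = []
        for link in struct.links:
            parent = struct.nodes[link.parent].path
            child = struct.nodes[link.child].path
            level = status.nodes[link.child].level  # level as defined by diagnostic_msgs/DiagnosticStatus
            lines.append(f"{parent} -> {child} (child level: {level})")
        return lines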
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DoorCommand/","title":"autoware_adapi_v1_msgs/msg/DoorCommand","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DoorCommand/#autoware_adapi_v1_msgsmsgdoorcommand","title":"autoware_adapi_v1_msgs/msg/DoorCommand","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DoorCommand/#definition","title":"Definition","text":"
    uint8 OPEN = 1\nuint8 CLOSE = 2\n\nuint32 index\nuint8 command\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DoorCommand/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DoorCommand/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/srv/SetDoorCommand
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DoorLayout/","title":"autoware_adapi_v1_msgs/msg/DoorLayout","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DoorLayout/#autoware_adapi_v1_msgsmsgdoorlayout","title":"autoware_adapi_v1_msgs/msg/DoorLayout","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DoorLayout/#definition","title":"Definition","text":"
    uint8 GET_ON = 1\nuint8 GET_OFF = 2\n\nuint8[] roles\nstring description\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DoorLayout/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DoorLayout/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/srv/GetDoorLayout
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DoorStatus/","title":"autoware_adapi_v1_msgs/msg/DoorStatus","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DoorStatus/#autoware_adapi_v1_msgsmsgdoorstatus","title":"autoware_adapi_v1_msgs/msg/DoorStatus","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DoorStatus/#definition","title":"Definition","text":"
    uint8 UNKNOWN = 0\nuint8 NOT_AVAILABLE = 1\nuint8 OPENED = 2\nuint8 CLOSED = 3\nuint8 OPENING = 4\nuint8 CLOSING = 5\n\nuint8 status\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DoorStatus/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DoorStatus/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/msg/DoorStatusArray
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DoorStatusArray/","title":"autoware_adapi_v1_msgs/msg/DoorStatusArray","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DoorStatusArray/#autoware_adapi_v1_msgsmsgdoorstatusarray","title":"autoware_adapi_v1_msgs/msg/DoorStatusArray","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DoorStatusArray/#definition","title":"Definition","text":"
    builtin_interfaces/Time stamp\nautoware_adapi_v1_msgs/DoorStatus[] doors\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DoorStatusArray/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/DoorStatus
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DoorStatusArray/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DynamicObject/","title":"autoware_adapi_v1_msgs/msg/DynamicObject","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DynamicObject/#autoware_adapi_v1_msgsmsgdynamicobject","title":"autoware_adapi_v1_msgs/msg/DynamicObject","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DynamicObject/#definition","title":"Definition","text":"
    unique_identifier_msgs/UUID id\nfloat64 existence_probability\nautoware_adapi_v1_msgs/ObjectClassification[] classification\nautoware_adapi_v1_msgs/DynamicObjectKinematics kinematics\nshape_msgs/SolidPrimitive shape\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DynamicObject/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/DynamicObjectKinematics
    • autoware_adapi_v1_msgs/msg/ObjectClassification
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DynamicObject/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/msg/DynamicObjectArray
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DynamicObjectArray/","title":"autoware_adapi_v1_msgs/msg/DynamicObjectArray","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DynamicObjectArray/#autoware_adapi_v1_msgsmsgdynamicobjectarray","title":"autoware_adapi_v1_msgs/msg/DynamicObjectArray","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DynamicObjectArray/#definition","title":"Definition","text":"
    std_msgs/Header header\nautoware_adapi_v1_msgs/DynamicObject[] objects\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DynamicObjectArray/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/DynamicObject
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DynamicObjectArray/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DynamicObjectKinematics/","title":"autoware_adapi_v1_msgs/msg/DynamicObjectKinematics","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DynamicObjectKinematics/#autoware_adapi_v1_msgsmsgdynamicobjectkinematics","title":"autoware_adapi_v1_msgs/msg/DynamicObjectKinematics","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DynamicObjectKinematics/#definition","title":"Definition","text":"
    geometry_msgs/Pose pose\ngeometry_msgs/Twist twist\ngeometry_msgs/Accel accel\n\nautoware_adapi_v1_msgs/DynamicObjectPath[] predicted_paths\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DynamicObjectKinematics/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/DynamicObjectPath
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DynamicObjectKinematics/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/msg/DynamicObject
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DynamicObjectPath/","title":"autoware_adapi_v1_msgs/msg/DynamicObjectPath","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DynamicObjectPath/#autoware_adapi_v1_msgsmsgdynamicobjectpath","title":"autoware_adapi_v1_msgs/msg/DynamicObjectPath","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DynamicObjectPath/#definition","title":"Definition","text":"
    geometry_msgs/Pose[] path\nbuiltin_interfaces/Duration time_step\nfloat64 confidence\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DynamicObjectPath/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DynamicObjectPath/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/msg/DynamicObjectKinematics
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/Gear/","title":"autoware_adapi_v1_msgs/msg/Gear","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/Gear/#autoware_adapi_v1_msgsmsggear","title":"autoware_adapi_v1_msgs/msg/Gear","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/Gear/#definition","title":"Definition","text":"
    # constants\nuint8 UNKNOWN = 0\nuint8 NEUTRAL = 1\nuint8 DRIVE = 2\nuint8 REVERSE = 3\nuint8 PARK = 4\nuint8 LOW = 5\n\nuint8 status\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/Gear/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/Gear/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/msg/GearCommand
    • autoware_adapi_v1_msgs/msg/VehicleStatus
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/GearCommand/","title":"autoware_adapi_v1_msgs/msg/GearCommand","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/GearCommand/#autoware_adapi_v1_msgsmsggearcommand","title":"autoware_adapi_v1_msgs/msg/GearCommand","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/GearCommand/#definition","title":"Definition","text":"
    builtin_interfaces/Time stamp\nautoware_adapi_v1_msgs/Gear command\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/GearCommand/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/Gear
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/GearCommand/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/HazardLights/","title":"autoware_adapi_v1_msgs/msg/HazardLights","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/HazardLights/#autoware_adapi_v1_msgsmsghazardlights","title":"autoware_adapi_v1_msgs/msg/HazardLights","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/HazardLights/#definition","title":"Definition","text":"
    # constants\nuint8 UNKNOWN = 0\nuint8 DISABLE = 1\nuint8 ENABLE = 2\n\nuint8 status\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/HazardLights/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/HazardLights/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/msg/HazardLightsCommand
    • autoware_adapi_v1_msgs/msg/VehicleStatus
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/HazardLightsCommand/","title":"autoware_adapi_v1_msgs/msg/HazardLightsCommand","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/HazardLightsCommand/#autoware_adapi_v1_msgsmsghazardlightscommand","title":"autoware_adapi_v1_msgs/msg/HazardLightsCommand","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/HazardLightsCommand/#definition","title":"Definition","text":"
    builtin_interfaces/Time stamp\nautoware_adapi_v1_msgs/HazardLights command\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/HazardLightsCommand/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/HazardLights
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/HazardLightsCommand/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/Heartbeat/","title":"autoware_adapi_v1_msgs/msg/Heartbeat","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/Heartbeat/#autoware_adapi_v1_msgsmsgheartbeat","title":"autoware_adapi_v1_msgs/msg/Heartbeat","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/Heartbeat/#definition","title":"Definition","text":"
    # Timestamp in Autoware for delay checking.\nbuiltin_interfaces/Time stamp\n\n# Sequence number for order verification, wraps at 65535.\nuint16 seq\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/Heartbeat/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/Heartbeat/#this-type-is-used-by","title":"This type is used by","text":"
    • None
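    Since the Heartbeat comments describe delay checking and a sequence number that wraps at 65535, a consumer can verify both. The following is a minimal rclpy sketch; the topic name /api/system/heartbeat and the QoS depth are assumptions for illustration, not stated on this page:
    import rclpy
    from rclpy.node import Node
    from rclpy.time import Time
    from autoware_adapi_v1_msgs.msg import Heartbeat

    class HeartbeatMonitor(Node):
        def __init__(self):
            super().__init__("heartbeat_monitor")
            self.prev_seq = None
            self.create_subscription(Heartbeat, "/api/system/heartbeat", self.on_heartbeat, 10)

        def on_heartbeat(self, msg):
            delay = self.get_clock().now() - Time.from_msg(msg.stamp)
            if self.prev_seq is not None and msg.seq != (self.prev_seq + 1) % 65536:
                self.get_logger().warn(f"heartbeat sequence gap: {self.prev_seq} -> {msg.seq}")
            self.prev_seq = msg.seq
            self.get_logger().info(f"heartbeat delay: {delay.nanoseconds / 1e9:.3f} s")

    def main():
        rclpy.init()
        rclpy.spin(HeartbeatMonitor())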
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/LocalizationInitializationState/","title":"autoware_adapi_v1_msgs/msg/LocalizationInitializationState","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/LocalizationInitializationState/#autoware_adapi_v1_msgsmsglocalizationinitializationstate","title":"autoware_adapi_v1_msgs/msg/LocalizationInitializationState","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/LocalizationInitializationState/#definition","title":"Definition","text":"
    uint16 UNKNOWN = 0\nuint16 UNINITIALIZED = 1\nuint16 INITIALIZING = 2\nuint16 INITIALIZED = 3\n\nbuiltin_interfaces/Time stamp\nuint16 state\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/LocalizationInitializationState/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/LocalizationInitializationState/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ManualControlMode/","title":"autoware_adapi_v1_msgs/msg/ManualControlMode","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ManualControlMode/#autoware_adapi_v1_msgsmsgmanualcontrolmode","title":"autoware_adapi_v1_msgs/msg/ManualControlMode","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ManualControlMode/#definition","title":"Definition","text":"
    uint8 DISABLED = 1\nuint8 PEDAL = 2\nuint8 ACCELERATION = 3\n\nuint8 mode\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ManualControlMode/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ManualControlMode/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/msg/ManualControlModeStatus
    • autoware_adapi_v1_msgs/srv/ListManualControlMode
    • autoware_adapi_v1_msgs/srv/SelectManualControlMode
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ManualControlModeStatus/","title":"autoware_adapi_v1_msgs/msg/ManualControlModeStatus","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ManualControlModeStatus/#autoware_adapi_v1_msgsmsgmanualcontrolmodestatus","title":"autoware_adapi_v1_msgs/msg/ManualControlModeStatus","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ManualControlModeStatus/#definition","title":"Definition","text":"
    builtin_interfaces/Time stamp\nautoware_adapi_v1_msgs/ManualControlMode mode\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ManualControlModeStatus/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/ManualControlMode
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ManualControlModeStatus/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ManualOperatorStatus/","title":"autoware_adapi_v1_msgs/msg/ManualOperatorStatus","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ManualOperatorStatus/#autoware_adapi_v1_msgsmsgmanualoperatorstatus","title":"autoware_adapi_v1_msgs/msg/ManualOperatorStatus","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ManualOperatorStatus/#definition","title":"Definition","text":"
    builtin_interfaces/Time stamp\nbool ready\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ManualOperatorStatus/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ManualOperatorStatus/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/MotionState/","title":"autoware_adapi_v1_msgs/msg/MotionState","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/MotionState/#autoware_adapi_v1_msgsmsgmotionstate","title":"autoware_adapi_v1_msgs/msg/MotionState","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/MotionState/#definition","title":"Definition","text":"
    uint16 UNKNOWN = 0\nuint16 STOPPED = 1\nuint16 STARTING = 2\nuint16 MOVING = 3\n\nbuiltin_interfaces/Time stamp\nuint16 state\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/MotionState/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/MotionState/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/MrmState/","title":"autoware_adapi_v1_msgs/msg/MrmState","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/MrmState/#autoware_adapi_v1_msgsmsgmrmstate","title":"autoware_adapi_v1_msgs/msg/MrmState","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/MrmState/#definition","title":"Definition","text":"
    builtin_interfaces/Time stamp\n\n# For common use\nuint16 UNKNOWN = 0\n\n# For state\nuint16 NORMAL = 1\nuint16 MRM_OPERATING = 2\nuint16 MRM_SUCCEEDED = 3\nuint16 MRM_FAILED = 4\n\n# For behavior\nuint16 NONE = 1\nuint16 EMERGENCY_STOP = 2\nuint16 COMFORTABLE_STOP = 3\nuint16 PULL_OVER = 4\n\nuint16 state\nuint16 behavior\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/MrmState/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/MrmState/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ObjectClassification/","title":"autoware_adapi_v1_msgs/msg/ObjectClassification","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ObjectClassification/#autoware_adapi_v1_msgsmsgobjectclassification","title":"autoware_adapi_v1_msgs/msg/ObjectClassification","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ObjectClassification/#definition","title":"Definition","text":"
    uint8 UNKNOWN = 0\nuint8 CAR = 1\nuint8 TRUCK = 2\nuint8 BUS = 3\nuint8 TRAILER = 4\nuint8 MOTORCYCLE = 5\nuint8 BICYCLE = 6\nuint8 PEDESTRIAN = 7\n\nuint8 label\nfloat64 probability\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ObjectClassification/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ObjectClassification/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/msg/DynamicObject
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/OperationModeState/","title":"autoware_adapi_v1_msgs/msg/OperationModeState","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/OperationModeState/#autoware_adapi_v1_msgsmsgoperationmodestate","title":"autoware_adapi_v1_msgs/msg/OperationModeState","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/OperationModeState/#definition","title":"Definition","text":"
    # constants for mode\nuint8 UNKNOWN = 0\nuint8 STOP = 1\nuint8 AUTONOMOUS = 2\nuint8 LOCAL = 3\nuint8 REMOTE = 4\n\n# variables\nbuiltin_interfaces/Time stamp\nuint8 mode\nbool is_autoware_control_enabled\nbool is_in_transition\nbool is_stop_mode_available\nbool is_autonomous_mode_available\nbool is_local_mode_available\nbool is_remote_mode_available\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/OperationModeState/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/OperationModeState/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/PedalCommand/","title":"autoware_adapi_v1_msgs/msg/PedalCommand","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/PedalCommand/#autoware_adapi_v1_msgsmsgpedalcommand","title":"autoware_adapi_v1_msgs/msg/PedalCommand","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/PedalCommand/#definition","title":"Definition","text":"
    builtin_interfaces/Time stamp\nfloat32 accelerator\nfloat32 brake\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/PedalCommand/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/PedalCommand/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ResponseStatus/","title":"autoware_adapi_v1_msgs/msg/ResponseStatus","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ResponseStatus/#autoware_adapi_v1_msgsmsgresponsestatus","title":"autoware_adapi_v1_msgs/msg/ResponseStatus","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ResponseStatus/#definition","title":"Definition","text":"
    # error code\nuint16 UNKNOWN = 50000\nuint16 SERVICE_UNREADY = 50001\nuint16 SERVICE_TIMEOUT = 50002\nuint16 TRANSFORM_ERROR = 50003\nuint16 PARAMETER_ERROR = 50004\n\n# warning code\nuint16 DEPRECATED = 60000\nuint16 NO_EFFECT = 60001\n\n# variables\nbool   success\nuint16 code\nstring message\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ResponseStatus/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ResponseStatus/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/srv/AcceptStart
    • autoware_adapi_v1_msgs/srv/ChangeOperationMode
    • autoware_adapi_v1_msgs/srv/ClearRoute
    • autoware_adapi_v1_msgs/srv/GetCooperationPolicies
    • autoware_adapi_v1_msgs/srv/GetDoorLayout
    • autoware_adapi_v1_msgs/srv/GetVehicleDimensions
    • autoware_adapi_v1_msgs/srv/InitializeLocalization
    • autoware_adapi_v1_msgs/srv/ListManualControlMode
    • autoware_adapi_v1_msgs/srv/SelectManualControlMode
    • autoware_adapi_v1_msgs/srv/SetCooperationCommands
    • autoware_adapi_v1_msgs/srv/SetCooperationPolicies
    • autoware_adapi_v1_msgs/srv/SetDoorCommand
    • autoware_adapi_v1_msgs/srv/SetRoute
    • autoware_adapi_v1_msgs/srv/SetRoutePoints
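    Because every service response in this index embeds a ResponseStatus, clients usually branch on success, code, and message. A hedged sketch using the ClearRoute service listed below (the service name /api/routing/clear_route is an assumption, not stated on this page):
    import rclpy
    from rclpy.node import Node
    from autoware_adapi_v1_msgs.srv import ClearRoute

    def clear_route():
        rclpy.init()
        node = Node("clear_route_client")
        client = node.create_client(ClearRoute, "/api/routing/clear_route")
        if not client.wait_for_service(timeout_sec=3.0):
            raise RuntimeError("ClearRoute service is not available")
        future = client.call_async(ClearRoute.Request())
        rclpy.spin_until_future_complete(node, future)
        status = future.result().status
        if not status.success:
            # status.code is one of the error or warning constants defined above
            node.get_logger().error(f"ClearRoute failed: code={status.code} message={status.message}")
        rclpy.shutdown()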
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/Route/","title":"autoware_adapi_v1_msgs/msg/Route","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/Route/#autoware_adapi_v1_msgsmsgroute","title":"autoware_adapi_v1_msgs/msg/Route","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/Route/#definition","title":"Definition","text":"
    std_msgs/Header header\nautoware_adapi_v1_msgs/RouteData[<=1] data\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/Route/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/RouteData
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/Route/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RouteData/","title":"autoware_adapi_v1_msgs/msg/RouteData","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RouteData/#autoware_adapi_v1_msgsmsgroutedata","title":"autoware_adapi_v1_msgs/msg/RouteData","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RouteData/#definition","title":"Definition","text":"
    geometry_msgs/Pose start\ngeometry_msgs/Pose goal\nautoware_adapi_v1_msgs/RouteSegment[] segments\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RouteData/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/RouteSegment
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RouteData/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/msg/Route
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RouteOption/","title":"autoware_adapi_v1_msgs/msg/RouteOption","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RouteOption/#autoware_adapi_v1_msgsmsgrouteoption","title":"autoware_adapi_v1_msgs/msg/RouteOption","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RouteOption/#definition","title":"Definition","text":"
    # Please refer to the following pages for details on each option.\n# https://autowarefoundation.github.io/autoware-documentation/main/design/autoware-interfaces/ad-api/features/routing/\n\nbool allow_goal_modification\nbool allow_while_using_route\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RouteOption/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RouteOption/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/srv/SetRoute
    • autoware_adapi_v1_msgs/srv/SetRoutePoints
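    As a small illustration of the two flags (values chosen arbitrarily; see the routing feature page referenced in the definition for their exact semantics), a RouteOption attached to a SetRoute or SetRoutePoints request could be filled in as follows:
    from autoware_adapi_v1_msgs.msg import RouteOption

    option = RouteOption()
    option.allow_goal_modification = True   # whether Autoware may adjust the requested goal
    option.allow_while_using_route = False  # whether the route may be replaced while one is in use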
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RoutePrimitive/","title":"autoware_adapi_v1_msgs/msg/RoutePrimitive","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RoutePrimitive/#autoware_adapi_v1_msgsmsgrouteprimitive","title":"autoware_adapi_v1_msgs/msg/RoutePrimitive","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RoutePrimitive/#definition","title":"Definition","text":"
    int64 id\nstring type  # The same id may be used for each type.\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RoutePrimitive/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RoutePrimitive/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/msg/RouteSegment
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RouteSegment/","title":"autoware_adapi_v1_msgs/msg/RouteSegment","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RouteSegment/#autoware_adapi_v1_msgsmsgroutesegment","title":"autoware_adapi_v1_msgs/msg/RouteSegment","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RouteSegment/#definition","title":"Definition","text":"
    autoware_adapi_v1_msgs/RoutePrimitive   preferred\nautoware_adapi_v1_msgs/RoutePrimitive[] alternatives  # Does not include the preferred primitive.\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RouteSegment/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/RoutePrimitive
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RouteSegment/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/msg/RouteData
    • autoware_adapi_v1_msgs/srv/SetRoute
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RouteState/","title":"autoware_adapi_v1_msgs/msg/RouteState","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RouteState/#autoware_adapi_v1_msgsmsgroutestate","title":"autoware_adapi_v1_msgs/msg/RouteState","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RouteState/#definition","title":"Definition","text":"
    uint16 UNKNOWN = 0\nuint16 UNSET = 1\nuint16 SET = 2\nuint16 ARRIVED = 3\nuint16 CHANGING = 4\n\nbuiltin_interfaces/Time stamp\nuint16 state\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RouteState/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RouteState/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/SteeringCommand/","title":"autoware_adapi_v1_msgs/msg/SteeringCommand","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/SteeringCommand/#autoware_adapi_v1_msgsmsgsteeringcommand","title":"autoware_adapi_v1_msgs/msg/SteeringCommand","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/SteeringCommand/#definition","title":"Definition","text":"
    builtin_interfaces/Time stamp\nfloat32 steering_tire_angle\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/SteeringCommand/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/SteeringCommand/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/SteeringFactor/","title":"autoware_adapi_v1_msgs/msg/SteeringFactor","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/SteeringFactor/#autoware_adapi_v1_msgsmsgsteeringfactor","title":"autoware_adapi_v1_msgs/msg/SteeringFactor","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/SteeringFactor/#definition","title":"Definition","text":"
    # constants for common use\nuint16 UNKNOWN = 0\n\n# constants for direction\nuint16 LEFT = 1\nuint16 RIGHT = 2\nuint16 STRAIGHT = 3\n\n# constants for status\nuint16 APPROACHING = 1\nuint16 TURNING = 3\n\n# variables\ngeometry_msgs/Pose[2] pose\nfloat32[2] distance\nuint16 direction\nuint16 status\nstring behavior\nstring sequence\nstring detail\nautoware_adapi_v1_msgs/CooperationStatus[<=1] cooperation\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/SteeringFactor/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/CooperationStatus
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/SteeringFactor/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/msg/SteeringFactorArray
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/SteeringFactorArray/","title":"autoware_adapi_v1_msgs/msg/SteeringFactorArray","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/SteeringFactorArray/#autoware_adapi_v1_msgsmsgsteeringfactorarray","title":"autoware_adapi_v1_msgs/msg/SteeringFactorArray","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/SteeringFactorArray/#definition","title":"Definition","text":"
    std_msgs/Header header\nautoware_adapi_v1_msgs/SteeringFactor[] factors\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/SteeringFactorArray/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/SteeringFactor
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/SteeringFactorArray/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/TurnIndicators/","title":"autoware_adapi_v1_msgs/msg/TurnIndicators","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/TurnIndicators/#autoware_adapi_v1_msgsmsgturnindicators","title":"autoware_adapi_v1_msgs/msg/TurnIndicators","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/TurnIndicators/#definition","title":"Definition","text":"
    # constants\nuint8 UNKNOWN = 0\nuint8 DISABLE = 1\nuint8 LEFT = 2\nuint8 RIGHT = 3\n\nuint8 status\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/TurnIndicators/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/TurnIndicators/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/msg/TurnIndicatorsCommand
    • autoware_adapi_v1_msgs/msg/VehicleStatus
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/TurnIndicatorsCommand/","title":"autoware_adapi_v1_msgs/msg/TurnIndicatorsCommand","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/TurnIndicatorsCommand/#autoware_adapi_v1_msgsmsgturnindicatorscommand","title":"autoware_adapi_v1_msgs/msg/TurnIndicatorsCommand","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/TurnIndicatorsCommand/#definition","title":"Definition","text":"
    builtin_interfaces/Time stamp\nautoware_adapi_v1_msgs/TurnIndicators command\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/TurnIndicatorsCommand/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/TurnIndicators
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/TurnIndicatorsCommand/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VehicleDimensions/","title":"autoware_adapi_v1_msgs/msg/VehicleDimensions","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VehicleDimensions/#autoware_adapi_v1_msgsmsgvehicledimensions","title":"autoware_adapi_v1_msgs/msg/VehicleDimensions","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VehicleDimensions/#definition","title":"Definition","text":"
    float32 wheel_radius\nfloat32 wheel_width\nfloat32 wheel_base\nfloat32 wheel_tread\nfloat32 front_overhang\nfloat32 rear_overhang\nfloat32 left_overhang\nfloat32 right_overhang\nfloat32 height\ngeometry_msgs/Polygon footprint\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VehicleDimensions/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VehicleDimensions/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/srv/GetVehicleDimensions
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VehicleKinematics/","title":"autoware_adapi_v1_msgs/msg/VehicleKinematics","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VehicleKinematics/#autoware_adapi_v1_msgsmsgvehiclekinematics","title":"autoware_adapi_v1_msgs/msg/VehicleKinematics","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VehicleKinematics/#definition","title":"Definition","text":"
    # Geographic point, using the WGS 84 reference ellipsoid.\n# This data will be invalid if Autoware does not provide projection information between geographic coordinates and local coordinates.\ngeographic_msgs/GeoPointStamped geographic_pose\n\n# Local coordinates used by Autoware.\ngeometry_msgs/PoseWithCovarianceStamped pose\ngeometry_msgs/TwistWithCovarianceStamped twist\ngeometry_msgs/AccelWithCovarianceStamped accel\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VehicleKinematics/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VehicleKinematics/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VehicleStatus/","title":"autoware_adapi_v1_msgs/msg/VehicleStatus","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VehicleStatus/#autoware_adapi_v1_msgsmsgvehiclestatus","title":"autoware_adapi_v1_msgs/msg/VehicleStatus","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VehicleStatus/#definition","title":"Definition","text":"
    builtin_interfaces/Time stamp\nautoware_adapi_v1_msgs/Gear gear\nautoware_adapi_v1_msgs/TurnIndicators turn_indicators\nautoware_adapi_v1_msgs/HazardLights hazard_lights\nfloat64 steering_tire_angle\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VehicleStatus/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/Gear
    • autoware_adapi_v1_msgs/msg/HazardLights
    • autoware_adapi_v1_msgs/msg/TurnIndicators
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VehicleStatus/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VelocityFactor/","title":"autoware_adapi_v1_msgs/msg/VelocityFactor","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VelocityFactor/#autoware_adapi_v1_msgsmsgvelocityfactor","title":"autoware_adapi_v1_msgs/msg/VelocityFactor","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VelocityFactor/#definition","title":"Definition","text":"
    # constants for common use\nuint16 UNKNOWN = 0\n\n# constants for status\nuint16 APPROACHING = 1\nuint16 STOPPED = 2\n\n# variables\ngeometry_msgs/Pose pose\nfloat32 distance\nuint16 status\nstring behavior\nstring sequence\nstring detail\nautoware_adapi_v1_msgs/CooperationStatus[<=1] cooperation\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VelocityFactor/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/CooperationStatus
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VelocityFactor/#this-type-is-used-by","title":"This type is used by","text":"
    • autoware_adapi_v1_msgs/msg/VelocityFactorArray
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VelocityFactorArray/","title":"autoware_adapi_v1_msgs/msg/VelocityFactorArray","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VelocityFactorArray/#autoware_adapi_v1_msgsmsgvelocityfactorarray","title":"autoware_adapi_v1_msgs/msg/VelocityFactorArray","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VelocityFactorArray/#definition","title":"Definition","text":"
    std_msgs/Header header\nautoware_adapi_v1_msgs/VelocityFactor[] factors\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VelocityFactorArray/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/VelocityFactor
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VelocityFactorArray/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/AcceptStart/","title":"autoware_adapi_v1_msgs/srv/AcceptStart","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/AcceptStart/#autoware_adapi_v1_msgssrvacceptstart","title":"autoware_adapi_v1_msgs/srv/AcceptStart","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/AcceptStart/#definition","title":"Definition","text":"
    ---\nuint16 ERROR_NOT_STARTING = 1\nautoware_adapi_v1_msgs/ResponseStatus status\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/AcceptStart/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/ResponseStatus
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/AcceptStart/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/ChangeOperationMode/","title":"autoware_adapi_v1_msgs/srv/ChangeOperationMode","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/ChangeOperationMode/#autoware_adapi_v1_msgssrvchangeoperationmode","title":"autoware_adapi_v1_msgs/srv/ChangeOperationMode","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/ChangeOperationMode/#definition","title":"Definition","text":"
    ---\nuint16 ERROR_NOT_AVAILABLE = 1\nuint16 ERROR_IN_TRANSITION = 2\nautoware_adapi_v1_msgs/ResponseStatus status\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/ChangeOperationMode/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/ResponseStatus
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/ChangeOperationMode/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/ClearRoute/","title":"autoware_adapi_v1_msgs/srv/ClearRoute","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/ClearRoute/#autoware_adapi_v1_msgssrvclearroute","title":"autoware_adapi_v1_msgs/srv/ClearRoute","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/ClearRoute/#definition","title":"Definition","text":"
    ---\nautoware_adapi_v1_msgs/ResponseStatus status\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/ClearRoute/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/ResponseStatus
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/ClearRoute/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/GetCooperationPolicies/","title":"autoware_adapi_v1_msgs/srv/GetCooperationPolicies","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/GetCooperationPolicies/#autoware_adapi_v1_msgssrvgetcooperationpolicies","title":"autoware_adapi_v1_msgs/srv/GetCooperationPolicies","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/GetCooperationPolicies/#definition","title":"Definition","text":"
    ---\nautoware_adapi_v1_msgs/ResponseStatus status\nautoware_adapi_v1_msgs/CooperationPolicy[] policies\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/GetCooperationPolicies/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/CooperationPolicy
    • autoware_adapi_v1_msgs/msg/ResponseStatus
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/GetCooperationPolicies/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/GetDoorLayout/","title":"autoware_adapi_v1_msgs/srv/GetDoorLayout","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/GetDoorLayout/#autoware_adapi_v1_msgssrvgetdoorlayout","title":"autoware_adapi_v1_msgs/srv/GetDoorLayout","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/GetDoorLayout/#definition","title":"Definition","text":"
    ---\nautoware_adapi_v1_msgs/ResponseStatus status\nautoware_adapi_v1_msgs/DoorLayout[] doors\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/GetDoorLayout/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/DoorLayout
    • autoware_adapi_v1_msgs/msg/ResponseStatus
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/GetDoorLayout/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/GetVehicleDimensions/","title":"autoware_adapi_v1_msgs/srv/GetVehicleDimensions","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/GetVehicleDimensions/#autoware_adapi_v1_msgssrvgetvehicledimensions","title":"autoware_adapi_v1_msgs/srv/GetVehicleDimensions","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/GetVehicleDimensions/#definition","title":"Definition","text":"
    ---\nautoware_adapi_v1_msgs/ResponseStatus status\nautoware_adapi_v1_msgs/VehicleDimensions dimensions\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/GetVehicleDimensions/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/ResponseStatus
    • autoware_adapi_v1_msgs/msg/VehicleDimensions
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/GetVehicleDimensions/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/InitializeLocalization/","title":"autoware_adapi_v1_msgs/srv/InitializeLocalization","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/InitializeLocalization/#autoware_adapi_v1_msgssrvinitializelocalization","title":"autoware_adapi_v1_msgs/srv/InitializeLocalization","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/InitializeLocalization/#definition","title":"Definition","text":"
    geometry_msgs/PoseWithCovarianceStamped[<=1] pose\n---\nuint16 ERROR_UNSAFE = 1\nuint16 ERROR_GNSS_SUPPORT = 2\nuint16 ERROR_GNSS = 3\nuint16 ERROR_ESTIMATION = 4\nautoware_adapi_v1_msgs/ResponseStatus status\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/InitializeLocalization/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/ResponseStatus
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/InitializeLocalization/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/ListManualControlMode/","title":"autoware_adapi_v1_msgs/srv/ListManualControlMode","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/ListManualControlMode/#autoware_adapi_v1_msgssrvlistmanualcontrolmode","title":"autoware_adapi_v1_msgs/srv/ListManualControlMode","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/ListManualControlMode/#definition","title":"Definition","text":"
    ---\nautoware_adapi_v1_msgs/ResponseStatus status\nautoware_adapi_v1_msgs/ManualControlMode[] modes\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/ListManualControlMode/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/ManualControlMode
    • autoware_adapi_v1_msgs/msg/ResponseStatus
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/ListManualControlMode/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SelectManualControlMode/","title":"autoware_adapi_v1_msgs/srv/SelectManualControlMode","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SelectManualControlMode/#autoware_adapi_v1_msgssrvselectmanualcontrolmode","title":"autoware_adapi_v1_msgs/srv/SelectManualControlMode","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SelectManualControlMode/#definition","title":"Definition","text":"
    autoware_adapi_v1_msgs/ManualControlMode mode\n---\nautoware_adapi_v1_msgs/ResponseStatus status\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SelectManualControlMode/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/ManualControlMode
    • autoware_adapi_v1_msgs/msg/ResponseStatus
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SelectManualControlMode/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetCooperationCommands/","title":"autoware_adapi_v1_msgs/srv/SetCooperationCommands","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetCooperationCommands/#autoware_adapi_v1_msgssrvsetcooperationcommands","title":"autoware_adapi_v1_msgs/srv/SetCooperationCommands","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetCooperationCommands/#definition","title":"Definition","text":"
    autoware_adapi_v1_msgs/CooperationCommand[] commands\n---\nautoware_adapi_v1_msgs/ResponseStatus status\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetCooperationCommands/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/CooperationCommand
    • autoware_adapi_v1_msgs/msg/ResponseStatus
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetCooperationCommands/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetCooperationPolicies/","title":"autoware_adapi_v1_msgs/srv/SetCooperationPolicies","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetCooperationPolicies/#autoware_adapi_v1_msgssrvsetcooperationpolicies","title":"autoware_adapi_v1_msgs/srv/SetCooperationPolicies","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetCooperationPolicies/#definition","title":"Definition","text":"
    autoware_adapi_v1_msgs/CooperationPolicy[] policies\n---\nautoware_adapi_v1_msgs/ResponseStatus status\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetCooperationPolicies/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/CooperationPolicy
    • autoware_adapi_v1_msgs/msg/ResponseStatus
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetCooperationPolicies/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetDoorCommand/","title":"autoware_adapi_v1_msgs/srv/SetDoorCommand","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetDoorCommand/#autoware_adapi_v1_msgssrvsetdoorcommand","title":"autoware_adapi_v1_msgs/srv/SetDoorCommand","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetDoorCommand/#definition","title":"Definition","text":"
    autoware_adapi_v1_msgs/DoorCommand[] doors\n---\nautoware_adapi_v1_msgs/ResponseStatus status\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetDoorCommand/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/DoorCommand
    • autoware_adapi_v1_msgs/msg/ResponseStatus
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetDoorCommand/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetRoute/","title":"autoware_adapi_v1_msgs/srv/SetRoute","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetRoute/#autoware_adapi_v1_msgssrvsetroute","title":"autoware_adapi_v1_msgs/srv/SetRoute","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetRoute/#definition","title":"Definition","text":"
    std_msgs/Header header\nautoware_adapi_v1_msgs/RouteOption option\ngeometry_msgs/Pose goal\nautoware_adapi_v1_msgs/RouteSegment[] segments\n---\nuint16 ERROR_ROUTE_EXISTS = 1 # Deprecated. Use ERROR_INVALID_STATE.\nuint16 ERROR_INVALID_STATE = 1\nuint16 ERROR_PLANNER_UNREADY = 2\nuint16 ERROR_PLANNER_FAILED = 3\nuint16 ERROR_REROUTE_FAILED = 4\nautoware_adapi_v1_msgs/ResponseStatus status\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetRoute/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/ResponseStatus
    • autoware_adapi_v1_msgs/msg/RouteOption
    • autoware_adapi_v1_msgs/msg/RouteSegment
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetRoute/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetRoutePoints/","title":"autoware_adapi_v1_msgs/srv/SetRoutePoints","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetRoutePoints/#autoware_adapi_v1_msgssrvsetroutepoints","title":"autoware_adapi_v1_msgs/srv/SetRoutePoints","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetRoutePoints/#definition","title":"Definition","text":"
    std_msgs/Header header\nautoware_adapi_v1_msgs/RouteOption option\ngeometry_msgs/Pose goal\ngeometry_msgs/Pose[] waypoints\n---\nuint16 ERROR_ROUTE_EXISTS = 1 # Deprecated. Use ERROR_INVALID_STATE.\nuint16 ERROR_INVALID_STATE = 1\nuint16 ERROR_PLANNER_UNREADY = 2\nuint16 ERROR_PLANNER_FAILED = 3\nuint16 ERROR_REROUTE_FAILED = 4\nautoware_adapi_v1_msgs/ResponseStatus status\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetRoutePoints/#this-type-uses","title":"This type uses","text":"
    • autoware_adapi_v1_msgs/msg/ResponseStatus
    • autoware_adapi_v1_msgs/msg/RouteOption
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetRoutePoints/#this-type-is-used-by","title":"This type is used by","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_version_msgs/srv/InterfaceVersion/","title":"autoware_adapi_version_msgs/srv/InterfaceVersion","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_version_msgs/srv/InterfaceVersion/#autoware_adapi_version_msgssrvinterfaceversion","title":"autoware_adapi_version_msgs/srv/InterfaceVersion","text":""},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_version_msgs/srv/InterfaceVersion/#definition","title":"Definition","text":"
    ---\nuint16 major\nuint16 minor\nuint16 patch\n
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_version_msgs/srv/InterfaceVersion/#this-type-uses","title":"This type uses","text":"
    • None
    "},{"location":"design/autoware-interfaces/ad-api/types/autoware_adapi_version_msgs/srv/InterfaceVersion/#this-type-is-used-by","title":"This type is used by","text":"
    • None
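A minimal client sketch for this service. The route /api/interface/version is an assumption based on the AD API conventions and should be verified against your deployment.

```python
# Hypothetical sketch: query the AD API interface version.
# Assumes the service route /api/interface/version (verify in your deployment).
import rclpy
from rclpy.node import Node
from autoware_adapi_version_msgs.srv import InterfaceVersion


def main():
    rclpy.init()
    node = Node("interface_version_client")
    client = node.create_client(InterfaceVersion, "/api/interface/version")
    client.wait_for_service()

    future = client.call_async(InterfaceVersion.Request())
    rclpy.spin_until_future_complete(node, future)
    version = future.result()
    node.get_logger().info(f"AD API version: {version.major}.{version.minor}.{version.patch}")
    rclpy.shutdown()


if __name__ == "__main__":
    main()
```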
    "},{"location":"design/autoware-interfaces/ad-api/use-cases/change-operation-mode/","title":"Change the operation mode","text":""},{"location":"design/autoware-interfaces/ad-api/use-cases/change-operation-mode/#change-the-operation-mode","title":"Change the operation mode","text":""},{"location":"design/autoware-interfaces/ad-api/use-cases/change-operation-mode/#related-api","title":"Related API","text":"
    • Operation mode
    "},{"location":"design/autoware-interfaces/ad-api/use-cases/change-operation-mode/#sequence","title":"Sequence","text":"
    • Change the mode with software switch.

    • Change the mode with hardware switch.
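To complement the software-switch sequence above, here is a minimal sketch that requests autonomous mode through the Operation mode API. The service route /api/operation_mode/change_to_autonomous and the service type autoware_adapi_v1_msgs/srv/ChangeOperationMode are assumptions and should be verified against your AD API version.

```python
# Hypothetical sketch: switch to autonomous mode via the Operation mode API.
# The route and service type below are assumptions; verify them in your AD API version.
import rclpy
from rclpy.node import Node
from autoware_adapi_v1_msgs.srv import ChangeOperationMode


def main():
    rclpy.init()
    node = Node("operation_mode_client")
    client = node.create_client(ChangeOperationMode, "/api/operation_mode/change_to_autonomous")
    client.wait_for_service()

    future = client.call_async(ChangeOperationMode.Request())
    rclpy.spin_until_future_complete(node, future)
    node.get_logger().info(f"success: {future.result().status.success}")
    rclpy.shutdown()


if __name__ == "__main__":
    main()
```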

    "},{"location":"design/autoware-interfaces/ad-api/use-cases/drive-designated-position/","title":"Drive to the designated position","text":""},{"location":"design/autoware-interfaces/ad-api/use-cases/drive-designated-position/#drive-to-the-designated-position","title":"Drive to the designated position","text":""},{"location":"design/autoware-interfaces/ad-api/use-cases/drive-designated-position/#related-api","title":"Related API","text":"
    • Operation mode
    • Routing
    "},{"location":"design/autoware-interfaces/ad-api/use-cases/drive-designated-position/#sequence","title":"Sequence","text":""},{"location":"design/autoware-interfaces/ad-api/use-cases/get-on-off/","title":"Get on and get off","text":""},{"location":"design/autoware-interfaces/ad-api/use-cases/get-on-off/#get-on-and-get-off","title":"Get on and get off","text":""},{"location":"design/autoware-interfaces/ad-api/use-cases/get-on-off/#related-api","title":"Related API","text":"
    • Vehicle doors
    "},{"location":"design/autoware-interfaces/ad-api/use-cases/get-on-off/#sequence","title":"Sequence","text":""},{"location":"design/autoware-interfaces/ad-api/use-cases/initialize-pose/","title":"Initialize the pose","text":""},{"location":"design/autoware-interfaces/ad-api/use-cases/initialize-pose/#initialize-the-pose","title":"Initialize the pose","text":""},{"location":"design/autoware-interfaces/ad-api/use-cases/initialize-pose/#related-api","title":"Related API","text":"
    • Localization
    "},{"location":"design/autoware-interfaces/ad-api/use-cases/initialize-pose/#sequence","title":"Sequence","text":"
    • Initialization of the pose using input.

    • Initialization of the pose using GNSS.

    "},{"location":"design/autoware-interfaces/ad-api/use-cases/launch-terminate/","title":"Launch and terminate","text":""},{"location":"design/autoware-interfaces/ad-api/use-cases/launch-terminate/#launch-and-terminate","title":"Launch and terminate","text":""},{"location":"design/autoware-interfaces/ad-api/use-cases/launch-terminate/#related-api","title":"Related API","text":"
    • T.B.D.
    "},{"location":"design/autoware-interfaces/ad-api/use-cases/launch-terminate/#sequence","title":"Sequence","text":""},{"location":"design/autoware-interfaces/ad-api/use-cases/system-monitoring/","title":"System monitoring","text":""},{"location":"design/autoware-interfaces/ad-api/use-cases/system-monitoring/#system-monitoring","title":"System monitoring","text":""},{"location":"design/autoware-interfaces/ad-api/use-cases/system-monitoring/#heartbeat","title":"Heartbeat","text":"

    Heartbeat is a reference for checking whether communication with Autoware is being performed properly.
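A minimal sketch of a heartbeat watchdog built on this idea. The topic name /api/system/heartbeat, the message type autoware_adapi_v1_msgs/msg/Heartbeat, and the staleness threshold are assumptions; adapt them to the actual heartbeat interface of your AD API version.

```python
# Hypothetical sketch: warn if no heartbeat has been received recently.
# Topic, message type, and threshold are assumptions; adapt to your AD API version.
import rclpy
from rclpy.node import Node
from autoware_adapi_v1_msgs.msg import Heartbeat


class HeartbeatWatchdog(Node):
    def __init__(self):
        super().__init__("heartbeat_watchdog")
        self._last_rx = None
        self.create_subscription(Heartbeat, "/api/system/heartbeat", self._on_heartbeat, 10)
        self.create_timer(1.0, self._check)

    def _on_heartbeat(self, msg):
        self._last_rx = self.get_clock().now()

    def _check(self):
        if self._last_rx is None:
            self.get_logger().warn("no heartbeat received yet")
            return
        age = (self.get_clock().now() - self._last_rx).nanoseconds * 1e-9
        if age > 2.0:  # illustrative staleness threshold [s]
            self.get_logger().warn(f"heartbeat is stale ({age:.1f} s old)")


def main():
    rclpy.init()
    rclpy.spin(HeartbeatWatchdog())


if __name__ == "__main__":
    main()
```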

    "},{"location":"design/autoware-interfaces/ad-api/use-cases/system-monitoring/#diagnostics","title":"Diagnostics","text":"

    Diagnostics is a set of error levels for each functional unit of Autoware. The design and structure of functional units are system dependent. This is useful to identify the cause of an abnormality.

    "},{"location":"design/autoware-interfaces/ad-api/use-cases/vehicle-monitoring/","title":"Vehicle monitoring","text":""},{"location":"design/autoware-interfaces/ad-api/use-cases/vehicle-monitoring/#vehicle-monitoring","title":"Vehicle monitoring","text":"

    AD API provides current vehicle status for remote monitoring, visualization for passengers, etc. Use the API below depending on the data you want to monitor.

    "},{"location":"design/autoware-interfaces/ad-api/use-cases/vehicle-monitoring/#vehicle-status","title":"Vehicle status","text":"

    The vehicle status provides basic information such as kinematics, indicators, and dimensions. This allows a remote operator to know the position and velocity of the vehicle. For applications such as FMS, it can help find vehicles that need assistance, such as vehicles that are stuck or brake suddenly. It is also possible to determine the actual distance to an object from the vehicle dimensions.
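As an illustration of the last point, the following sketch estimates the clearance between the front bumper and an object whose longitudinal distance from base_link (the rear-axle center projected to the ground) is known. The parameter names wheel_base and front_overhang follow the usual dimension naming and are assumptions here; check them against the vehicle dimensions message.

```python
# Hypothetical sketch: distance from the front bumper to an object,
# given the object's longitudinal offset from base_link (rear-axle center).
# Parameter names (wheel_base, front_overhang) are assumptions; check VehicleDimensions.
def front_clearance(object_x_from_base_link: float, wheel_base: float, front_overhang: float) -> float:
    bumper_x_from_base_link = wheel_base + front_overhang
    return object_x_from_base_link - bumper_x_from_base_link


# Example with illustrative numbers (meters)
print(front_clearance(object_x_from_base_link=12.0, wheel_base=2.75, front_overhang=0.8))  # 8.45
```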

    "},{"location":"design/autoware-interfaces/ad-api/use-cases/vehicle-monitoring/#planning-factors","title":"Planning factors","text":"

The planning factors provide the planning status of the vehicle. The HMI can use this to warn of sudden vehicle movements and to share the stop reason with passengers for a comfortable ride.

    "},{"location":"design/autoware-interfaces/ad-api/use-cases/vehicle-monitoring/#detected-objects","title":"Detected objects","text":"

The perception output provides the objects detected by Autoware. The HMI can use this to visualize objects around the vehicle.

    "},{"location":"design/autoware-interfaces/ad-api/use-cases/vehicle-operation/","title":"Vehicle operation","text":""},{"location":"design/autoware-interfaces/ad-api/use-cases/vehicle-operation/#vehicle-operation","title":"Vehicle operation","text":""},{"location":"design/autoware-interfaces/ad-api/use-cases/vehicle-operation/#request-to-intervene","title":"Request to intervene","text":"

Request to intervene (RTI) is a feature that requires the operator to switch to manual driving mode. It is also called Take Over Request (TOR). Interfaces for RTI are currently being discussed. For now, assume that manual driving is requested if the MRM state is not NORMAL. See fail-safe for details.

    "},{"location":"design/autoware-interfaces/ad-api/use-cases/vehicle-operation/#request-to-cooperate","title":"Request to cooperate","text":"

Request to cooperate (RTC) is a feature in which the operator supports Autoware's decisions while staying in autonomous driving mode. Autoware usually drives the vehicle using its own decisions, but the operator may prefer to make their own decisions in complex situations. Since RTC only overrides the decision and does not require changing the operation mode, the vehicle can continue autonomous driving, unlike RTI. See cooperation for details.

    "},{"location":"design/autoware-interfaces/ad-api/use-cases/manual-control/","title":"Manual control","text":""},{"location":"design/autoware-interfaces/ad-api/use-cases/manual-control/#manual-control","title":"Manual control","text":""},{"location":"design/autoware-interfaces/ad-api/use-cases/manual-control/#description","title":"Description","text":"

To operate a vehicle without a steering wheel, or to develop a remote operation system, an interface to manually drive a vehicle using Autoware is required.

    "},{"location":"design/autoware-interfaces/ad-api/use-cases/manual-control/#requirements","title":"Requirements","text":"
    • The vehicle can transition to a safe state when a communication problem occurs.
    • Supports the following commands and statuses.
      • Pedal or acceleration as longitudinal control
      • Steering tire angle as lateral control
      • Gear
      • Turn indicators
      • Hazard lights
    • The following are under consideration.
      • Headlights
      • Wipers
      • Parking brake
      • Horn
    "},{"location":"design/autoware-interfaces/ad-api/use-cases/manual-control/#sequence","title":"Sequence","text":""},{"location":"design/autoware-interfaces/ad-api/use-cases/manual-control/#related-features","title":"Related features","text":"
    • Manual control
    • Vehicle status
    "},{"location":"design/autoware-interfaces/components/","title":"Component interfaces","text":""},{"location":"design/autoware-interfaces/components/#component-interfaces","title":"Component interfaces","text":"

    Warning

    Under Construction

    See here for an overview.

    "},{"location":"design/autoware-interfaces/components/control/","title":"Control","text":""},{"location":"design/autoware-interfaces/components/control/#control","title":"Control","text":""},{"location":"design/autoware-interfaces/components/control/#inputs","title":"Inputs","text":""},{"location":"design/autoware-interfaces/components/control/#vehicle-kinematic-state","title":"Vehicle kinematic state","text":"

    Current position and orientation of ego. Published by the Localization module.

    • nav_msgs/Odometry
      • std_msgs/Header header
      • string child_frame_id
      • geometry_msgs/PoseWithCovariance pose
      • geometry_msgs/TwistWithCovariance twist
    "},{"location":"design/autoware-interfaces/components/control/#trajectory","title":"Trajectory","text":"

Trajectory to be followed by the controller. See Outputs of Planning.

    "},{"location":"design/autoware-interfaces/components/control/#steering-status","title":"Steering Status","text":"

    Current steering of the ego vehicle. Published by the Vehicle Interface.

    • Steering message (github discussion).
      • builtin_interfaces::msg::Time stamp
      • float32 steering_angle
    "},{"location":"design/autoware-interfaces/components/control/#actuation-status","title":"Actuation Status","text":"

    Actuation status of the ego vehicle for acceleration, steering, and brake.

TODO: This represents the reported physical efforts exerted by the vehicle actuators. Published by the Vehicle Interface.

    • ActuationStatus (github discussion).
      • builtin_interfaces::msg::Time stamp
      • float32 acceleration
      • float32 steering
    "},{"location":"design/autoware-interfaces/components/control/#output","title":"Output","text":""},{"location":"design/autoware-interfaces/components/control/#vehicle-control-command","title":"Vehicle Control Command","text":"

A motion signal to drive the vehicle, achieved by the low-level controller in the vehicle layer. Used by the Vehicle Interface. A minimal publisher sketch follows the field list below.

    • autoware_control_msgs/Control
      • builtin_interfaces::msg::Time stamp
      • autoware_control_msgs/Lateral lateral
        • builtin_interfaces::msg::Time stamp
        • float steering_tire_angle
        • float steering_tire_rotation_rate
      • autoware_control_msgs/Longitudinal longitudinal
        • builtin_interfaces::msg::Time stamp
        • builtin_interfaces::msg::Duration duration
        • builtin_interfaces::msg::Duration time_step
        • float[] speeds
        • float[] accelerations
        • float[] jerks
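The following is a minimal publisher sketch for this command. Since these component interface pages are under construction, only fields that appear in the listing above and that I expect to exist in the released autoware_control_msgs definition are populated; verify the field layout against the message version in your workspace. The topic name /control/command/control_cmd is taken from the Vehicle Interface page later in this documentation.

```python
# Hypothetical sketch: publish a vehicle control command.
# Only fields from the listing above are populated; verify against your autoware_control_msgs version.
import rclpy
from rclpy.node import Node
from autoware_control_msgs.msg import Control


class ControlCommandPublisher(Node):
    def __init__(self):
        super().__init__("control_command_publisher")
        self._pub = self.create_publisher(Control, "/control/command/control_cmd", 1)
        self.create_timer(0.033, self._publish)  # ~30 Hz, illustrative rate

    def _publish(self):
        msg = Control()
        msg.stamp = self.get_clock().now().to_msg()
        msg.lateral.stamp = msg.stamp
        msg.lateral.steering_tire_angle = 0.0          # [rad]
        msg.lateral.steering_tire_rotation_rate = 0.0  # [rad/s]
        msg.longitudinal.stamp = msg.stamp
        self._pub.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(ControlCommandPublisher())


if __name__ == "__main__":
    main()
```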
    "},{"location":"design/autoware-interfaces/components/localization/","title":"Localization","text":""},{"location":"design/autoware-interfaces/components/localization/#localization","title":"Localization","text":""},{"location":"design/autoware-interfaces/components/localization/#inputs","title":"Inputs","text":""},{"location":"design/autoware-interfaces/components/localization/#pointcloud-map","title":"Pointcloud Map","text":"

    Environment map created with point cloud, published by the map server.

    • sensor_msgs/msg/PointCloud2

A 3D point cloud map is used for LiDAR-based localization in Autoware.

    "},{"location":"design/autoware-interfaces/components/localization/#manual-initial-pose","title":"Manual Initial Pose","text":"

    Start pose of ego, published by the user interface.

    • geometry_msgs/msg/PoseWithCovarianceStamped
      • std_msgs/msg/Header header
      • geometry_msgs/msg/PoseWithCovariance pose
        • geometry_msgs/msg/Pose pose
          • geometry_msgs/msg/Point position
      • geometry_msgs/msg/Quaternion orientation
        • double[36] covariance
    "},{"location":"design/autoware-interfaces/components/localization/#3d-lidar-scanning","title":"3D-LiDAR Scanning","text":"

    LiDAR scanning for NDT matching, published by the LiDAR sensor.

    • sensor_msgs/msg/PointCloud2

    The raw 3D-LiDAR data needs to be processed by the point cloud pre-processing modules before being used for localization.

    "},{"location":"design/autoware-interfaces/components/localization/#automatic-initial-pose","title":"Automatic Initial pose","text":"

Start pose of ego, calculated from INS (inertial navigation sensor) sensing data.

    • geometry_msgs/msg/PoseWithCovarianceStamped
      • std_msgs/msg/Header header
      • geometry_msgs/msg/PoseWithCovariance pose
        • geometry_msgs/msg/Pose pose
          • geometry_msgs/msg/Point position
      • geometry_msgs/msg/Quaternion orientation
        • double[36] covariance

    When the initial pose is not set manually, the message can be used for automatic pose initialization.

Current geographic coordinates of ego, published by the GNSS sensor.

    • sensor_msgs/msg/NavSatFix
      • std_msgs/msg/Header header
      • sensor_msgs/msg/NavSatStatus status
      • double latitude
      • double longitude
      • double altitude
      • double[9] position_covariance
  • uint8 position_covariance_type

    Current orientation of the ego, published by the GNSS-INS.

    • autoware_sensing_msgs/msg/GnssInsOrientationStamped
      • std_msgs/Header header
      • autoware_sensing_msgs/msg/GnssInsOrientation orientation
        • geometry_msgs/Quaternion orientation
        • float32 rmse_rotation_x
        • float32 rmse_rotation_y
        • float32 rmse_rotation_z
    "},{"location":"design/autoware-interfaces/components/localization/#imu-data","title":"IMU Data","text":"

    Current orientation, angular velocity and linear acceleration of ego, calculated from IMU sensing data.

    • sensor_msgs/msg/Imu
      • std_msgs/msg/Header header
      • geometry_msgs/msg/Quaternion orientation
      • double[9] orientation_covariance
      • geometry_msgs/msg/Vector3 angular_velocity
      • double[9] angular_velocity_covariance
      • geometry_msgs/msg/Vector3 linear_acceleration
      • double[9] linear_acceleration_covariance
    "},{"location":"design/autoware-interfaces/components/localization/#vehicle-velocity-status","title":"Vehicle Velocity Status","text":"

    Current velocity of the ego vehicle, published by the vehicle interface.

    • autoware_vehicle_msgs/msg/VelocityReport
  • std_msgs/msg/Header header
  • float longitudinal_velocity
  • float lateral_velocity
  • float heading_rate

Before the velocity is input to the localization interface, the autoware_vehicle_velocity_converter module converts the message type from autoware_vehicle_msgs/msg/VelocityReport to geometry_msgs/msg/TwistWithCovarianceStamped.
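A simplified sketch of that conversion is shown below. The topic names are placeholders, and the covariance handling of the actual autoware_vehicle_velocity_converter (which is configurable) is omitted.

```python
# Simplified sketch of converting VelocityReport into TwistWithCovarianceStamped.
# Topic names are placeholders; the real converter also fills configurable covariances.
import rclpy
from rclpy.node import Node
from autoware_vehicle_msgs.msg import VelocityReport
from geometry_msgs.msg import TwistWithCovarianceStamped


class VelocityConverter(Node):
    def __init__(self):
        super().__init__("velocity_converter_sketch")
        self._pub = self.create_publisher(TwistWithCovarianceStamped, "twist_with_covariance", 10)
        self.create_subscription(VelocityReport, "velocity_status", self._on_velocity, 10)

    def _on_velocity(self, msg: VelocityReport):
        twist = TwistWithCovarianceStamped()
        twist.header = msg.header
        twist.twist.twist.linear.x = float(msg.longitudinal_velocity)
        twist.twist.twist.linear.y = float(msg.lateral_velocity)
        twist.twist.twist.angular.z = float(msg.heading_rate)
        self._pub.publish(twist)


def main():
    rclpy.init()
    rclpy.spin(VelocityConverter())


if __name__ == "__main__":
    main()
```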

    "},{"location":"design/autoware-interfaces/components/localization/#outputs","title":"Outputs","text":""},{"location":"design/autoware-interfaces/components/localization/#vehicle-pose","title":"Vehicle pose","text":"

    Current pose of ego, calculated from localization interface.

    • geometry_msgs/msg/PoseWithCovarianceStamped
      • std_msgs/msg/Header header
  • geometry_msgs/PoseWithCovariance pose
        • geometry_msgs/msg/Pose pose
          • geometry_msgs/msg/Point position
          • geometry_msgs/msg/Quaternion orientation
        • double[36] covariance
    "},{"location":"design/autoware-interfaces/components/localization/#vehicle-velocity","title":"Vehicle velocity","text":"

    Current velocity of ego, calculated from localization interface.

    • geometry_msgs/msg/TwistWithCovarianceStamped
      • std_msgs/msg/Header header
  • geometry_msgs/TwistWithCovariance twist
        • geometry_msgs/msg/Twist twist
          • geometry_msgs/msg/Vector3 linear
          • geometry_msgs/msg/Vector3 angular
        • double[36] covariance
    "},{"location":"design/autoware-interfaces/components/localization/#vehicle-acceleration","title":"Vehicle acceleration","text":"

    Current acceleration of ego, calculated from localization interface.

    • geometry_msgs/msg/AccelWithCovarianceStamped
      • std_msgs/msg/Header header
  • geometry_msgs/AccelWithCovariance accel
        • geometry_msgs/msg/Accel accel
          • geometry_msgs/msg/Vector3 linear
          • geometry_msgs/msg/Vector3 angular
        • double[36] covariance
    "},{"location":"design/autoware-interfaces/components/localization/#vehicle-kinematic-state","title":"Vehicle kinematic state","text":"

    Current pose, velocity and acceleration of ego, calculated from localization interface.

    Note: Kinematic state contains pose, velocity and acceleration. In the future, pose, velocity and acceleration will not be used as output for localization.

    • autoware_msgs/autoware_localization_msgs/msg/KinematicState
      • std_msgs/msg/Header header
      • string child_frame_id
      • geometry_msgs/PoseWithCovariance pose_with_covariance
      • geometry_msgs/TwistWithCovariance twist_with_covariance
      • geometry_msgs/AccelWithCovariance accel_with_covariance

This message is subscribed to by the planning and control modules.

    "},{"location":"design/autoware-interfaces/components/localization/#localization-accuracy","title":"Localization Accuracy","text":"

    Diagnostics information that indicates if the localization module works properly.

    TBD.

    "},{"location":"design/autoware-interfaces/components/map/","title":"Map","text":""},{"location":"design/autoware-interfaces/components/map/#map","title":"Map","text":""},{"location":"design/autoware-interfaces/components/map/#overview","title":"Overview","text":"

    Autoware relies on high-definition point cloud maps and vector maps of the driving environment to perform various tasks. Before launching Autoware, you need to load the pre-created map files.

    "},{"location":"design/autoware-interfaces/components/map/#inputs","title":"Inputs","text":"
    • Point cloud maps (.pcd)
    • Lanelet2 maps (.osm)

    Refer to Creating maps on how to create maps.

    "},{"location":"design/autoware-interfaces/components/map/#outputs","title":"Outputs","text":""},{"location":"design/autoware-interfaces/components/map/#point-cloud-map","title":"Point cloud map","text":"

    It loads point cloud files and publishes the maps to the other Autoware nodes in various configurations. Currently, it supports the following types:

    • Raw point cloud map (sensor_msgs/msg/PointCloud2)
    • Downsampled point cloud map (sensor_msgs/msg/PointCloud2)
    • Partial point cloud map loading via ROS service (autoware_map_msgs/srv/GetPartialPointCloudMap)
    • Differential point cloud map loading via ROS service (autoware_map_msgs/srv/GetDifferentialPointCloudMap)
    "},{"location":"design/autoware-interfaces/components/map/#lanelet2-map","title":"Lanelet2 map","text":"

It loads a Lanelet2 file and publishes the map data as an autoware_map_msgs/msg/LaneletMapBin message. The lat/lon coordinates are projected onto MGRS coordinates. A minimal subscriber sketch follows the field list below.

    • autoware_map_msgs/msg/LaneletMapBin
      • std_msgs/Header header
      • string version_map_format
      • string version_map
      • string name_map
      • uint8[] data
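A minimal subscriber sketch for this output. The /map/vector_map topic name is taken from the component tables later in this documentation; the transient_local (latched-style) QoS is an assumption about how the map loader publishes its single map message, so verify both against your setup.

```python
# Hypothetical sketch: receive the Lanelet2 map binary once.
# A transient_local QoS is assumed so late subscribers still get the single publication.
import rclpy
from rclpy.node import Node
from rclpy.qos import QoSProfile, DurabilityPolicy, ReliabilityPolicy
from autoware_map_msgs.msg import LaneletMapBin


class LaneletMapListener(Node):
    def __init__(self):
        super().__init__("lanelet_map_listener")
        qos = QoSProfile(
            depth=1,
            reliability=ReliabilityPolicy.RELIABLE,
            durability=DurabilityPolicy.TRANSIENT_LOCAL,
        )
        self.create_subscription(LaneletMapBin, "/map/vector_map", self._on_map, qos)

    def _on_map(self, msg: LaneletMapBin):
        self.get_logger().info(
            f"received map '{msg.name_map}' (format {msg.version_map_format}, {len(msg.data)} bytes)"
        )


def main():
    rclpy.init()
    rclpy.spin(LaneletMapListener())


if __name__ == "__main__":
    main()
```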
    "},{"location":"design/autoware-interfaces/components/map/#lanelet2-map-visualization","title":"Lanelet2 map visualization","text":"

Visualize autoware_map_msgs/msg/LaneletMapBin messages in RViz.

    • visualization_msgs/msg/MarkerArray
    "},{"location":"design/autoware-interfaces/components/perception-interface/","title":"Perception","text":""},{"location":"design/autoware-interfaces/components/perception-interface/#perception","title":"Perception","text":"
    graph TD\n    cmp_sen(\"Sensing\"):::cls_sen\n    cmp_loc(\"Localization\"):::cls_loc\n    cmp_per(\"Perception\"):::cls_per\n    cmp_plan(\"Planning\"):::cls_plan\n\n    msg_img(\"<font size=2><b>Camera Image</b></font size>\n    <font size=1>sensor_msgs/Image</font size>\"):::cls_sen\n\n    msg_ldr(\"<font size=2><b>Lidar Point Cloud</b></font size>\n    <font size=1>sensor_msgs/PointCloud2</font size>\"):::cls_sen\n\n    msg_lanenet(\"<font size=2><b>Lanelet2 Map</b></font size>\n    <font size=1>autoware_map_msgs/LaneletMapBin</font size>\"):::cls_loc\n\n    msg_vks(\"<font size=2><b>Vehicle Kinematic State</b></font size>\n    <font size=1>nav_msgs/Odometry</font size>\"):::cls_loc\n\n    msg_obj(\"<font size=2><b>3D Object Predictions </b></font size>\n    <font size=1>autoware_perception_msgs/PredictedObjects</font size>\"):::cls_per\n\n    msg_tl(\"<font size=2><b>Traffic Light Response </b></font size>\n    <font size=1>autoware_perception_msgs/TrafficSignalArray</font size>\"):::cls_per\n\n    msg_tq(\"<font size=2><b>Traffic Light Query </b></font size>\n    <font size=1>TBD</font size>\"):::cls_plan\n\n\n    cmp_sen --> msg_img --> cmp_per\n    cmp_sen --> msg_ldr --> cmp_per\n    cmp_per --> msg_obj --> cmp_plan\n    cmp_per --> msg_tl --> cmp_plan\n    cmp_plan --> msg_tq -->cmp_per\n\n    cmp_loc --> msg_vks --> cmp_per\n    cmp_loc --> msg_lanenet --> cmp_per\n\nclassDef cmp_sen fill:#F8CECC,stroke:#999,stroke-width:1px;\nclassDef cls_loc fill:#D5E8D4,stroke:#999,stroke-width:1px;\nclassDef cls_per fill:#FFF2CC,stroke:#999,stroke-width:1px;\nclassDef cls_plan fill:#5AB8FF,stroke:#999,stroke-width:1px;
    "},{"location":"design/autoware-interfaces/components/perception-interface/#inputs","title":"Inputs","text":""},{"location":"design/autoware-interfaces/components/perception-interface/#pointcloud","title":"PointCloud","text":"

    PointCloud data published by Lidar.

    • sensor_msgs/msg/PointCloud2
    "},{"location":"design/autoware-interfaces/components/perception-interface/#image","title":"Image","text":"

    Image frame captured by camera.

    • sensor_msgs/msg/Image
    "},{"location":"design/autoware-interfaces/components/perception-interface/#vehicle-kinematic-state","title":"Vehicle kinematic state","text":"

Current position of ego, used in traffic signal recognition. See outputs of Localization.

    "},{"location":"design/autoware-interfaces/components/perception-interface/#lanelet2-map","title":"Lanelet2 Map","text":"

Map of the environment. See outputs of Map.

    "},{"location":"design/autoware-interfaces/components/perception-interface/#output","title":"Output","text":""},{"location":"design/autoware-interfaces/components/perception-interface/#3d-object-predictions","title":"3D Object Predictions","text":"

3D objects detected, tracked, and predicted by sensor fusion. A minimal subscriber sketch follows the field list below.

    • autoware_perception_msgs/msg/PredictedObjects
      • std_msgs/Header header
      • sequence<autoware_perception_msgs::msg::PredictedObject> objects
        • unique_identifier_msgs::msg::UUID uuid
        • float existence_probability
        • sequence<autoware_perception_msgs::msg::ObjectClassification> classification
          • uint8 classification
          • float probability
        • autoware_perception_msgs::msg::PredictedObjectKinematics kinematics
          • geometry_msgs::msg::PoseWithCovariance initial_pose
          • geometry_msgs::msg::TwistWithCovariance
          • geometry_msgs::msg::AccelWithCovariance initial_acceleration
        • sequence<autoware_perception_msgs::msg::PredictedPath, 10> predicted_paths
          • sequence<geometry_msgs::msg::Pose, 100> path
          • builtin_interfaces::msg::Duration time_step
          • float confidence
        • sequence<autoware_perception_msgs::msg::Shape, 5> shape
          • geometry_msgs::msg::Polygon polygon
          • float height
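A minimal subscriber sketch for this output, accessing only fields from the listing above. The /perception/object_recognition/objects topic name follows the Perception output table later in this documentation; the field layout should be verified against your autoware_perception_msgs version.

```python
# Hypothetical sketch: log the detected/predicted objects.
# Only fields from the listing above are accessed; verify against your message version.
import rclpy
from rclpy.node import Node
from autoware_perception_msgs.msg import PredictedObjects


class ObjectListener(Node):
    def __init__(self):
        super().__init__("predicted_objects_listener")
        self.create_subscription(
            PredictedObjects, "/perception/object_recognition/objects", self._on_objects, 10
        )

    def _on_objects(self, msg: PredictedObjects):
        self.get_logger().info(f"{len(msg.objects)} predicted objects")
        for i, obj in enumerate(msg.objects):
            self.get_logger().info(f"  object {i}: existence probability {obj.existence_probability:.2f}")


def main():
    rclpy.init()
    rclpy.spin(ObjectListener())


if __name__ == "__main__":
    main()
```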
    "},{"location":"design/autoware-interfaces/components/perception-interface/#traffic-signals","title":"Traffic Signals","text":"

Traffic signals recognized by the object detection model.

    • autoware_perception_msgs::msg::TrafficSignalArray
      • autoware_perception_msgs::msg::TrafficSignal signals
        • autoware_perception_msgs::msg::TrafficSignalElement elements
          • uint8 UNKNOWN = 0
      • uint8 RED = 1
      • uint8 AMBER = 2
      • uint8 GREEN = 3
      • uint8 WHITE = 4
          • uint8 CIRCLE = 1
          • uint8 LEFT_ARROW = 2
          • uint8 RIGHT_ARROW = 3
          • uint8 UP_ARROW = 4
          • uint8 UP_LEFT_ARROW=5
          • uint8 UP_RIGHT_ARROW=6
          • uint8 DOWN_ARROW = 7
          • uint8 DOWN_LEFT_ARROW = 8
          • uint8 DOWN_RIGHT_ARROW = 9
          • uint8 CROSS = 10
          • uint8 SOLID_OFF = 1
          • uint8 SOLID_ON = 2
          • uint8 FLASHING = 3
          • uint8 color
          • uint8 shape
          • uint8 status
          • float32 confidence
        • int64 traffic_signal_id
      • builtin_interfaces::msg::Time stamp
    "},{"location":"design/autoware-interfaces/components/perception/","title":"Perception","text":""},{"location":"design/autoware-interfaces/components/perception/#perception","title":"Perception","text":"

    This page provides specific specifications about the Interface of the Perception Component. Please refer to the perception architecture reference implementation design for concepts and data flow.

    "},{"location":"design/autoware-interfaces/components/perception/#input","title":"Input","text":""},{"location":"design/autoware-interfaces/components/perception/#from-map-component","title":"From Map Component","text":"Name Topic / Service Type Description Vector Map /map/vector_map autoware_map_msgs/msg/LaneletMapBin HD Map including the information about lanes Point Cloud Map /service/get_differential_pcd_map autoware_map_msgs/srv/GetDifferentialPointCloudMap Point Cloud Map

    Notes:

    • Point Cloud Map
  • The input can be provided as either a topic or a service, but we highly recommend using the service, since it enables processing without being constrained by map file size limits.
    "},{"location":"design/autoware-interfaces/components/perception/#from-sensing-component","title":"From Sensing Component","text":"Name Topic Type Description Camera Image /sensing/camera/camera*/image_rect_color sensor_msgs/Image Camera image data, processed with Lens Distortion Correction (LDC) Camera Image /sensing/camera/camera*/image_raw sensor_msgs/Image Camera image data, not processed with Lens Distortion Correction (LDC) Point Cloud /sensing/lidar/concatenated/pointcloud sensor_msgs/PointCloud2 Concatenated point cloud from multiple LiDAR sources Radar Object /sensing/radar/detected_objects autoware_perception_msgs/msg/DetectedObject Radar objects"},{"location":"design/autoware-interfaces/components/perception/#from-localization-component","title":"From Localization Component","text":"Name Topic Type Description Vehicle Odometry /localization/kinematic_state nav_msgs/msg/Odometry Ego vehicle odometry topic"},{"location":"design/autoware-interfaces/components/perception/#from-api","title":"From API","text":"Name Topic Type Description External Traffic Signals /external/traffic_signals autoware_perception_msgs::msg::TrafficSignalArray The traffic signals from an external system"},{"location":"design/autoware-interfaces/components/perception/#output","title":"Output","text":""},{"location":"design/autoware-interfaces/components/perception/#to-planning","title":"To Planning","text":"

Please refer to the perception component design for detailed definitions of each output.

Name Topic Type Description Dynamic Objects /perception/object_recognition/objects autoware_perception_msgs/msg/PredictedObjects Set of dynamic objects with information such as an object class and a shape of the objects. These objects did not exist when the map was generated and are not contained within the map. Obstacles /perception/obstacle_segmentation/pointcloud sensor_msgs/PointCloud2 Obstacles, including both dynamic objects and static obstacles, that require the ego vehicle to either steer clear of them or come to a stop in front of them. Occupancy Grid Map /perception/occupancy_grid_map/map nav_msgs/msg/OccupancyGrid The map with information about the presence of obstacles and blind spots Traffic Signal /perception/traffic_light_recognition/traffic_signals autoware_perception_msgs::msg::TrafficLightGroupArray The traffic signal information such as a color (green, yellow, red) and an arrow (right, left, straight)"},{"location":"design/autoware-interfaces/components/planning/","title":"Planning","text":""},{"location":"design/autoware-interfaces/components/planning/#planning","title":"Planning","text":"

    This page provides specific specifications about the Interface of the Planning Component. Please refer to the planning architecture design document for high-level concepts and data flow.

TODO: The detailed definitions (meanings of the elements included in each topic) are not described yet and need to be updated.

    "},{"location":"design/autoware-interfaces/components/planning/#input","title":"Input","text":""},{"location":"design/autoware-interfaces/components/planning/#from-map-component","title":"From Map Component","text":"Name Topic Type Description Vector Map /map/vector_map autoware_map_msgs/msg/LaneletMapBin Map of the environment where the planning takes place."},{"location":"design/autoware-interfaces/components/planning/#from-localization-component","title":"From Localization Component","text":"Name Topic Type Description Vehicle Kinematic State /localization/kinematic_state nav_msgs/msg/Odometry Current position, orientation and velocity of ego. Vehicle Acceleration /localization/acceleration geometry_msgs/msg/AccelWithCovarianceStamped Current acceleration of ego.

    TODO: acceleration information should be merged into the kinematic state.

    "},{"location":"design/autoware-interfaces/components/planning/#from-perception-component","title":"From Perception Component","text":"Name Topic Type Description Objects /perception/object_recognition/objects autoware_perception_msgs/msg/PredictedObjects Set of perceived objects around ego that need to be avoided or followed when planning a trajectory. This contains semantics information such as a object class (e.g. vehicle, pedestrian, etc) or a shape of the objects. Obstacles /perception/obstacle_segmentation/pointcloud sensor_msgs/msg/PointCloud2 Set of perceived obstacles around ego that need to be avoided or followed when planning a trajectory. This only contains a primitive information of the obstacle. No shape nor velocity information. Occupancy Grid Map /perception/occupancy_grid_map/map nav_msgs/msg/OccupancyGrid Contains the presence of obstacles and blind spot information (represented as UNKNOWN). Traffic Signal /perception/traffic_light_recognition/traffic_signals autoware_perception_msgs::msg::TrafficLightGroupArray Contains the traffic signal information such as a color (green, yellow, read) and an arrow (right, left, straight).

    TODO: The type of the Obstacles information should not depend on the specific sensor message type (now PointCloud). It needs to be fixed.

    "},{"location":"design/autoware-interfaces/components/planning/#from-api","title":"From API","text":"Name Topic Type Description Max Velocity /planning/scenario_planning/max_velocity_default autoware_adapi_v1_msgs/srv/SetRoutePoints Indicate the maximum value of the vehicle speed plan Operation Mode /system/operation_mode/state autoware_adapi_v1_msgs/msg/OperationModeState Indicates the current operation mode (automatic/manual, etc.). Route Set /planning/mission_planning/set_route autoware_adapi_v1_msgs/srv/SetRoute Indicates to set the route when the vehicle is stopped. Route Points Set /planning/mission_planning/set_route_points autoware_adapi_v1_msgs/srv/SetRoutePoints Indicates to set the route with points when the vehicle is stopped. Route Change /planning/mission_planning/change_route autoware_adapi_v1_msgs/srv/SetRoute Indicates to change the route when the vehicle is moving. Route Points Change /planning/mission_planning/change_route_points autoware_adapi_v1_msgs/srv/SetRoutePoints Indicates to change the route with points when the vehicle is moving. Route Clear /planning/mission_planning/clear_route autoware_adapi_v1_msgs/srv/ClearRoute Indicates to clear the route information. MRM Route Set Points /planning/mission_planning/mission_planner/srv/set_mrm_route autoware_adapi_v1_msgs/srv/SetRoutePoints Indicates to set the emergency route. MRM Route Clear /planning/mission_planning/mission_planner/srv/clear_mrm_route autoware_adapi_v1_msgs/srv/SetRoutePoints Indicates to clear the emergency route."},{"location":"design/autoware-interfaces/components/planning/#output","title":"Output","text":""},{"location":"design/autoware-interfaces/components/planning/#to-control","title":"To Control","text":"Name Topic Type Description Trajectory /planning/trajectory autoware_planning_msgs/msg/Trajectory A sequence of space and velocity and acceleration points to be followed by the controller. Turn Indicator /planning/turn_indicators_cmd autoware_vehicle_msgs/msg/TurnIndicatorsCommand Turn indicator signal to be followed by the vehicle. Hazard Light /planning/hazard_lights_cmd autoware_vehicle_msgs/msg/HazardLightsCommand Hazard light signal to be followed by the vehicle."},{"location":"design/autoware-interfaces/components/planning/#to-system","title":"To System","text":"Name Topic Type Description Diagnostics /planning/hazard_lights_cmd diagnostic_msgs/msg/DiagnosticArray Diagnostic status of the Planning component reported to the System component."},{"location":"design/autoware-interfaces/components/planning/#to-api","title":"To API","text":"Name Topic Type Description Path Candidate /planning/path_candidate/* autoware_planning_msgs/msg/Path The path Autoware is about to take. Users can interrupt the operation based on the path candidate information. Steering Factor /planning/steering_factor/* autoware_adapi_v1_msgs/msg/SteeringFactorArray Information about the steering maneuvers performed by Autoware (e.g., steering to the right for a right turn, etc.) Velocity Factor /planning/velocity_factors/* autoware_adapi_v1_msgs/msg/VelocityFactorArray Information about the velocity maneuvers performed by Autoware (e.g., stop for an obstacle, etc.)"},{"location":"design/autoware-interfaces/components/planning/#planning-internal-interface","title":"Planning internal interface","text":"

    This section explains the communication between the different planning modules shown in the Planning Architecture Design.

    "},{"location":"design/autoware-interfaces/components/planning/#from-mission-planning-to-scenario-planning","title":"From Mission Planning to Scenario Planning","text":"Name Topic Type Description Route /planning/mission_planning/route autoware_planning_msgs/msg/LaneletRoute A sequence of lane IDs on a Lanelet map, from the starting point to the destination."},{"location":"design/autoware-interfaces/components/planning/#from-behavior-planning-to-motion-planning","title":"From Behavior Planning to Motion Planning","text":"Name Topic Type Description Path /planning/scenario_planning/lane_driving/behavior_planning/path autoware_planning_msgs/msg/Path A sequence of approximate vehicle positions for driving, along with information on the maximum speed and the drivable areas. Modules receiving this message are expected to make changes to the path within the constraints of the drivable areas and the maximum speed, generating the desired final trajectory."},{"location":"design/autoware-interfaces/components/planning/#from-scenario-planning-to-validation","title":"From Scenario Planning to Validation","text":"Name Topic Type Description Trajectory /planning/scenario_planning/trajectory autoware_planning_msgs/msg/Trajectory A sequence of precise vehicle positions, speeds, and accelerations required for driving. It is expected that the vehicle will follow this trajectory."},{"location":"design/autoware-interfaces/components/sensing/","title":"Sensing","text":""},{"location":"design/autoware-interfaces/components/sensing/#sensing","title":"Sensing","text":"
    graph TD\n    cmp_drv(\"Drivers\"):::cls_drv\n    cmp_loc(\"Localization\"):::cls_loc\n    cmp_per(\"Perception\"):::cls_per\n    cmp_sen(\"Preprocessors\"):::cls_sen\n    msg_ult(\"<font size=2><b>Ultrasonics</b></font size>\n    <font size=1>sensor_msgs/Range</font size>\"):::cls_drv\n    msg_img(\"<font size=2><b>Camera Image</b></font size>\n    <font size=1>sensor_msgs/Image</font size>\"):::cls_drv\n    msg_ldr(\"<font size=2><b>Lidar Point Cloud</b></font size>\n    <font size=1>sensor_msgs/PointCloud2</font size>\"):::cls_drv\n    msg_rdr_t(\"<font size=2><b>Radar Tracks</b></font size>\n    <font size=1>radar_msgs/RadarTracks</font size>\"):::cls_drv\n    msg_rdr_s(\"<font size=2><b>Radar Scan</b></font size>\n    <font size=1>radar_msgs/RadarScan</font size>\"):::cls_drv\n    msg_gnss(\"<font size=2><b>GNSS-INS Position</b></font size>\n    <font size=1>sensor_msgs/NavSatFix</font size>\"):::cls_drv\n    msg_gnssori(\"<font size=2><b>GNSS-INS Orientation</b></font size>\n    <font size=1>autoware_sensing_msgs/GnssInsOrientationStamped</font size>\"):::cls_drv\n    msg_gnssvel(\"<font size=2><b>GNSS Velocity</b></font size>\n    <font size=1>geometry_msgs/TwistWithCovarianceStamped</font size>\"):::cls_drv\n    msg_gnssacc(\"<font size=2><b>GNSS Acceleration</b></font size>\n    <font size=1>geometry_msgs/AccelWithCovarianceStamped</font size>\"):::cls_drv\n    msg_ult_sen(\"<font size=2><b>Ultrasonics</b></font size>\n    <font size=1>sensor_msgs/Range</font size>\"):::cls_sen\n    msg_img_sen(\"<font size=2><b>Camera Image</b></font size>\n    <font size=1>sensor_msgs/Image</font size>\"):::cls_sen\n    msg_pc_combined_rdr(\"<font size=2><b>Combined Radar Tracks</b></font size>\n    <font size=1>radar_msgs/RadarTracks</font size>\"):::cls_sen\n    msg_pc_rdr(\"<font size=2><b>Radar Pointcloud</b></font size>\n    <font size=1>radar_msgs/RadarScan</font size>\"):::cls_sen\n    msg_pc_ldr(\"<font size=2><b>Lidar Point Cloud</b></font size>\n    <font size=1>sensor_msgs/PointCloud2</font size>\"):::cls_sen\n    msg_pose_gnss(\"<font size=2><b>GNSS-INS Pose</b></font size>\n    <font size=1>geometry_msgs/PoseWithCovarianceStamped</font size>\"):::cls_sen\n    msg_gnssori_sen(\"<font size=2><b>GNSS-INS Orientation</b></font size>\n    <font size=1>sensor_msgs/Imu</font size>\"):::cls_sen\n    msg_gnssvel_sen(\"<font size=2><b>GNSS Velocity</b></font size>\n    <font size=1>geometry_msgs/TwistWithCovarianceStamped</font size>\"):::cls_sen\n    msg_gnssacc_sen(\"<font size=2><b>GNSS-INS Acceleration</b></font size>\n    <font size=1>geometry_msgs/AccelWithCovarianceStamped</font size>\"):::cls_sen\n\n    cmp_drv --> msg_ult --> cmp_sen\n    cmp_drv --> msg_img --> cmp_sen\n    cmp_drv --> msg_rdr_t --> cmp_sen\n    cmp_drv --> msg_rdr_s --> cmp_sen\n    cmp_drv --> msg_ldr --> cmp_sen\n    cmp_drv --> msg_gnss --> cmp_sen\n    cmp_drv --> msg_gnssori --> cmp_sen\n    cmp_drv --> msg_gnssvel --> cmp_sen\n    cmp_drv --> msg_gnssacc --> cmp_sen\n\n    cmp_sen --> msg_ult_sen\n    cmp_sen --> msg_img_sen\n    cmp_sen --> msg_gnssori_sen\n    cmp_sen --> msg_gnssvel_sen\n    cmp_sen --> msg_pc_combined_rdr\n    cmp_sen --> msg_pc_rdr\n    cmp_sen --> msg_pc_ldr\n    cmp_sen --> msg_pose_gnss\n    cmp_sen --> msg_gnssacc_sen\n    msg_ult_sen --> cmp_per\n    msg_img_sen --> cmp_per\n    msg_pc_combined_rdr --> cmp_per\n    msg_pc_rdr --> cmp_per\n    msg_pc_ldr --> cmp_per\n    msg_pc_ldr --> cmp_loc\n    msg_pose_gnss --> cmp_loc\n    msg_gnssori_sen --> cmp_loc\n    msg_gnssvel_sen 
--> cmp_loc\n    msg_gnssacc_sen --> cmp_loc\nclassDef cls_drv fill:#F8CECC,stroke:#999,stroke-width:1px;\nclassDef cls_loc fill:#D5E8D4,stroke:#999,stroke-width:1px;\nclassDef cls_per fill:#FFF2CC,stroke:#999,stroke-width:1px;\nclassDef cls_sen fill:#FFE6CC,stroke:#999,stroke-width:1px;
    "},{"location":"design/autoware-interfaces/components/sensing/#inputs","title":"Inputs","text":"Name Topic Type Description Ultrasonics T.B.D sensor_msgs/Range Distance data from ultrasonic radar driver. Camera Image T.B.D sensor_msgs/Image Image data from camera driver. Radar Tracks T.B.D radar_msgs/RadarTracks Tracks from radar driver. Radar Scan T.B.D radar_msgs/RadarScan Scan from radar driver. Lidar Point Cloud /sensing/lidar/<lidar name>/pointcloud_raw sensor_msgs/PointCloud2 Pointcloud from lidar driver. GNSS-INS Position T.B.D geometry_msgs/NavSatFix Initial pose from GNSS driver. GNSS-INS Orientation T.B.D autoware_sensing_msgs/GnssInsOrientationStamped Initial orientation from GNSS driver. GNSS Velocity T.B.D geometry_msgs/TwistWithCovarianceStamped Initial velocity from GNSS driver. GNSS Acceleration T.B.D geometry_msgs/AccelWithCovarianceStamped Initial acceleration from GNSS driver."},{"location":"design/autoware-interfaces/components/sensing/#output","title":"Output","text":"Name Topic Type Description Ultrasonics T.B.D sensor_msgs/Range Distance data from ultrasonic radar. Used by the Perception. Camera Image T.B.D sensor_msgs/Image Image data from camera. Used by the Perception. Combined Radar Tracks T.B.D radar_msgs/RadarTracks.msg Radar tracks from radar. Used by the Perception. Radar Point Cloud T.B.D radar_msgs/RadarScan.msg Pointcloud from radar. Used by the Perception. Lidar Point Cloud /sensing/lidar/<lidar group>/pointcloud sensor_msgs/PointCloud2 Lidar pointcloud after preprocessing. Used by the Perception and Localization. <lidar group> is a unique name for identifying each LiDAR or the group name when multiple LiDARs are combined. Specifically, the concatenated point cloud of all LiDARs is assigned the <lidar group> name concatenated. GNSS-INS pose T.B.D geometry_msgs/PoseWithCovarianceStamped Initial pose of the ego vehicle from GNSS. Used by the Localization. GNSS-INS Orientation T.B.D sensor_msgs/Imu Orientation info from GNSS. Used by the Localization. GNSS Velocity T.B.D geometry_msgs/TwistWithCovarianceStamped Velocity of the ego vehicle from GNSS. Used by the Localization. GNSS Acceleration T.B.D geometry_msgs/AccelWithCovarianceStamped Acceleration of the ego vehicle from GNSS. Used by the Localization."},{"location":"design/autoware-interfaces/components/vehicle-dimensions/","title":"Vehicle dimensions","text":""},{"location":"design/autoware-interfaces/components/vehicle-dimensions/#vehicle-dimensions","title":"Vehicle dimensions","text":""},{"location":"design/autoware-interfaces/components/vehicle-dimensions/#vehicle-axes-and-base_link","title":"Vehicle axes and base_link","text":"

    The base_link frame is used very frequently throughout the Autoware stack, and is a projection of the rear-axle center onto the ground surface.

    • Localization module outputs the map to base_link transformation.
    • Planning module plans the poses for where the base_link frame should be in the future.
    • Control module tries to fit base_link to incoming poses.
    "},{"location":"design/autoware-interfaces/components/vehicle-dimensions/#vehicle-dimensions_1","title":"Vehicle dimensions","text":""},{"location":"design/autoware-interfaces/components/vehicle-dimensions/#wheelbase","title":"wheelbase","text":"

    The distance between front and rear axles.

    "},{"location":"design/autoware-interfaces/components/vehicle-dimensions/#track_width","title":"track_width","text":"

    The distance between left and right wheels.

    "},{"location":"design/autoware-interfaces/components/vehicle-dimensions/#overhangs","title":"Overhangs","text":"

    Overhangs are part of the minimum safety box calculation.

    When measuring overhangs, side mirrors, protruding sensors and wheels should be taken into consideration.

    "},{"location":"design/autoware-interfaces/components/vehicle-dimensions/#left_overhang","title":"left_overhang","text":"

    The distance between the axis centers of the left wheels and the left-most point of the vehicle.

    "},{"location":"design/autoware-interfaces/components/vehicle-dimensions/#right_overhang","title":"right_overhang","text":"

    The distance between the axis centers of the right wheels and the right-most point of the vehicle.

    "},{"location":"design/autoware-interfaces/components/vehicle-dimensions/#front_overhang","title":"front_overhang","text":"

    The distance between the front axle and the foremost point of the vehicle.

    "},{"location":"design/autoware-interfaces/components/vehicle-dimensions/#rear_overhang","title":"rear_overhang","text":"

    The distance between the rear axle and the rear-most point of the vehicle.

    "},{"location":"design/autoware-interfaces/components/vehicle-dimensions/#vehicle_length","title":"vehicle_length","text":"

    Total length of the vehicle. Calculated by front_overhang + wheelbase + rear_overhang

    "},{"location":"design/autoware-interfaces/components/vehicle-dimensions/#vehicle_width","title":"vehicle_width","text":"

    Total width of the vehicle. Calculated by left_overhang + track_width + right_overhang
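As a trivial worked example of the two formulas above (all numbers are illustrative only):

```python
# Illustrative numbers only; use your vehicle's measured values.
wheelbase = 2.75        # [m] distance between front and rear axles
track_width = 1.60      # [m] distance between left and right wheels
front_overhang = 0.80   # [m]
rear_overhang = 1.00    # [m]
left_overhang = 0.13    # [m]
right_overhang = 0.13   # [m]

vehicle_length = front_overhang + wheelbase + rear_overhang   # 4.55 m
vehicle_width = left_overhang + track_width + right_overhang  # 1.86 m
print(vehicle_length, vehicle_width)
```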

    "},{"location":"design/autoware-interfaces/components/vehicle-dimensions/#wheel-parameters","title":"Wheel parameters","text":""},{"location":"design/autoware-interfaces/components/vehicle-dimensions/#wheel_width","title":"wheel_width","text":"

    The lateral width of a wheel tire, primarily used for dead reckoning.

    "},{"location":"design/autoware-interfaces/components/vehicle-dimensions/#wheel_radius","title":"wheel_radius","text":"

    The radius of the wheel, primarily used for dead reckoning.

    "},{"location":"design/autoware-interfaces/components/vehicle-dimensions/#polygon_footprint","title":"polygon_footprint","text":"

    The polygon defines the minimum collision area for the vehicle.

    The points should be ordered clockwise, with the origin on the base_link.

    "},{"location":"design/autoware-interfaces/components/vehicle-dimensions/#wheel-orientations","title":"Wheel orientations","text":"

    If the vehicle is going forward, a positive wheel angle will result in the vehicle turning left.

Autoware assumes the rear wheels do not turn about the z-axis.

    "},{"location":"design/autoware-interfaces/components/vehicle-dimensions/#notice","title":"Notice","text":"

    The vehicle used in the illustrations was created by xvlblo22 and is from https://www.turbosquid.com/3d-models/modular-sedan-3d-model-1590886.

    "},{"location":"design/autoware-interfaces/components/vehicle-interface/","title":"Vehicle Interface","text":""},{"location":"design/autoware-interfaces/components/vehicle-interface/#vehicle-interface","title":"Vehicle Interface","text":"

    This page describes the Vehicle Interface Component. Please refer to the Vehicle Interface design document for high-level concepts and data flow.

    The Vehicle Interface Component receives Vehicle commands and publishes Vehicle statuses. It communicates with the vehicle by the vehicle-specific protocol.

The optional vehicle command adapter converts the generalized control command (target steering angle, steering rate, velocity, acceleration) into vehicle-specific control values (steering torque, wheel torque, voltage, pressure, accelerator pedal position, etc.).

    "},{"location":"design/autoware-interfaces/components/vehicle-interface/#communication-with-the-vehicle","title":"Communication with the vehicle","text":"

The interface used to communicate with the vehicle varies between brands and models. For example, a vehicle-specific message protocol such as CAN (Controller Area Network) is typically bridged to ROS 2 by a driver (e.g., pacmod). In addition, an Autoware-specific interface is often necessary (e.g., pacmod_interface).

    "},{"location":"design/autoware-interfaces/components/vehicle-interface/#vehicle-adapter","title":"Vehicle adapter","text":"

Autoware's basic control command expresses the target motion of the vehicle in terms of speed, acceleration, steering angle, and steering rate. This may not be suitable for all vehicles, so we distinguish between two types of vehicles.

    • Type 1: vehicle that is directly controlled by a subset of speed, acceleration, steering angle, and steering rate.
    • Type 2: vehicle that uses custom commands (motor torque, voltage, pedal pressure, etc).

For vehicles of type 2, a vehicle adapter is necessary to convert the Autoware control command into the vehicle-specific commands. For example, see the raw_vehicle_cmd_converter, which converts the target speed and steering angle into mechanical acceleration, steering, and brake inputs.
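The following is a hypothetical sketch of the idea behind such an adapter: mapping a target acceleration to an accelerator or brake actuation through a calibration table. It is not the raw_vehicle_cmd_converter implementation, which uses measured, velocity-dependent 2-D calibration maps; the table values here are made up for illustration.

```python
# Hypothetical Type-2 adapter sketch: target acceleration -> pedal actuation.
# The calibration points below are made up; real adapters use measured,
# velocity-dependent maps (see raw_vehicle_cmd_converter for a full implementation).
import numpy as np

ACCEL_POINTS = np.array([0.0, 0.5, 1.0, 2.0])       # [m/s^2] achievable with throttle
THROTTLE_POINTS = np.array([0.0, 0.15, 0.3, 0.6])   # normalized pedal position

DECEL_POINTS = np.array([0.0, 1.0, 2.0, 4.0])       # [m/s^2] braking magnitude
BRAKE_POINTS = np.array([0.0, 0.2, 0.4, 0.8])       # normalized pedal position


def to_pedal_commands(target_acceleration: float) -> tuple[float, float]:
    """Return (throttle, brake) for a target acceleration [m/s^2]."""
    if target_acceleration >= 0.0:
        throttle = float(np.interp(target_acceleration, ACCEL_POINTS, THROTTLE_POINTS))
        return throttle, 0.0
    brake = float(np.interp(-target_acceleration, DECEL_POINTS, BRAKE_POINTS))
    return 0.0, brake


print(to_pedal_commands(0.75))   # -> (0.225, 0.0)
print(to_pedal_commands(-1.5))   # -> (0.0, 0.3)
```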

    "},{"location":"design/autoware-interfaces/components/vehicle-interface/#inputs-from-autoware","title":"Inputs from Autoware","text":"Name Topic Type Description Control command /control/command/control_cmd autoware_control_msgs/msg/Control Target controls of the vehicle (steering angle, velocity, ...) Control mode command /control/control_mode_request autoware_vehicle_msgs/srv/ControlModeCommand Request to switch between manual and autonomous driving Gear command /control/command/gear_cmd autoware_vehicle_msgs/msg/GearCommand Target gear of the vehicle Hazard lights command /control/command/hazard_lights_cmd autoware_vehicle_msgs/msg/HazardLightsCommand Target values of the hazard lights Turn indicator command /control/command/turn_indicators_cmd autoware_vehicle_msgs/msg/TurnIndicatorsCommand Target values of the turn signals"},{"location":"design/autoware-interfaces/components/vehicle-interface/#outputs-to-autoware","title":"Outputs to Autoware","text":"Name Topic Type Optional ? Description Actuation status /vehicle/status/actuation_status tier4_vehicle_msgs/msg/ActuationStatusStamped Yes (vehicle with mechanical inputs) Current acceleration, brake, and steer values reported by the vehicle Control mode /vehicle/status/control_mode autoware_vehicle_msgs/msg/ControlModeReport Current control mode (manual, autonomous, ...) Door status /vehicle/status/door_status tier4_api_msgs/msg/DoorStatus Yes Current door status Gear report /vehicle/status/gear_status autoware_vehicle_msgs/msg/GearReport Current gear of the vehicle Hazard light status /vehicle/status/hazard_lights_status autoware_vehicle_msgs/msg/HazardLightsReport Current hazard lights status Steering status /vehicle/status/steering_status autoware_vehicle_msgs/msg/SteeringReport Current steering angle of the steering tire Turn indicators status /vehicle/status/turn_indicators_status autoware_vehicle_msgs/msg/TurnIndicatorsReport Current state of the left and right turn indicators Velocity status /vehicle/status/velocity_status autoware_vehicle_msgs/msg/VelocityReport Current velocities of the vehicle (longitudinal, lateral, heading rate)"},{"location":"design/configuration-management/","title":"Configuration management","text":""},{"location":"design/configuration-management/#configuration-management","title":"Configuration management","text":"

    Warning

    Under Construction

    "},{"location":"design/configuration-management/development-process/","title":"Development process","text":""},{"location":"design/configuration-management/development-process/#development-process","title":"Development process","text":"

    Warning

    Under Construction

    "},{"location":"design/configuration-management/release-process/","title":"Release process","text":""},{"location":"design/configuration-management/release-process/#release-process","title":"Release process","text":"

    Warning

    Under Construction

    "},{"location":"design/configuration-management/repository-structure/","title":"Repository structure","text":""},{"location":"design/configuration-management/repository-structure/#repository-structure","title":"Repository structure","text":"

    Warning

    Under Construction

    "},{"location":"how-to-guides/","title":"How-to guides","text":""},{"location":"how-to-guides/#how-to-guides","title":"How-to guides","text":""},{"location":"how-to-guides/#integrating-autoware","title":"Integrating Autoware","text":"
    • Overview
    "},{"location":"how-to-guides/#training-machine-learning-models","title":"Training Machine Learning Models","text":"
    • Training and Deploying Models
    "},{"location":"how-to-guides/#others","title":"Others","text":"
    • Debug Autoware
    • Running Autoware without CUDA
    • Fixing dependent package versions
    • Add a custom ROS message
    • Determining component dependencies
    • Advanced usage of colcon
    • Applying Clang-Tidy to ROS packages
    • Defining temporal performance metrics on components
    • An example procedure for adding and evaluating a new node
    • Lane Detection Methods

    TODO: Write the following contents.

    • Create an Autoware package
    • etc.
    "},{"location":"how-to-guides/integrating-autoware/overview/","title":"Overview","text":""},{"location":"how-to-guides/integrating-autoware/overview/#overview","title":"Overview","text":""},{"location":"how-to-guides/integrating-autoware/overview/#requirement-prepare-your-real-vehicle-hardware","title":"Requirement: prepare your real vehicle hardware","text":"

    Prerequisites for the vehicle:

    • An onboard computer that satisfies the Autoware installation prerequisites
    • The following devices attached
      • Drive-by-wire interface
      • LiDAR
      • Optional: Inertial measurement unit
      • Optional: Camera
      • Optional: GNSS
    "},{"location":"how-to-guides/integrating-autoware/overview/#1-creating-your-autoware-meta-repository","title":"1. Creating your Autoware meta-repository","text":"

    Create your Autoware meta-repository. One easy way is to fork autowarefoundation/autoware and clone it. For how to fork a repository, refer to GitHub Docs.

    git clone https://github.com/YOUR_NAME/autoware.git\n

    If you set up multiple types of vehicles, adding a suffix like \"autoware.vehicle_A\" or \"autoware.vehicle_B\" is recommended.

    "},{"location":"how-to-guides/integrating-autoware/overview/#2-creating-the-your-vehicle-and-sensor-description","title":"2. Creating the your vehicle and sensor description","text":"

    Next, you need to create description packages that define the vehicle and sensor configuration of your vehicle.

    Create the following two packages:

    • YOUR_VEHICLE_launch (see here for example)
    • YOUR_SENSOR_KIT_launch (see here for example)

    Once created, you need to update the autoware.repos file of your cloned Autoware repository to refer to these two description packages.

    -  # sensor_kit\n-  sensor_kit/sample_sensor_kit_launch:\n-    type: git\n-    url: https://github.com/autowarefoundation/sample_sensor_kit_launch.git\n-    version: main\n-  # vehicle\n-  vehicle/sample_vehicle_launch:\n-    type: git\n-    url: https://github.com/autowarefoundation/sample_vehicle_launch.git\n-    version: main\n+  # sensor_kit\n+  sensor_kit/YOUR_SENSOR_KIT_launch:\n+    type: git\n+    url: https://github.com/YOUR_NAME/YOUR_SENSOR_KIT_launch.git\n+    version: main\n+  # vehicle\n+  vehicle/YOUR_VEHICLE_launch:\n+    type: git\n+    url: https://github.com/YOUR_NAME/YOUR_VEHICLE_launch.git\n+    version: main\n
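
    After editing autoware.repos, the new description repositories still need to be pulled into your workspace and built. A minimal sketch of the usual workflow, assuming the standard src/ layout of the Autoware meta-repository:

    cd autoware\n# import (or update) the repositories listed in autoware.repos into src/\nmkdir -p src\nvcs import src < autoware.repos\n# rebuild so the new description and launch packages are available\ncolcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release\n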
    "},{"location":"how-to-guides/integrating-autoware/overview/#adapt-your_vehicle_launch-for-autoware-launching-system","title":"Adapt YOUR_VEHICLE_launch for autoware launching system","text":""},{"location":"how-to-guides/integrating-autoware/overview/#at-your_vehicle_description","title":"At YOUR_VEHICLE_description","text":"

    Define URDF and parameters in the vehicle description package (refer to the sample vehicle description package for an example).

    "},{"location":"how-to-guides/integrating-autoware/overview/#at-your_vehicle_launch","title":"At YOUR_VEHICLE_launch","text":"

    Create a launch file (refer to the sample vehicle launch package for example). If you have multiple vehicles with the same hardware setup, you can specify vehicle_id to distinguish them.

    "},{"location":"how-to-guides/integrating-autoware/overview/#adapt-your_sensor_kit_launch-for-autoware-launching-system","title":"Adapt YOUR_SENSOR_KIT_launch for autoware launching system","text":""},{"location":"how-to-guides/integrating-autoware/overview/#at-your_sensor_kit_description","title":"At YOUR_SENSOR_KIT_description","text":"

    Define URDF and extrinsic parameters for all the sensors here (refer to the sample sensor kit description package for example). Note that you need to calibrate extrinsic parameters for all the sensors beforehand.

    "},{"location":"how-to-guides/integrating-autoware/overview/#at-your_sensor_kit_launch","title":"At YOUR_SENSOR_KIT_launch","text":"

    Create launch/sensing.launch.xml that launches the interfaces of all the sensors on the vehicle. (refer to the sample sensor kit launch package for example).

    Note

    At this point, you are now able to run Autoware's Planning Simulator to do a basic test of your vehicle and sensing packages. To do so, you need to build and install Autoware using your cloned repository. Follow the steps for either Docker or source installation (starting from the dependency installation step) and then run the following command:

    ros2 launch autoware_launch planning_simulator.launch.xml vehicle_model:=YOUR_VEHICLE sensor_kit:=YOUR_SENSOR_KIT map_path:=/PATH/TO/YOUR/MAP\n
    "},{"location":"how-to-guides/integrating-autoware/overview/#3-create-a-vehicle_interface-package","title":"3. Create a vehicle_interface package","text":"

    You need to create an interface package for your vehicle. The package is expected to provide the following two functions.

    1. Receive command messages from vehicle_cmd_gate and drive the vehicle accordingly
    2. Send vehicle status information to Autoware

    You can find detailed information about the requirements of the vehicle_interface package in the Vehicle Interface design documentation. You can also refer to TIER IV's pacmod_interface repository as an example of a vehicle interface package.
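
    Once a first version of your vehicle_interface package is running, you can verify both directions of the interface from the command line. A minimal sketch, assuming the default topic names listed in the Vehicle Interface design documentation:

    # 1) commands that your interface must consume\nros2 topic hz /control/command/control_cmd\n# 2) status that your interface must publish back to Autoware\nros2 topic echo /vehicle/status/velocity_status --once\nros2 topic echo /vehicle/status/steering_status --once\n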

    "},{"location":"how-to-guides/integrating-autoware/overview/#4-create-maps","title":"4. Create maps","text":"

    You need both a pointcloud map and a vector map in order to use Autoware. For more information on map design, please click here.

    "},{"location":"how-to-guides/integrating-autoware/overview/#create-a-pointcloud-map","title":"Create a pointcloud map","text":"

    Use third-party tools such as a LiDAR-based SLAM (Simultaneous Localization And Mapping) package to create a pointcloud map in the .pcd format. For more information, please click here.

    "},{"location":"how-to-guides/integrating-autoware/overview/#create-vector-map","title":"Create vector map","text":"

    Use third-party tools such as TIER IV's Vector Map Builder to create a Lanelet2 format .osm file.

    "},{"location":"how-to-guides/integrating-autoware/overview/#5-launch-autoware","title":"5. Launch Autoware","text":"

    This section briefly explains how to run your vehicle with Autoware.

    "},{"location":"how-to-guides/integrating-autoware/overview/#install-autoware","title":"Install Autoware","text":"

    Follow the installation steps of Autoware.

    "},{"location":"how-to-guides/integrating-autoware/overview/#launch-autoware","title":"Launch Autoware","text":"

    Launch Autoware with the following command:

    ros2 launch autoware_launch autoware.launch.xml vehicle_model:=YOUR_VEHICLE sensor_kit:=YOUR_SENSOR_KIT map_path:=/PATH/TO/YOUR/MAP\n
    "},{"location":"how-to-guides/integrating-autoware/overview/#set-initial-pose","title":"Set initial pose","text":"

    If GNSS is available, Autoware automatically initializes the vehicle's pose.

    If not, you need to set the initial pose using the RViz GUI.

    1. Click the 2D Pose estimate button in the toolbar, or hit the P key
    2. In the 3D View pane, click and hold the left mouse button, and then drag to set the direction for the initial pose.
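
    If you prefer to script the initialization instead of clicking in RViz, you can publish the same message that the RViz tool sends. A minimal sketch, assuming the default /initialpose topic used by RViz; the pose values are placeholders and must match a drivable position on your map:

    ros2 topic pub --once /initialpose geometry_msgs/msg/PoseWithCovarianceStamped \"{header: {frame_id: map}, pose: {pose: {position: {x: 0.0, y: 0.0, z: 0.0}, orientation: {w: 1.0}}}}\"\n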
    "},{"location":"how-to-guides/integrating-autoware/overview/#set-goal-pose","title":"Set goal pose","text":"

    Set a goal pose for the ego vehicle.

    1. Click the 2D Nav Goal button in the toolbar, or hit the G key
    2. In the 3D View pane, click and hold the left mouse button, and then drag to set the direction for the goal pose. If successful, you will see the calculated planning path on RViz.
    "},{"location":"how-to-guides/integrating-autoware/overview/#engage","title":"Engage","text":"

    In your terminal, execute the following command.

    source ~/autoware.YOURS/install/setup.bash\nros2 topic pub /autoware/engage autoware_vehicle_msgs/msg/Engage \"engage: true\" -1\n

    You can also engage via RViz with \"AutowareStatePanel\". The panel can be found in Panels > Add New Panel > tier4_state_rviz_plugin > AutowareStatePanel.

    Now the vehicle should drive along the calculated path!

    "},{"location":"how-to-guides/integrating-autoware/overview/#6-tune-parameters-for-your-vehicle-environment","title":"6. Tune parameters for your vehicle & environment","text":"

    You may need to tune your parameters depending on the domain in which you will operate your vehicle.

    The maximum velocity is defined here and is 15 km/h by default.
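
    If you want to confirm which velocity limit is actually in effect at runtime, you can query the parameters of the running nodes. A rough sketch, assuming a running Autoware stack; node and parameter names vary between Autoware versions, so treat the grep pattern only as a starting point:

    # list all declared parameters and search for velocity limits (this can take a while)\nros2 param list | grep -i \"max_vel\"\n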

    "},{"location":"how-to-guides/integrating-autoware/awsim-integration/","title":"How to integrate your vehicle in AWSIM environment","text":""},{"location":"how-to-guides/integrating-autoware/awsim-integration/#how-to-integrate-your-vehicle-in-awsim-environment","title":"How to integrate your vehicle in AWSIM environment","text":""},{"location":"how-to-guides/integrating-autoware/awsim-integration/#overview","title":"Overview","text":"

    AWSIM is an open-source simulator designed by TIER IV for training and evaluating autonomous driving systems. It provides a realistic virtual environment for simulating various real-world scenarios, enabling users to test and refine their autonomous systems before deployment on actual vehicles.

    "},{"location":"how-to-guides/integrating-autoware/awsim-integration/#setup-unity-project","title":"Setup Unity Project","text":"

    To add your environment and vehicle to the AWSIM simulation, you need to set up a Unity project on your computer. Please follow the steps on the Setup Unity Project documentation page.

    AWSIM Unity Setup"},{"location":"how-to-guides/integrating-autoware/awsim-integration/#new-vehicle-integration","title":"New Vehicle Integration","text":"

    To incorporate your vehicle into the AWSIM environment, you'll need a 3D model file (.dae, .fbx) of your vehicle. Please refer to the steps on the Add New Vehicle documentation page to add your own vehicle to the AWSIM project environment. During these steps, you'll configure your sensor URDF design on your vehicle. Our tutorial vehicle is shown in the AWSIM environment in the following image.

    Tutorial vehicle in AWSIM Unity Environment"},{"location":"how-to-guides/integrating-autoware/awsim-integration/#environment-integration","title":"Environment Integration","text":"

    Creating custom 3D environments for AWSIM is feasible, but it's recommended to adhere to the .fbx file format. Materials and textures should be stored in separate directories for seamless integration with Unity. This format facilitates material importation and replacement during import. Please refer to the steps on the Add Environment documentation page to add your custom environment to the AWSIM project environment.

    Tutorial vehicle AWSIM Unity Environment"},{"location":"how-to-guides/integrating-autoware/awsim-integration/#others","title":"Others","text":"

    Additionally, you can incorporate traffic and NPCs, generate point cloud maps using lanelet2 maps, and perform other tasks by following the relevant documentation steps provided in the AWSIM documentation.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/","title":"Creating maps","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/#creating-maps","title":"Creating maps","text":"

    Autoware requires a pointcloud map and a vector map for the vehicle's operating environment. (Check the map design documentation page for the detailed specification).

    This page explains how users can create maps that can be used for Autoware.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/#creating-a-point-cloud-map","title":"Creating a point cloud map","text":"

    Traditionally, a Mobile Mapping System (MMS) is used in order to create highly accurate large-scale point cloud maps. However, since an MMS requires high-end sensors for precise positioning, its operational cost can be very expensive and may not be suitable for a relatively small driving environment. Alternatively, a Simultaneous Localization And Mapping (SLAM) algorithm can be used to create a point cloud map from recorded LiDAR scans. Some useful open-source SLAM implementations are listed on this page.

    If you prefer proprietary software that is easy to use, you can try a fully automatic mapping tool from MAP IV, Inc., MapIV Engine. They currently provide a trial license for Autoware users free of charge.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/#creating-a-vector-map","title":"Creating a vector map","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/#quick-way-to-create-simple-maps","title":"Quick way to create simple maps","text":"

    bag2lanelet is a tool to create virtual lanes from self-location data. Whether in a real environment or a simulation, you can use this tool to generate simple lanelets from a rosbag with self-location information, allowing you to quickly test Autoware's performance. A key use case for this tool is to emulate autonomous driving on routes that were initially driven manually.

    However, it is important to note that this tool has very limited functionalities and can only generate single-lane maps. To enable more comprehensive mapping and navigation capabilities, we recommend using the 'Vector Map Builder' described in the next section.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/#tools-for-a-vector-map-creation","title":"Tools for a vector map creation","text":"

    The recommended way to create an Autoware-compatible vector map is to use Vector Map Builder, a free web-based tool provided by TIER IV, Inc. Vector Map Builder allows you to create lanes and add additional regulatory elements such as stop signs or traffic lights using a point cloud map as a reference.

    For open-source software options, MapToolbox is a plugin for Unity specifically designed to create Lanelet2 maps for Autoware. Although JOSM is another open-source tool that can be used to create Lanelet2 maps, be aware that a number of modifications must be done manually to make the map compatible with Autoware. This process can be tedious and time-consuming, so the use of JOSM is not recommended.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/#autoware-compatible-map-providers","title":"Autoware-compatible map providers","text":"

    If it is not possible to create HD maps yourself, you can use a mapping service from the following Autoware-compatible map providers instead:

    • MAP IV, Inc.
    • AISAN TECHNOLOGY CO., LTD.
    • TomTom

    The table below shows each company's mapping technology and the types of HD maps they support.

    Company | Mapping technology | Available maps
    MAP IV, Inc. | SLAM | Point cloud and vector maps
    AISAN TECHNOLOGY CO., LTD. | MMS | Point cloud and vector maps
    TomTom | MMS | Vector map*

    Note

    Maps provided by TomTom use their proprietary AutoStream format, not Lanelet2. The open-source AutoStreamForAutoware tool can be used to convert an AutoStream map to a Lanelet2 map. However, the converter is still in its early stages and has some known limitations.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/converting-utm-to-mgrs-map/","title":"Converting UTM maps to MGRS map format","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/converting-utm-to-mgrs-map/#converting-utm-maps-to-mgrs-map-format","title":"Converting UTM maps to MGRS map format","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/converting-utm-to-mgrs-map/#overview","title":"Overview","text":"

    If you want to use the MGRS (Military Grid Reference System) format in Autoware, you need to convert your UTM (Universal Transverse Mercator) map to MGRS format. To do that, we will use the UTM to MGRS pointcloud converter ROS 2 package provided by Leo Drive.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/converting-utm-to-mgrs-map/#installation","title":"Installation","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/converting-utm-to-mgrs-map/#dependencies","title":"Dependencies","text":"
    • ROS 2
    • PCL-conversions
    • GeographicLib

    To install dependencies:

    sudo apt install ros-humble-pcl-conversions \\\ngeographiclib-tools\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/converting-utm-to-mgrs-map/#building","title":"Building","text":"
        cd <PATH-TO-YOUR-ROS-2-WORKSPACE>/src\n    git clone https://github.com/leo-drive/pc_utm_to_mgrs_converter.git\n    cd ..\n    colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/converting-utm-to-mgrs-map/#usage","title":"Usage","text":"

    After installing the converter tool, we need to define the northing, easting, and ellipsoid height of the local UTM map origin in pc_utm_to_mgrs_converter.param.yaml. For example, you can use the latitude, longitude, and altitude values from the navsatfix message of your GNSS/INS sensor.
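
    One way to capture such a message is to echo your GNSS/INS driver's NavSatFix topic while the vehicle is parked at the intended map origin. The topic name is only a placeholder and depends on your driver:

    ros2 topic echo <YOUR-NAVSATFIX-TOPIC> --once\n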

    Sample ROS 2 topic echo from navsatfix message
    header:\nstamp:\nsec: 1694612439\nnanosec: 400000000\nframe_id: GNSS_INS/gnss_ins_link\nstatus:\nstatus: 0\nservice: 1\nlatitude: 41.0216110801253\nlongitude: 28.887096461148346\naltitude: 74.28264078891529\nposition_covariance:\n- 0.0014575386885553598\n- 0.0\n- 0.0\n- 0.0\n- 0.004014162812381983\n- 0.0\n- 0.0\n- 0.0\n- 0.0039727711118757725\nposition_covariance_type: 2\n

    After that, you need to convert the latitude and longitude values to northing and easting values. You can use any converter on the internet for converting latitude/longitude values to UTM (e.g., UTMconverter).
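
    As an alternative to a web converter, the geographiclib-tools package installed above also provides the GeoConvert command-line utility, which can do the same conversion locally. A small sketch using the latitude and longitude from the sample message above:

    # print the UTM zone, easting, and northing for the given latitude/longitude\necho \"41.0216110801253 28.887096461148346\" | GeoConvert -u\n# print the corresponding MGRS string for the same point\necho \"41.0216110801253 28.887096461148346\" | GeoConvert -m\n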

    Now, we are ready to update pc_utm_to_mgrs_converter.param.yaml, example for our navsatfix message:

    /**:\n  ros__parameters:\n      # Northing of local origin\n-     Northing: 4520550.0\n+     Northing: 4542871.33\n\n     # Easting of local origin\n-     Easting: 698891.0\n+     Easting: 658659.84\n\n     # Elipsoid Height of local origin\n-     ElipsoidHeight: 47.62\n+     ElipsoidHeight: 74.28\n

    Lastly, we will update the input and output pointcloud map paths in pc_utm_to_mgrs_converter.launch.xml:

    ...\n- <arg name=\"input_file_path\" default=\"/home/melike/projects/autoware_data/gebze_pospac_map/pointcloud_map.pcd\"/>\n+ <arg name=\"input_file_path\" default=\"<PATH-TO-YOUR-INPUT-PCD-MAP>\"/>\n- <arg name=\"output_file_path\" default=\"/home/melike/projects/autoware_data/gebze_pospac_map/pointcloud_map_mgrs_orto.pcd\"/>\n+ <arg name=\"output_file_path\" default=\"<PATH-TO-YOUR-OUTPUT-PCD-MAP>\"/>\n...\n

    After the setting of the package, we will launch pc_utm_to_mgrs_converter:

    ros2 launch pc_utm_to_mgrs_converter pc_utm_to_mgrs_converter.launch.xml\n

    The conversion process will start, and you should see a Saved <YOUR-MAP-POINTS-SIZE> data points saved to <YOUR-OUTPUT-MAP-PATH> message in your terminal. The MGRS-format pointcloud map will be saved in your output map directory.
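
    To quickly sanity-check the converted map before using it with Autoware, you can open it with the PCL viewer (assuming the pcl-tools package is installed):

    sudo apt install pcl-tools\npcl_viewer <PATH-TO-YOUR-OUTPUT-PCD-MAP>\n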

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/","title":"Creating a vector map","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/#creating-a-vector-map","title":"Creating a vector map","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/#overview","title":"Overview","text":"

    In this section, we will explain how to create Lanelet2 maps with TIER IV's Vector Map Builder tool.

    There are alternative tools such as the Unity-based app MapToolbox and the Java-based app JOSM that you may use for creating a Lanelet2 map. We will be using TIER IV's Vector Map Builder in this tutorial since it runs in a browser without requiring the installation of extra applications.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/#vector-map-builder","title":"Vector Map Builder","text":"

    You need a TIER IV account to use the Vector Map Builder tool. If this is your first time using the tool, create a TIER IV account first. For more information about this tool, please check the official guide.

    You can follow these pages for creating a Lanelet2 map and understanding its regulatory elements.

    • Lanelet2
    • Crosswalk
    • Stop Line
    • Traffic Light
    • Speed Bump
    • Detection Area
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/crosswalk/","title":"Crosswalk attribute","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/crosswalk/#crosswalk-attribute","title":"Crosswalk attribute","text":"

    Behavior velocity planner's crosswalk module plans velocity to stop or decelerate for pedestrians approaching or walking on a crosswalk. In order to use it, we will add a crosswalk attribute to our lanelet2 map.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/crosswalk/#creating-a-crosswalk-attribute","title":"Creating a crosswalk attribute","text":"

    In order to create a crosswalk on your map, please follow these steps:

    1. Click Abstraction button on top panel.
    2. Select Crosswalk from the panel.
    3. Click and draw crosswalk on your pointcloud map.

    You can see these steps in the crosswalk creating demonstration video:

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/crosswalk/#testing-created-crosswalk-with-planning-simulator","title":"Testing created crosswalk with planning simulator","text":"

    After completing the map, we need to save it. To do that, please click File --> Export Lanelet2Maps and download the file.

    After the download is finished, we need to put the lanelet2 map and pointcloud map in the same location. The directory structure should be like this:

    + <YOUR-MAP-DIRECTORY>/\n+  \u251c\u2500 pointcloud_map.pcd\n+  \u2514\u2500 lanelet2_map.osm\n

    If your .osm or .pcd map file's name is different from these names, you need to update autoware.launch.xml:

      <!-- Map -->\n-  <arg name=\"lanelet2_map_file\" default=\"lanelet2_map.osm\" description=\"lanelet2 map file name\"/>\n+  <arg name=\"lanelet2_map_file\" default=\"<YOUR-LANELET-MAP-NAME>.osm\" description=\"lanelet2 map file name\"/>\n-  <arg name=\"pointcloud_map_file\" default=\"pointcloud_map.pcd\" description=\"pointcloud map file name\"/>\n+  <arg name=\"pointcloud_map_file\" default=\"<YOUR-POINTCLOUD-MAP-NAME>.pcd\" description=\"pointcloud map file name\"/>\n

    Now we are ready to launch the planning simulator:

    ros2 launch autoware_launch planning_simulator.launch.xml map_path:=<YOUR-MAP-FOLDER-DIR> vehicle_model:=<YOUR-VEHICLE-MODEL> sensor_model:=<YOUR-SENSOR-KIT>\n

    Example for tutorial_vehicle:

    ros2 launch autoware_launch planning_simulator.launch.xml map_path:=$HOME/Files/autoware_map/tutorial_map/ vehicle_model:=tutorial_vehicle sensor_model:=tutorial_vehicle_sensor_kit vehicle_id:=tutorial_vehicle\n
    1. Click 2D Pose Estimate button on rviz or press P and give a pose for initialization.
    2. Click 2D Goal Pose button on rviz or press G and give a pose for goal point.
    3. We need to add pedestrians to the crosswalk, so activate interactive pedestrians from the Tool Properties panel on rviz.
    4. After that, hold Shift and right-click to insert a pedestrian.
    5. You can move the inserted pedestrian by dragging it with the right mouse button.

    Crosswalk markers on rviz:

    Crosswalk test on the created map.

    You can check your crosswalk elements in the planning simulator as shown in this demonstration video:

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/detection-area/","title":"Detection area element","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/detection-area/#detection-area-element","title":"Detection area element","text":"

    Behavior velocity planner's detection area module plans velocity so that, if a pointcloud is detected inside a detection area defined on the map, the vehicle stops at a predetermined point. In order to use it, we will add a detection area element to our lanelet2 map.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/detection-area/#creating-a-detection-area-element","title":"Creating a detection area element","text":"

    In order to create a detection area on your map, please follow these steps:

    1. Click Lanelet2Maps button on top panel.
    2. Select Detection Area from the panel.
    3. Please select the lanelet to which the stop line will be added.
    4. Click and insert the Detection Area on your pointcloud map.
    5. You can change the dimensions of the detection area by clicking the points on its corners. For more information, you can check the demonstration video.

    You can see these steps in the detection area creating demonstration video:

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/detection-area/#testing-created-detection-area-with-planning-simulator","title":"Testing created detection area with planning simulator","text":"

    After completing the map, we need to save it. To do that, please click File --> Export Lanelet2Maps and download the file.

    After the download is finished, we need to put the lanelet2 map and pointcloud map in the same location. The directory structure should be like this:

    + <YOUR-MAP-DIRECTORY>/\n+  \u251c\u2500 pointcloud_map.pcd\n+  \u2514\u2500 lanelet2_map.osm\n

    If your .osm or .pcd map file's name is different from these names, you need to update autoware.launch.xml:

      <!-- Map -->\n-  <arg name=\"lanelet2_map_file\" default=\"lanelet2_map.osm\" description=\"lanelet2 map file name\"/>\n+  <arg name=\"lanelet2_map_file\" default=\"<YOUR-LANELET-MAP-NAME>.osm\" description=\"lanelet2 map file name\"/>\n-  <arg name=\"pointcloud_map_file\" default=\"pointcloud_map.pcd\" description=\"pointcloud map file name\"/>\n+  <arg name=\"pointcloud_map_file\" default=\"<YOUR-POINTCLOUD-MAP-NAME>.pcd\" description=\"pointcloud map file name\"/>\n

    Now we are ready to launch the planning simulator:

    ros2 launch autoware_launch planning_simulator.launch.xml map_path:=<YOUR-MAP-FOLDER-DIR> vehicle_model:=<YOUR-VEHICLE-MODEL> sensor_model:=<YOUR-SENSOR-KIT>\n

    Example for tutorial_vehicle:

    ros2 launch autoware_launch planning_simulator.launch.xml map_path:=$HOME/Files/autoware_map/tutorial_map/ vehicle_model:=tutorial_vehicle sensor_model:=tutorial_vehicle_sensor_kit vehicle_id:=tutorial_vehicle\n
    1. Click 2D Pose Estimate button on rviz or press P and give a pose for initialization.
    2. Click 2D Goal Pose button on rviz or press G and give a pose for goal point.
    3. We need to add pedestrians to the detection area, so activate interactive pedestrians from the Tool Properties panel on rviz.
    4. After that, hold Shift and right-click to insert a pedestrian.
    5. You can move the inserted pedestrian by dragging it with the right mouse button, so place a pedestrian inside the detection area for testing.

    Stop detection area on rviz:

    Detection area test on the created map.

    You can check your detection area elements in the planning simulator as shown in this demonstration video:

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/lanelet2/","title":"Creating a Lanelet","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/lanelet2/#creating-a-lanelet","title":"Creating a Lanelet","text":"

    On this page, we will explain how to create a simple lanelet on your point cloud map. If you don't have a point cloud map yet, please check and follow the steps on the LIO-SAM mapping page to learn how to create one for Autoware.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/lanelet2/#creating-a-lanelet2","title":"Creating a Lanelet2","text":"

    Firstly, we need to import our pointcloud map to Vector Map Builder tool:

    1. Please click File.
    2. Then, click Import PCD.
    3. Click Browse and select your .pcd file.

    The point cloud will be displayed in the Vector Map Builder tool after the upload is complete:

    Uploaded pointcloud map file on Vector Map Builder

    Now, we are ready to create lanelet2 map on our pointcloud map:

    1. Please click Create.
    2. Then, click Create Lanelet2Maps.
    3. Please fill in your map name.
    4. Please fill in your MGRS zone. (For tutorial_vehicle, the MGRS grid zone is 35T and the MGRS 100,000-meter square is PF.)
    5. Click Create.
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/lanelet2/#creating-a-simple-lanelet","title":"Creating a simple lanelet","text":"

    In order to create a simple lanelet on your map, please follow these steps:

    1. Click Lanelet2Maps on the bar.
    2. Enable Lanelet mode by selecting Lanelet.
    3. Then, you can click on the pointcloud map to create the lanelet.
    4. When your lanelet is finished, you can disable Lanelet.
    5. If you want to change your lanelet width, click the lanelet --> Change Lanelet Width, then enter the lanelet width.

    Video Demonstration:

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/lanelet2/#join-two-lanelets","title":"Join two lanelets","text":"

    In order to join two lanelets, please follow these steps:

    1. Please create two distinct lanelets.
    2. Select a lanelet, then press Shift and select the other lanelet.
    3. Now you can see the Join Lanelets button; just press it.
    4. The lanelets will be joined.

    Video Demonstration:

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/lanelet2/#join-multiple-lanelets","title":"Join Multiple lanelets","text":"

    In order to add (join) two or more lanelets to another lanelet, please follow these steps:

    1. Create multiple lanelets.
    2. You can join the first two lanelets as in the steps before.
    3. Please check the end point IDs of the first lanelet.
    4. Then, change the start point IDs of the third lanelet to these IDs. (Please change them by selecting the linestring of the lanelet.)
    5. You will see that two following lanes of the first lanelet have appeared.

    Video Demonstration:

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/lanelet2/#change-speed-limit-of-lanelet","title":"Change Speed Limit Of Lanelet","text":"

    In order to change the speed limit of lanelet, please follow these steps:

    1. Select the lanelet whose speed limit will be changed.
    2. Set the speed limit on the right panel.
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/lanelet2/#test-lanelets-with-planning-simulator","title":"Test lanelets with planning simulator","text":"

    After completing the lanelets, we need to save the map. To do that, please click File --> Export Lanelet2Maps and download the file.

    After the download is finished, we need to put the lanelet2 map and pointcloud map in the same location. The directory structure should be like this:

    <YOUR-MAP-DIRECTORY>/\n \u251c\u2500 pointcloud_map.pcd\n \u2514\u2500 lanelet2_map.osm\n

    If your .osm or .pcd map file's name is different from these names, you need to update autoware.launch.xml:

      <!-- Map -->\n-  <arg name=\"lanelet2_map_file\" default=\"lanelet2_map.osm\" description=\"lanelet2 map file name\"/>\n+  <arg name=\"lanelet2_map_file\" default=\"<YOUR-LANELET-MAP-NAME>.osm\" description=\"lanelet2 map file name\"/>\n-  <arg name=\"pointcloud_map_file\" default=\"pointcloud_map.pcd\" description=\"pointcloud map file name\"/>\n+  <arg name=\"pointcloud_map_file\" default=\"<YOUR-POINTCLOUD-MAP-NAME>.pcd\" description=\"pointcloud map file name\"/>\n

    Now we are ready to launch the planning simulator:

    ros2 launch autoware_launch planning_simulator.launch.xml map_path:=<YOUR-MAP-FOLDER-DIR> vehicle_model:=<YOUR-VEHICLE-MODEL> sensor_model:=<YOUR-SENSOR-KIT>\n

    Example for tutorial_vehicle:

    ros2 launch autoware_launch planning_simulator.launch.xml map_path:=$HOME/Files/autoware_map/tutorial_map/ vehicle_model:=tutorial_vehicle sensor_model:=tutorial_vehicle_sensor_kit vehicle_id:=tutorial_vehicle\n
    1. Click 2D Pose Estimate button on rviz or press P and give a pose for initialization.
    2. Click 2D Goal Pose button on rviz or press G and give a pose for goal point.

    Testing our created vector map with planning simulator"},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/speed-bump/","title":"Speed bump","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/speed-bump/#speed-bump","title":"Speed bump","text":"

    Behavior velocity planner's speed bump module plans velocity to slow down before a speed bump for comfortable and safe driving. In order to use it, we will add speed bumps to our lanelet2 map.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/speed-bump/#creating-a-speed-bump-element","title":"Creating a speed bump element","text":"

    In order to create a speed bump on your pointcloud map, please follow these steps:

    1. Select Linestring from the Lanelet2Maps section.
    2. Click and draw a polygon for the speed bump.
    3. Then, please disable Linestring in the Lanelet2Maps section.
    4. Click Change to Polygon from the Action panel.
    5. Please select this Polygon and enter speed_bump as the type.
    6. Then, please click the lanelet to which the speed bump will be added.
    7. Select Create General Regulatory Element.
    8. Go to this element, and please enter speed_bump as the subtype.
    9. Click Add refers and type the ID of the speed bump polygon you created.

    You can see these steps in the speed bump creating demonstration video:

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/speed-bump/#testing-created-the-speed-bump-element-with-planning-simulator","title":"Testing created the speed bump element with planning simulator","text":"

    After completing the map, we need to save it. To do that, please click File --> Export Lanelet2Maps and download the file.

    After the download is finished, we need to put the lanelet2 map and pointcloud map in the same location. The directory structure should be like this:

    + <YOUR-MAP-DIRECTORY>/\n+  \u251c\u2500 pointcloud_map.pcd\n+  \u2514\u2500 lanelet2_map.osm\n

    If your .osm or .pcd map file's name is different from these names, you need to update autoware.launch.xml:

      <!-- Map -->\n-  <arg name=\"lanelet2_map_file\" default=\"lanelet2_map.osm\" description=\"lanelet2 map file name\"/>\n+  <arg name=\"lanelet2_map_file\" default=\"<YOUR-LANELET-MAP-NAME>.osm\" description=\"lanelet2 map file name\"/>\n-  <arg name=\"pointcloud_map_file\" default=\"pointcloud_map.pcd\" description=\"pointcloud map file name\"/>\n+  <arg name=\"pointcloud_map_file\" default=\"<YOUR-POINTCLOUD-MAP-NAME>.pcd\" description=\"pointcloud map file name\"/>\n

    Note

    The speed bump module is not enabled by default. To enable it, please uncomment it in your behavior_velocity_planner.param.yaml.
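
    The exact location of this parameter file depends on how you launch Autoware. One way to locate it in a source workspace (the directory layout may differ between Autoware versions, so treat this as a hint only):

    find <YOUR-AUTOWARE-WORKSPACE>/src -name \"behavior_velocity_planner.param.yaml\"\n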

    Now we are ready to launch the planning simulator:

    ros2 launch autoware_launch planning_simulator.launch.xml map_path:=<YOUR-MAP-FOLDER-DIR> vehicle_model:=<YOUR-VEHICLE-MODEL> sensor_model:=<YOUR-SENSOR-KIT>\n

    Example for tutorial_vehicle:

    ros2 launch autoware_launch planning_simulator.launch.xml map_path:=$HOME/Files/autoware_map/tutorial_map/ vehicle_model:=tutorial_vehicle sensor_model:=tutorial_vehicle_sensor_kit vehicle_id:=tutorial_vehicle\n
    1. Click 2D Pose Estimate button on rviz or press P and give a pose for initialization.
    2. Click 2D Goal Pose button on rviz or press G and give a pose for goal point.
    3. You can see the speed bump marker on the rviz screen.

    Speed bump markers on rviz:

    Speed bump test on the created map.

    You can check your speed bump elements in the planning simulator as shown in this demonstration video:

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/stop-line/","title":"Stop Line","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/stop-line/#stop-line","title":"Stop Line","text":"

    Behavior velocity planner's stop line module plans velocity to stop right before stop lines and restart driving after stopping. In order to use it, we will add a stop line attribute to our lanelet2 map.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/stop-line/#creating-a-stop-line-regulatory-element","title":"Creating a stop line regulatory element","text":"

    In order to create a stop line on your pointcloud map, please follow these steps:

    1. Please select the lanelet to which the stop line will be added.
    2. Click the Abstraction button on the top panel.
    3. Select Stop Line from the panel.
    4. Click on the desired area to insert the stop line.

    You can see these steps in the stop line creating demonstration video:

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/stop-line/#testing-created-the-stop-line-element-with-planning-simulator","title":"Testing created the stop line element with planning simulator","text":"

    After completing the map, we need to save it. To do that, please click File --> Export Lanelet2Maps and download the file.

    After the download is finished, we need to put the lanelet2 map and pointcloud map in the same location. The directory structure should be like this:

    + <YOUR-MAP-DIRECTORY>/\n+  \u251c\u2500 pointcloud_map.pcd\n+  \u2514\u2500 lanelet2_map.osm\n

    If your .osm or .pcd map file's name is different from these names, you need to update autoware.launch.xml:

      <!-- Map -->\n-  <arg name=\"lanelet2_map_file\" default=\"lanelet2_map.osm\" description=\"lanelet2 map file name\"/>\n+  <arg name=\"lanelet2_map_file\" default=\"<YOUR-LANELET-MAP-NAME>.osm\" description=\"lanelet2 map file name\"/>\n-  <arg name=\"pointcloud_map_file\" default=\"pointcloud_map.pcd\" description=\"pointcloud map file name\"/>\n+  <arg name=\"pointcloud_map_file\" default=\"<YOUR-POINTCLOUD-MAP-NAME>.pcd\" description=\"pointcloud map file name\"/>\n

    Now we are ready to launch the planning simulator:

    ros2 launch autoware_launch planning_simulator.launch.xml map_path:=<YOUR-MAP-FOLDER-DIR> vehicle_model:=<YOUR-VEHICLE-MODEL> sensor_model:=<YOUR-SENSOR-KIT>\n

    Example for tutorial_vehicle:

    ros2 launch autoware_launch planning_simulator.launch.xml map_path:=$HOME/Files/autoware_map/tutorial_map/ vehicle_model:=tutorial_vehicle sensor_model:=tutorial_vehicle_sensor_kit vehicle_id:=tutorial_vehicle\n
    1. Click 2D Pose Estimate button on rviz or press P and give a pose for initialization.
    2. Click 2D Goal Pose button on rviz or press G and give a pose for goal point.
    3. You can see the stop line marker on the rviz screen.

    Stop line markers on rviz:

    Stop line test on the created map.

    You can check your stop line elements in the planning simulator as shown in this demonstration video:

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/traffic-light/","title":"Traffic light","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/traffic-light/#traffic-light","title":"Traffic light","text":"

    Behavior velocity planner's traffic light module plans velocity according to the traffic light status. In order to use it, we will add a traffic light attribute to our lanelet2 map.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/traffic-light/#creating-a-traffic-light-regulatory-element","title":"Creating a traffic light regulatory element","text":"

    In order to create a traffic light on your pointcloud map, please follow these steps:

    1. Please select the lanelet to which the traffic light will be added.
    2. Click the Abstraction button on the top panel.
    3. Select Traffic Light from the panel.
    4. Click on the desired area to insert the traffic light.

    You can see these steps in the traffic-light creating demonstration video:

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/creating-vector-map/traffic-light/#testing-created-the-traffic-light-element-with-planning-simulator","title":"Testing created the traffic light element with planning simulator","text":"

    After completing the map, we need to save it. To do that, please click File --> Export Lanelet2Maps and download the file.

    After the download is finished, we need to put the lanelet2 map and pointcloud map in the same location. The directory structure should be like this:

    + <YOUR-MAP-DIRECTORY>/\n+  \u251c\u2500 pointcloud_map.pcd\n+  \u2514\u2500 lanelet2_map.osm\n

    If your .osm or .pcd map file's name is different from these names, you need to update autoware.launch.xml:

      <!-- Map -->\n-  <arg name=\"lanelet2_map_file\" default=\"lanelet2_map.osm\" description=\"lanelet2 map file name\"/>\n+  <arg name=\"lanelet2_map_file\" default=\"<YOUR-LANELET-MAP-NAME>.osm\" description=\"lanelet2 map file name\"/>\n-  <arg name=\"pointcloud_map_file\" default=\"pointcloud_map.pcd\" description=\"pointcloud map file name\"/>\n+  <arg name=\"pointcloud_map_file\" default=\"<YOUR-POINTCLOUD-MAP-NAME>.pcd\" description=\"pointcloud map file name\"/>\n

    Now we are ready to launch the planning simulator:

    ros2 launch autoware_launch planning_simulator.launch.xml map_path:=<YOUR-MAP-FOLDER-DIR> vehicle_model:=<YOUR-VEHICLE-MODEL> sensor_model:=<YOUR-SENSOR-KIT>\n

    Example for tutorial_vehicle:

    ros2 launch autoware_launch planning_simulator.launch.xml map_path:=$HOME/Files/autoware_map/tutorial_map/ vehicle_model:=tutorial_vehicle sensor_model:=tutorial_vehicle_sensor_kit vehicle_id:=tutorial_vehicle\n
    1. Click 2D Pose Estimate button on rviz or press P and give a pose for initialization.
    2. Click Panels -> Add new panel, select TrafficLightPublishPanel, and then press OK.
    3. In TrafficLightPublishPanel, set the ID and color of the traffic light.
    4. Then, click the SET and PUBLISH buttons.
    5. Click 2D Goal Pose button on rviz or press G and give a pose for goal point.
    6. You can see the traffic light marker on the rviz screen if you set the traffic light color as RED.

    Traffic Light markers on rviz:

    Traffic light test on the created map.

    You can check your traffic light elements in the planning simulator as shown in this demonstration video:

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/","title":"Available Open Source SLAM","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/#available-open-source-slam","title":"Available Open Source SLAM","text":"

    This page provides a list of available open source Simultaneous Localization And Mapping (SLAM) implementations that can be used to generate a point cloud (.pcd) map file.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/#selecting-which-implementation-to-use","title":"Selecting which implementation to use","text":"

    LiDAR odometry drift accumulates over time, and there are solutions to this problem such as graph optimization, loop closure, and using a GPS sensor to reduce the accumulated drift error. Because of that, a SLAM algorithm should ideally provide a loop closure feature and graph optimization, and should be able to use a GPS sensor. Additionally, some algorithms use an IMU sensor to add another factor to the graph to reduce drift error. While some algorithms strictly require a 9-axis IMU sensor, others require only a 6-axis IMU sensor or do not use an IMU at all. Before choosing an algorithm to create maps for Autoware, please consider these factors depending on your sensor setup and the expected quality of the generated map.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/#tips","title":"Tips","text":"

    Commonly used open-source SLAM implementations are lidarslam-ros2 (LiDAR, IMU*) and LIO-SAM (LiDAR, IMU, GNSS). The required sensor data for each algorithm is specified in the parentheses, where an asterisk (*) indicates that such sensor data is optional. For supported LiDAR models, please check the GitHub repository of each algorithm. While these ROS 2-based SLAM implementations can be easily installed and used directly on the same machine that runs Autoware, it is important to note that they may not be as well-tested or as mature as ROS 1-based alternatives.

    The notable open-source SLAM implementations that are based on ROS 1 include hdl-graph-slam (LiDAR, IMU*, GNSS*), LeGO-LOAM (LiDAR, IMU*), LeGO-LOAM-BOR (LiDAR), and LIO-SAM (LiDAR, IMU, GNSS).

    Most of these algorithms already have a built-in loop-closure and pose graph optimization. However, if the built-in, automatic loop-closure fails or does not work correctly, you can use Interactive SLAM to adjust and optimize a pose graph manually.
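
    Whichever implementation you choose, the usual workflow is to record the required sensor topics to a bag while driving the mapping route and then feed that bag to the SLAM package offline. A minimal sketch, assuming ROS 2 and placeholder topic names that must be replaced with the topics published by your sensor drivers:

    # record LiDAR, IMU, and (optionally) GNSS data for offline mapping\nros2 bag record -o mapping_run <YOUR-LIDAR-POINTCLOUD-TOPIC> <YOUR-IMU-TOPIC> <YOUR-NAVSATFIX-TOPIC>\n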

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/#list-of-third-party-slam-implementations","title":"List of Third Party SLAM Implementations","text":"Package Name Explanation Repository Link Loop Closure Sensors ROS Version Dependencies DLIO Direct LiDAR-Inertial Odometry is a new lightweight LiDAR-inertial odometry algorithm with a novel coarse-to-fine approach in constructing continuous-time trajectories for precise motion correction github.com/vectr-ucla/direct_lidar_inertial_odometry \u2714\ufe0f LidarIMU ROS 2 PCLEigenOpenMP FAST-LIO-LC A computationally efficient and robust LiDAR-inertial odometry package with loop closure module and graph optimization github.com/yanliang-wang/FAST_LIO_LC \u2714\ufe0f LidarIMUGPS [Optional] ROS 1 ROS MelodicPCL >= 1.8Eigen >= 3.3.4GTSAM >= 4.0.0 FAST_LIO_SLAM FAST_LIO_SLAM is the integration of FAST_LIO and SC-PGO which is scan context based loop detection and GTSAM based pose-graph optimization github.com/gisbi-kim/FAST_LIO_SLAM \u2714\ufe0f LidarIMUGPS [Optional] ROS 1 PCL >= 1.8Eigen >= 3.3.4 FD-SLAM FD_SLAM is Feature&Distribution-based 3D LiDAR SLAM method based on Surface Representation Refinement. In this algorithm novel feature-based Lidar odometry used for fast scan-matching, and used a proposed UGICP method for keyframe matching github.com/SLAMWang/FD-SLAM \u2714\ufe0f LidarIMU [Optional]GPS ROS 1 PCLg2oSuitesparse GenZ-ICP GenZ-ICP is a Generalizable and Degeneracy-Robust LiDAR Odometry Using an Adaptive Weighting github.com/cocel-postech/genz-icp \u274c Lidar ROS 2 No extra dependency hdl_graph_slam An open source ROS package for real-time 6DOF SLAM using a 3D LIDAR. It is based on 3D Graph SLAM with NDT scan matching-based odometry estimation and loop detection. It also supports several graph constraints, such as GPS, IMU acceleration (gravity vector), IMU orientation (magnetic sensor), and floor plane (detected in a point cloud) github.com/koide3/hdl_graph_slam \u2714\ufe0f LidarIMU [Optional]GPS [Optional] ROS 1 PCLg2oOpenMP IA-LIO-SAM IA_LIO_SLAM is created for data acquisition in unstructured environment and it is a framework for Intensity and Ambient Enhanced Lidar Inertial Odometry via Smoothing and Mapping that achieves highly accurate robot trajectories and mapping github.com/minwoo0611/IA_LIO_SAM \u2714\ufe0f LidarIMUGPS ROS 1 GTSAM ISCLOAM ISCLOAM presents a robust loop closure detection approach by integrating both geometry and intensity information github.com/wh200720041/iscloam \u2714\ufe0f Lidar ROS 1 Ubuntu 18.04ROS MelodicCeresPCLGTSAMOpenCV KISS-ICP A simple and fast ICP algorithm for 3D point cloud registration github.com/PRBonn/kiss-icp \u274c Lidar ROS 2 No extra dependency Kinematic-ICP Kinematic-ICP is a LiDAR odometry approach that explicitly incorporates the kinematic constraints of mobile robots into the classic point-to-point ICP algorithm. github.com/PRBonn/kinematic-icp \u274c LidarOdometry ROS 2 No extra dependency LeGO-LOAM-BOR LeGO-LOAM-BOR is improved version of the LeGO-LOAM by improving quality of the code, making it more readable and consistent. Also, performance is improved by converting processes to multi-threaded approach github.com/facontidavide/LeGO-LOAM-BOR ROS2 fork: /github.com/eperdices/LeGO-LOAM-SR \u2714\ufe0f LidarIMU ROS 1ROS 2 ROS 1/2PCLGTSAM LIO_SAM A framework that achieves highly accurate, real-time mobile robot trajectory estimation and map-building. 
It formulates lidar-inertial odometry atop a factor graph, allowing a multitude of relative and absolute measurements, including loop closures, to be incorporated from different sources as factors into the system github.com/TixiaoShan/LIO-SAM \u2714\ufe0f LidarIMUGPS [Optional] ROS 1ROS 2 PCLGTSAM li_slam_ros2 li_slam package is a combination of lidarslam_ros2 and the LIO-SAM IMU composite method. github.com/rsasaki0109/li_slam_ros2 \u2714\ufe0f LidarIMUGPS [Optional] ROS 2 PCLGTSAM Optimized-SC-F-LOAM An improved version of F-LOAM and uses an adaptive threshold to further judge the loop closure detection results and reducing false loop closure detections. Also it uses feature point-based matching to calculate the constraints between a pair of loop closure frame point clouds and decreases time consumption of constructing loop frame constraints github.com/SlamCabbage/Optimized-SC-F-LOAM \u2714\ufe0f Lidar ROS 1 PCLGTSAMCeres SC-A-LOAM A real-time LiDAR SLAM package that integrates A-LOAM and ScanContext. github.com/gisbi-kim/SC-A-LOAM \u2714\ufe0f Lidar ROS 1 GTSAM >= 4.0 SC-LeGO-LOAM SC-LeGO-LOAM integrated LeGO-LOAM for lidar odometry and 2 different loop closure methods: ScanContext and Radius search based loop closure. While ScanContext is correcting large drifts, radius search based method is good for fine-stitching github.com/gisbi-kim/SC-LeGO-LOAM \u2714\ufe0f LidarIMU ROS 1 PCLGTSAM"},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-lc/","title":"FAST_LIO_LC","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-lc/#fast_lio_lc","title":"FAST_LIO_LC","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-lc/#what-is-fast_lio_lc","title":"What is FAST_LIO_LC?","text":"
    • A computationally efficient and robust LiDAR-inertial odometry package with loop closure module and graph optimization.
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-lc/#repository-information","title":"Repository Information","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-lc/#original-repository-link","title":"Original Repository link","text":"

    https://github.com/yanliang-wang/FAST_LIO_LC

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-lc/#required-sensors","title":"Required Sensors","text":"
    • LIDAR [Velodyne, Ouster, Livox]
    • IMU [6-AXIS, 9-AXIS]
    • GPS [Optional]
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-lc/#ros-compatibility","title":"ROS Compatibility","text":"
    • ROS 1
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-lc/#dependencies","title":"Dependencies","text":"
    • Ubuntu 18.04
    • ROS Melodic
    • PCL >= 1.8, Follow PCL Installation.
    • Eigen >= 3.3.4, Follow Eigen Installation.
    • GTSAM >= 4.0.0, Follow GTSAM Installation.
      wget -O ~/Downloads/gtsam.zip https://github.com/borglab/gtsam/archive/4.0.0-alpha2.zip\n  cd ~/Downloads/ && unzip gtsam.zip -d ~/Downloads/\n  cd ~/Downloads/gtsam-4.0.0-alpha2/\n  mkdir build && cd build\n  cmake ..\n  sudo make install\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-lc/#build-run","title":"Build & Run","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-lc/#1-build","title":"1) Build","text":"
        mkdir -p ~/ws_fastlio_lc/src\n    cd ~/ws_fastlio_lc/src\n    git clone https://github.com/yanliang-wang/FAST_LIO_LC.git\n    git clone https://github.com/Livox-SDK/livox_ros_driver\n    cd ..\n    catkin_make\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-lc/#2-set-parameters","title":"2) Set parameters","text":"
    • After downloading the repository, change the topic and sensor settings in the config file (workspace/src/FAST_LIO_LC/FAST_LIO/config/ouster64_mulran.yaml) to match the lidar topic name in your bag file.
    • For IMU-lidar compatibility, the extrinsic matrices from calibration must be changed.
    • To enable auto-save, pcd_save_enable must be set to 1 in the launch file (workspace/src/FAST_LIO_LC/FAST_LIO/launch/mapping_ouster64_mulran.launch).
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-lc/#3-run","title":"3) Run","text":"
    • For Ouster OS1-64
      # open new terminal: run FAST-LIO\nroslaunch fast_lio mapping_ouster64.launch\n\n# open the other terminal tab: run SC-PGO\nroslaunch aloam_velodyne fastlio_ouster64.launch\n\n# play bag file in the other terminal\nrosbag play RECORDED_BAG.bag --clock\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-lc/#example-result","title":"Example Result","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-lc/#other-examples","title":"Other Examples","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-lc/#example-dataset","title":"Example dataset","text":"

    Check original repository link for example dataset.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-lc/#contact","title":"Contact","text":"
    • Maintainer: Yanliang Wang (wyl410922@qq.com)
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-lc/#acknowledgements","title":"Acknowledgements","text":"
    • Thanks to the FAST_LIO authors.
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-slam/","title":"FAST_LIO_SLAM","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-slam/#fast_lio_slam","title":"FAST_LIO_SLAM","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-slam/#what-is-fast_lio_slam","title":"What is FAST_LIO_SLAM?","text":"
    • FAST_LIO_SLAM is the integration of FAST_LIO and SC-PGO, which provides Scan Context based loop detection and GTSAM based pose-graph optimization.
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-slam/#repository-information","title":"Repository Information","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-slam/#original-repository-link","title":"Original Repository link","text":"

    https://github.com/gisbi-kim/FAST_LIO_SLAM

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-slam/#required-sensors","title":"Required Sensors","text":"
    • LIDAR [Livox, Velodyne, Ouster]
    • IMU [6-AXIS, 9-AXIS]
    • GPS [OPTIONAL]
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-slam/#ros-compatibility","title":"ROS Compatibility","text":"
    • ROS 1
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-slam/#dependencies","title":"Dependencies","text":"
    • ROS
    • PCL
    • GTSAM
    wget -O ~/Downloads/gtsam.zip https://github.com/borglab/gtsam/archive/4.0.0-alpha2.zip\ncd ~/Downloads/ && unzip gtsam.zip -d ~/Downloads/\ncd ~/Downloads/gtsam-4.0.0-alpha2/\nmkdir build && cd build\ncmake ..\nsudo make install\n
    • PCL >= 1.8, Follow PCL Installation.
    • Eigen >= 3.3.4, Follow Eigen Installation.
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-slam/#build-run","title":"Build & Run","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-slam/#1-build","title":"1) Build","text":"
        mkdir -p ~/catkin_fastlio_slam/src\n    cd ~/catkin_fastlio_slam/src\n    git clone https://github.com/gisbi-kim/FAST_LIO_SLAM.git\n    git clone https://github.com/Livox-SDK/livox_ros_driver\n    cd ..\n    catkin_make\n    source devel/setup.bash\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-slam/#2-set-parameters","title":"2) Set parameters","text":"
    • Set imu and lidar topic on Fast_LIO/config/ouster64.yaml
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-slam/#3-run","title":"3) Run","text":"
    # terminal 1: run FAST-LIO2\nroslaunch fast_lio mapping_ouster64.launch\n\n    # terminal 2: run SC-PGO\ncd ~/catkin_fastlio_slam\n    source devel/setup.bash\n    roslaunch aloam_velodyne fastlio_ouster64.launch\n\n    # terminal 3: play the bag file\nrosbag play xxx.bag --clock --pause\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-slam/#example-result","title":"Example Result","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-slam/#other-examples","title":"Other Examples","text":"
    • Tutorial video 1 (using KAIST 03 sequence of MulRan dataset)

      • Example result captures

      • download the KAIST 03 pcd map made by FAST-LIO-SLAM, 500MB
    • Example Video 2 (Riverside 02 sequence of MulRan dataset)
      • Example result captures

      • download the Riverside 02 pcd map made by FAST-LIO-SLAM, 400MB
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-slam/#acknowledgements","title":"Acknowledgements","text":"
    • Thanks to the FAST_LIO authors.
    • You may also be interested in this version of FAST-LIO with loop closure, implemented by yanliang-wang.
    • Maintainer: Giseop Kim (paulgkim@kaist.ac.kr)
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fd-slam/","title":"FD-SLAM","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fd-slam/#fd-slam","title":"FD-SLAM","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fd-slam/#what-is-fd-slam","title":"What is FD-SLAM?","text":"
    • FD-SLAM is a Feature & Distribution-based 3D LiDAR SLAM method based on surface representation refinement. The algorithm uses a novel feature-based Lidar odometry for fast scan-matching and a proposed UGICP method for keyframe matching.
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fd-slam/#repository-information","title":"Repository Information","text":"

    This is an open source ROS package for real-time 6DOF SLAM using a 3D LIDAR.

    It is based on hdl_graph_slam, and the steps to run the system are the same as for hdl_graph_slam.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fd-slam/#original-repository-link","title":"Original Repository link","text":"

    https://github.com/SLAMWang/FD-SLAM

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fd-slam/#required-sensors","title":"Required Sensors","text":"
    • LIDAR [VLP-16, HDL-32, HDL-64, OS1-64]
    • GPS
    • IMU [Optional]
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fd-slam/#ros-compatibility","title":"ROS Compatibility","text":"
    • ROS 1
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fd-slam/#dependencies","title":"Dependencies","text":"
    • ROS
    • PCL
    • g2o
    • Suitesparse

    The following ROS packages are required:

    • geodesy
    • nmea_msgs
    • pcl_ros
    • ndt_omp
    • U_gicp: a modified version of fast_gicp by the authors; it is used as UGICP for keyframe matching.
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fd-slam/#build-run","title":"Build & Run","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fd-slam/#1-build","title":"1) Build","text":"
    cd ~/catkin_ws/src\ngit clone https://github.com/SLAMWang/FD-SLAM.git\ncd ..\ncatkin_make\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fd-slam/#2-services","title":"2) Services","text":"
    /hdl_graph_slam/dump  (hdl_graph_slam/DumpGraph)\n- save all the internal data (point clouds, floor coeffs, odoms, and pose graph) to a directory.\n\n/hdl_graph_slam/save_map (hdl_graph_slam/SaveMap)\n- save the generated map as a PCD file.\n
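
    Since FD-SLAM reuses the hdl_graph_slam service interface, the generated map can be saved in the same way; a minimal sketch (the resolution and destination path are placeholder values):

    rosservice call /hdl_graph_slam/save_map \"resolution: 0.1\ndestination: '/full_path_directory/map.pcd'\"\n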
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fd-slam/#3-set-parameters","title":"3) Set parameters","text":"
    • All the configurable parameters are listed in launch/****.launch as ros params.
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/fd-slam/#4-run","title":"4) Run","text":"
    source devel/setup.bash\nroslaunch hdl_graph_slam hdl_graph_slam_400_ours.launch\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/hdl-graph-slam/","title":"hdl_graph_slam","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/hdl-graph-slam/#hdl_graph_slam","title":"hdl_graph_slam","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/hdl-graph-slam/#what-is-hdl_graph_slam","title":"What is hdl_graph_slam?","text":"
    • An open source ROS package for real-time 6DOF SLAM using a 3D LIDAR. It is based on 3D Graph SLAM with NDT scan matching-based odometry estimation and loop detection. It also supports several graph constraints, such as GPS, IMU acceleration (gravity vector), IMU orientation (magnetic sensor), and floor plane (detected in a point cloud).
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/hdl-graph-slam/#repository-information","title":"Repository Information","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/hdl-graph-slam/#original-repository-link","title":"Original Repository link","text":"

    https://github.com/koide3/hdl_graph_slam

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/hdl-graph-slam/#required-sensors","title":"Required Sensors","text":"
    • LIDAR [Velodyne, Ouster, RoboSense]
    • IMU [6-AXIS, 9-AXIS] [OPTIONAL]
    • GPS [OPTIONAL]
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/hdl-graph-slam/#ros-compatibility","title":"ROS Compatibility","text":"
    • ROS 1
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/hdl-graph-slam/#dependencies","title":"Dependencies","text":"
    • ROS
    • PCL
    • g2o
    • OpenMP

    The following ROS packages are required:

    • geodesy
    • nmea_msgs
    • pcl_ros
    • ndt_omp
    • fast_gicp
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/hdl-graph-slam/#build-run","title":"Build & Run","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/hdl-graph-slam/#1-build","title":"1) Build","text":"
    # for melodic\nsudo apt-get install ros-melodic-geodesy ros-melodic-pcl-ros ros-melodic-nmea-msgs ros-melodic-libg2o\ncd catkin_ws/src\ngit clone https://github.com/koide3/ndt_omp.git -b melodic\ngit clone https://github.com/SMRT-AIST/fast_gicp.git --recursive\ngit clone https://github.com/koide3/hdl_graph_slam\n\ncd .. && catkin_make -DCMAKE_BUILD_TYPE=Release\n\n# for noetic\nsudo apt-get install ros-noetic-geodesy ros-noetic-pcl-ros ros-noetic-nmea-msgs ros-noetic-libg2o\n\ncd catkin_ws/src\ngit clone https://github.com/koide3/ndt_omp.git\ngit clone https://github.com/SMRT-AIST/fast_gicp.git --recursive\ngit clone https://github.com/koide3/hdl_graph_slam\n\ncd .. && catkin_make -DCMAKE_BUILD_TYPE=Release\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/hdl-graph-slam/#2-set-parameter","title":"2) Set parameter","text":"
    • Set the lidar topic in launch/hdl_graph_slam_400.launch (see the example after this list).
    • Set the registration settings in launch/hdl_graph_slam_400.launch.
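
    If your checkout exposes the lidar topic as a launch argument (recent versions of hdl_graph_slam provide a points_topic argument for this; treat the exact name as an assumption and check your launch file), it can also be overridden from the command line instead of editing the file:

    roslaunch hdl_graph_slam hdl_graph_slam_400.launch points_topic:=/your/lidar/points\n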
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/hdl-graph-slam/#3-run","title":"3) Run","text":"
    rosparam set use_sim_time true\nroslaunch hdl_graph_slam hdl_graph_slam_400.launch\n
    roscd hdl_graph_slam/rviz\nrviz -d hdl_graph_slam.rviz\n
    rosbag play --clock hdl_400.bag\n

    Save the generated map by:

    rosservice call /hdl_graph_slam/save_map \"resolution: 0.05\ndestination: '/full_path_directory/map.pcd'\"\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/hdl-graph-slam/#example-result","title":"Example Result","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/hdl-graph-slam/#example2-outdoor","title":"Example2 (Outdoor)","text":"

    Bag file (recorded in an outdoor environment):

    • hdl_400.bag.tar.gz (raw data, about 900MB)
    rosparam set use_sim_time true\nroslaunch hdl_graph_slam hdl_graph_slam_400.launch\n
    roscd hdl_graph_slam/rviz\nrviz -d hdl_graph_slam.rviz\n
    rosbag play --clock dataset.bag\n

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/hdl-graph-slam/#papers","title":"Papers","text":"

    Kenji Koide, Jun Miura, and Emanuele Menegatti, A Portable 3D LIDAR-based System for Long-term and Wide-area People Behavior Measurement, Advanced Robotic Systems, 2019 [link].

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/hdl-graph-slam/#contact","title":"Contact","text":"

    Kenji Koide, k.koide@aist.go.jp, https://staff.aist.go.jp/k.koide

    [Active Intelligent Systems Laboratory, Toyohashi University of Technology, Japan] [Mobile Robotics Research Team, National Institute of Advanced Industrial Science and Technology (AIST), Japan]

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/ia-lio-slam/","title":"IA-LIO-SAM","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/ia-lio-slam/#ia-lio-sam","title":"IA-LIO-SAM","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/ia-lio-slam/#what-is-ia-lio-sam","title":"What is IA-LIO-SAM?","text":"
    • IA_LIO_SAM was created for data acquisition in unstructured environments. It is a framework for Intensity and Ambient Enhanced Lidar Inertial Odometry via Smoothing and Mapping that achieves highly accurate robot trajectories and mapping.
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/ia-lio-slam/#repository-information","title":"Repository Information","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/ia-lio-slam/#original-repository-link","title":"Original Repository link","text":"

    https://github.com/minwoo0611/IA_LIO_SAM

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/ia-lio-slam/#required-sensors","title":"Required Sensors","text":"
    • LIDAR [Velodyne, Ouster]
    • IMU [9-AXIS]
    • GNSS
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/ia-lio-slam/#ros-compatibility","title":"ROS Compatibility","text":"
    • ROS 1
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/ia-lio-slam/#dependencies","title":"Dependencies","text":"
    • ROS (tested with Kinetic and Melodic)

      • for ROS melodic:

        sudo apt-get install -y ros-melodic-navigation\nsudo apt-get install -y ros-melodic-robot-localization\nsudo apt-get install -y ros-melodic-robot-state-publisher\n
      • for ROS kinetic:

        sudo apt-get install -y ros-kinetic-navigation\nsudo apt-get install -y ros-kinetic-robot-localization\nsudo apt-get install -y ros-kinetic-robot-state-publisher\n
    • GTSAM (Georgia Tech Smoothing and Mapping library)

      wget -O ~/Downloads/gtsam.zip https://github.com/borglab/gtsam/archive/4.0.2.zip\ncd ~/Downloads/ && unzip gtsam.zip -d ~/Downloads/\ncd ~/Downloads/gtsam-4.0.2/\nmkdir build && cd build\ncmake -DGTSAM_BUILD_WITH_MARCH_NATIVE=OFF ..\nsudo make install -j8\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/ia-lio-slam/#build-run","title":"Build & Run","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/ia-lio-slam/#1-build","title":"1) Build","text":"
        mkdir -p ~/catkin_ia_lio/src\n    cd ~/catkin_ia_lio/src\n    git clone https://github.com/minwoo0611/IA_LIO_SAM\n    cd ..\n    catkin_make\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/ia-lio-slam/#2-set-parameters","title":"2) Set parameters","text":"
    • After downloading the repository, update the topic and sensor settings in the config file (workspace/src/IA_LIO_SAM/config/params.yaml); a quick way to locate the relevant keys is shown after this list.
    • For imu-lidar compatibility, the extrinsic matrices obtained from calibration must be updated.
    • To enable autosave, savePCD must be set to true in the params.yaml file (workspace/src/IA_LIO_SAM/config/params.yaml).
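
    To locate these settings in the file before editing, a simple grep is enough; a minimal sketch (the workspace path follows the build step above, and the key names are the ones mentioned in this list):

    grep -n -E \"Topic|extrinsic|savePCD\" ~/catkin_ia_lio/src/IA_LIO_SAM/config/params.yaml\n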
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/ia-lio-slam/#3-run","title":"3) Run","text":"
      # open new terminal: run IA_LIO\n  source devel/setup.bash\n  roslaunch lio_sam mapping_ouster64.launch\n\n  # play bag file in the other terminal\n  rosbag play RECORDED_BAG.bag --clock\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/ia-lio-slam/#sample-dataset-images","title":"Sample dataset images","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/ia-lio-slam/#example-dataset","title":"Example dataset","text":"

    Check the original repository link for an example dataset.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/ia-lio-slam/#contact","title":"Contact","text":"
    • Maintainer: Kevin Jung (GitHub: minwoo0611)
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/ia-lio-slam/#paper","title":"Paper","text":"

    Thank you for citing IA-LIO-SAM (./config/doc/KRS-2021-17.pdf) if you use any of this code.

    Part of the code is adapted from LIO-SAM (IROS-2020).

    @inproceedings{legoloam2018shan,\n  title={LeGO-LOAM: Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain},\n  author={Shan, Tixiao and Englot, Brendan},\n  booktitle={IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},\n  pages={4758-4765},\n  year={2018},\n  organization={IEEE}\n}\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/ia-lio-slam/#acknowledgements","title":"Acknowledgements","text":"
    • IA-LIO-SAM is based on LIO-SAM (T. Shan, B. Englot, D. Meyers, W. Wang, C. Ratti, and D. Rus. LIO-SAM: Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping).
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/iscloam/","title":"ISCLOAM","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/iscloam/#iscloam","title":"ISCLOAM","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/iscloam/#what-is-iscloam","title":"What is ISCLOAM?","text":"
    • ISCLOAM presents a robust loop closure detection approach by integrating both geometry and intensity information.
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/iscloam/#repository-information","title":"Repository Information","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/iscloam/#original-repository-link","title":"Original Repository link","text":"

    https://github.com/wh200720041/iscloam

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/iscloam/#required-sensors","title":"Required Sensors","text":"
    • LIDAR [Velodyne]
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/iscloam/#ros-compatibility","title":"ROS Compatibility","text":"
    • ROS 1
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/iscloam/#dependencies","title":"Dependencies","text":"
    • Ubuntu 64-bit 18.04
    • ROS Melodic ROS Installation
    • Ceres Solver Ceres Installation
    • PCL PCL Installation
    • Gtsam GTSAM Installation
    • OpenCV OPENCV Installation
    • Trajectory visualization

    For visualization purposes, this package uses the hector trajectory server; you may install it with

    sudo apt-get install ros-melodic-hector-trajectory-server\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/iscloam/#build-and-run","title":"Build and Run","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/iscloam/#1-clone-repository","title":"1. Clone repository","text":"
    cd ~/catkin_ws/src\ngit clone https://github.com/wh200720041/iscloam.git\ncd ..\ncatkin_make -j1\nsource ~/catkin_ws/devel/setup.bash\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/iscloam/#2-set-parameter","title":"2. Set Parameter","text":"

    Change the bag location and sensor parameters in the launch files.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/iscloam/#3-launch","title":"3. Launch","text":"
    roslaunch iscloam iscloam.launch\n

    If you would like to generate the map of the environment at the same time, you can run

    roslaunch iscloam iscloam_mapping.launch\n

    Note that the global map can be very large, so global optimization may take a while; some lag between the trajectory and the map is expected since they run in separate threads. CPU usage increases when a loop closure is identified.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/iscloam/#example-result","title":"Example Result","text":"

    Watch the demo video at Video Link.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/iscloam/#ground-truth-comparison","title":"Ground Truth Comparison","text":"

    Green: ISCLOAM, Red: Ground Truth (trajectory comparison figures for KITTI sequence 00 and KITTI sequence 05).
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/iscloam/#citation","title":"Citation","text":"

    If you use this work for your research, please cite the paper below; your citation will be appreciated.

    @inproceedings{wang2020intensity,\n  author={H. {Wang} and C. {Wang} and L. {Xie}},\n  booktitle={2020 IEEE International Conference on Robotics and Automation (ICRA)},\n  title={Intensity Scan Context: Coding Intensity and Geometry Relations for Loop Closure Detection},\n  year={2020},\n  volume={},\n  number={},\n  pages={2095-2101},\n  doi={10.1109/ICRA40945.2020.9196764}\n}\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/iscloam/#acknowledgements","title":"Acknowledgements","text":"

    Thanks to A-LOAM, LOAM (J. Zhang and S. Singh. LOAM: Lidar Odometry and Mapping in Real-time), and LOAM_NOTED.

    Author: Wang Han, Nanyang Technological University, Singapore

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lego-loam-bor/","title":"LeGO-LOAM-BOR","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lego-loam-bor/#lego-loam-bor","title":"LeGO-LOAM-BOR","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lego-loam-bor/#what-is-lego-loam-bor","title":"What is LeGO-LOAM-BOR?","text":"
    • LeGO-LOAM-BOR is an improved version of LeGO-LOAM that raises the quality of the code, making it more readable and consistent. Performance is also improved by converting the processing to a multi-threaded approach.
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lego-loam-bor/#repository-information","title":"Repository Information","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lego-loam-bor/#original-repository-link","title":"Original Repository link","text":"

    https://github.com/facontidavide/LeGO-LOAM-BOR

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lego-loam-bor/#required-sensors","title":"Required Sensors","text":"
    • LIDAR [VLP-16]
    • IMU [9-AXIS]
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lego-loam-bor/#ros-compatibility","title":"ROS Compatibility","text":"
    • ROS 1
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lego-loam-bor/#dependencies","title":"Dependencies","text":"
    • ROS Melodic ROS Installation
    • PCL PCL Installation
    • Gtsam GTSAM Installation
    wget -O ~/Downloads/gtsam.zip https://github.com/borglab/gtsam/archive/4.0.0-alpha2.zip\ncd ~/Downloads/ && unzip gtsam.zip -d ~/Downloads/\ncd ~/Downloads/gtsam-4.0.0-alpha2/\nmkdir build && cd build\ncmake ..\nsudo make install\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lego-loam-bor/#build-run","title":"Build & Run","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lego-loam-bor/#1-build","title":"1) Build","text":"
    cd ~/catkin_ws/src\ngit clone https://github.com/facontidavide/LeGO-LOAM-BOR.git\ncd ..\ncatkin_make\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lego-loam-bor/#2-set-parameters","title":"2) Set parameters","text":"
    • Set parameters on LeGo-LOAM/loam_config.yaml
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lego-loam-bor/#3-run","title":"3) Run","text":"
    source devel/setup.bash\nroslaunch lego_loam_bor run.launch rosbag:=/path/to/your/rosbag lidar_topic:=/velodyne_points\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lego-loam-bor/#example-result","title":"Example Result","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lego-loam-bor/#cite-lego-loam","title":"Cite LeGO-LOAM","text":"

    Thank you for citing our LeGO-LOAM paper if you use any of this code:

    @inproceedings{legoloam2018,\n  title={LeGO-LOAM: Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain},\n  author={Tixiao Shan and Brendan Englot},\n  booktitle={IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},\n  pages={4758-4765},\n  year={2018},\n  organization={IEEE}\n}\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lio-sam/","title":"LIO-SAM","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lio-sam/#lio-sam","title":"LIO-SAM","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lio-sam/#what-is-lio-sam","title":"What is LIO-SAM?","text":"
    • A framework that achieves highly accurate, real-time mobile robot trajectory estimation and map-building. It formulates lidar-inertial odometry atop a factor graph, allowing a multitude of relative and absolute measurements, including loop closures, to be incorporated from different sources as factors into the system.
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lio-sam/#repository-information","title":"Repository Information","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lio-sam/#original-repository-link","title":"Original Repository link","text":"

    https://github.com/TixiaoShan/LIO-SAM

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lio-sam/#required-sensors","title":"Required Sensors","text":"
    • LIDAR [Livox, Velodyne, Ouster, Robosense*]
    • IMU [9-AXIS]
    • GPS [OPTIONAL]

    *Robosense lidars aren't supported officially, but their Helios series can be used as Velodyne lidars.

    The system architecture of the LIO-SAM method is described in the following diagram; please look at the official repository for more information.

    System Architecture of LIO-SAM

    We are using a Robosense Helios 5515 lidar and a CLAP B7 GNSS/INS sensor on tutorial_vehicle, so we will use these sensors for running LIO-SAM.

    Additionally, LIO-SAM was tested with an Applanix POS LVX and Hesai Pandar XT32 sensor setup. Some additional information specific to these sensors is provided on this page.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lio-sam/#ros-compatibility","title":"ROS Compatibility","text":"

    Since Autoware currently uses ROS 2 Humble, we will continue with the ROS 2 version of LIO-SAM.

    • ROS
    • ROS 2 (also compatible with the Humble distro)
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lio-sam/#dependencies","title":"Dependencies","text":"

    ROS 2 dependencies:

    • perception-pcl
    • pcl-msgs
    • vision-opencv
    • xacro

    To install these dependencies, you can use this bash command in your terminal:

    sudo apt install ros-humble-perception-pcl \\\nros-humble-pcl-msgs \\\nros-humble-vision-opencv \\\nros-humble-xacro\n

    Other dependencies:

    • gtsam (Georgia Tech Smoothing and Mapping library)

    To install the gtsam, you can use this bash command in your terminal:

      # Add GTSAM-PPA\nsudo add-apt-repository ppa:borglab/gtsam-release-4.1\n  sudo apt install libgtsam-dev libgtsam-unstable-dev\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lio-sam/#build-run","title":"Build & Run","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lio-sam/#1-installation","title":"1) Installation","text":"

    In order to build and use LIO-SAM, we will create a workspace for it:

        mkdir -p ~/lio-sam-ws/src\n    cd ~/lio-sam-ws/src\n    git clone -b ros2 https://github.com/TixiaoShan/LIO-SAM.git\n    cd ..\n    colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lio-sam/#2-settings","title":"2) Settings","text":"

    After building LIO-SAM, we need to record a ROS 2 bag file that includes the topics required by LIO-SAM. The required topics are described in LIO-SAM's config file, and an example record command is shown below the bag summary.

    ROS 2 Bag example for LIO-SAM with Robosense Helios and CLAP B7
    Files:             map_bag_13_09_0.db3\nBag size:          38.4 GiB\nStorage id:        sqlite3\nDuration:          3295.326s\nStart:             Sep 13 2023 16:40:23.165 (1694612423.165)\nEnd:               Sep 13 2023 17:35:18.492 (1694615718.492)\nMessages:          1627025\nTopic information: Topic: /sensing/gnss/clap/ros/imu | Type: sensor_msgs/msg/Imu | Count: 329535 | Serialization Format: cdr\nTopic: /sensing/gnss/clap/ros/odometry | Type: nav_msgs/msg/Odometry | Count: 329533 | Serialization Format: cdr\nTopic: /sensing/lidar/top/pointcloud_raw | Type: sensor_msgs/msg/PointCloud2 | Count: 32953 | Serialization Format: cdr\n
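
    For reference, a bag like the one above can be recorded with a command along these lines (the topic names are the ones used on tutorial_vehicle and the output name is a placeholder; adjust both to your own setup):

    ros2 bag record -o map_bag \\\n/sensing/gnss/clap/ros/imu \\\n/sensing/gnss/clap/ros/odometry \\\n/sensing/lidar/top/pointcloud_raw\n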

    Note: We set use_odometry to true in clap_b7_driver so that a GPS odometry topic is published from the navsatfix message.

    Please set the topics and sensor settings in lio_sam/config/params.yaml. Here are some example modifications for our tutorial_vehicle.

    • Topic names:
    -   pointCloudTopic: \"/points\"\n+   pointCloudTopic: \"/sensing/lidar/top/pointcloud_raw\"\n-   imuTopic: \"/imu/data\"\n+   imuTopic: \"/sensing/gnss/clap/ros/imu\"\n   odomTopic: \"odometry/imu\"\n-   gpsTopic: \"odometry/gpsz\"\n+   gpsTopic: \"/sensing/gnss/clap/ros/odometry\"\n

    Since we will use GPS information with Autoware, we need to enable the useImuHeadingInitialization parameter.

    • GPS settings:
    -   useImuHeadingInitialization: false\n+   useImuHeadingInitialization: true\n-   useGpsElevation: false\n+   useGpsElevation: true\n

    We will also update the sensor settings. Since Robosense lidars aren't officially supported, we will configure our 32-channel Robosense Helios 5515 lidar as a Velodyne:

    • Sensor settings:
    -   sensor: ouster\n+   sensor: velodyne\n-   N_SCAN: 64\n+   N_SCAN: 32\n-   Horizon_SCAN: 512\n+   Horizon_SCAN: 1800\n

    After that, we will update the extrinsic transformation between the Robosense lidar and the CLAP B7 GNSS/INS (IMU) system.

    • Extrinsic transformation:
    -   extrinsicTrans:  [ 0.0,  0.0,  0.0 ]\n+   extrinsicTrans:  [-0.91, 0.0, -1.71]\n-   extrinsicRot:    [-1.0,  0.0,  0.0,\n-                      0.0,  1.0,  0.0,\n-                      0.0,  0.0, -1.0 ]\n+   extrinsicRot:    [1.0,  0.0,  0.0,\n+                     0.0,  1.0,  0.0,\n+                     0.0,  0.0, 1.0 ]\n-   extrinsicRPY: [ 0.0,  1.0,  0.0,\n-                  -1.0,  0.0,  0.0,\n-                   0.0,  0.0,  1.0 ]\n+   extrinsicRPY: [ 1.0,  0.0,  0.0,\n+                   0.0,  1.0,  0.0,\n+                   0.0,  0.0,  1.0 ]\n

    Warning

    The mapping direction follows the direction of travel in the real world. If the LiDAR sensor is mounted backwards relative to the direction you are moving, then you need to change extrinsicRot as well. Otherwise the IMU will try to go in the wrong direction, which may cause problems.

    For example, in our Applanix POS LVX and Hesai Pandar XT32 setup, the IMU pointed in the direction of travel while the LiDAR was rotated 180 degrees around the Z-axis relative to the IMU. In other words, they were facing away from each other, so the tool needed an additional transformation for the IMU.

    • In that situation, the calibration parameters were changed as follows:
    -   extrinsicRot:    [-1.0,  0.0,  0.0,\n-                      0.0,  1.0,  0.0,\n-                      0.0,  0.0, -1.0 ]\n+   extrinsicRot:    [-1.0,  0.0,  0.0,\n+                     0.0,  -1.0,  0.0,\n+                     0.0,   0.0,  1.0 ]\n-   extrinsicRPY: [ 0.0,  1.0,  0.0,\n-                  -1.0,  0.0,  0.0,\n-                   0.0,  0.0,  1.0 ]\n+   extrinsicRPY: [ -1.0,  0.0,  0.0,\n+                    0.0, -1.0,  0.0,\n+                    0.0,  0.0,  1.0 ]\n
    • In the end, we got this transform visualization in RViz:

    Transform Visualization of Applanix POS LVX and Hesai Pandar XT32 in RViz

    Now, we are ready to create a map for Autoware.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lio-sam/#3-usage","title":"3) Usage","text":"

    Once you have set the configuration and created a bag file for LIO-SAM, you can launch LIO-SAM with:

    ros2 launch lio_sam run.launch.py\n

    An rviz2 window will open; then you can play your bag file:

    ros2 bag play <YOUR-BAG-FILE>\n

    When the mapping process is finished, you can save the map by calling this service:

    ros2 service call /lio_sam/save_map lio_sam/srv/SaveMap \"{resolution: 0.2, destination: <YOUR-MAP-DIRECTORY>}\"\n

    Here is a video demonstrating LIO-SAM mapping in our campus environment:

    The output map format is local UTM; we will convert the local UTM map to MGRS format for tutorial_vehicle. If you want to convert UTM to MGRS for Autoware, please follow the convert-utm-to-mgrs-map page.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lio-sam/#example-result","title":"Example Result","text":"Sample Map Output for our Campus Environment"},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lio-sam/#paper","title":"Paper","text":"

    Thank you for citing LIO-SAM (IROS-2020) if you use any of this code.

    @inproceedings{liosam2020shan,\n  title={LIO-SAM: Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping},\n  author={Shan, Tixiao and Englot, Brendan and Meyers, Drew and Wang, Wei and Ratti, Carlo and Rus Daniela},\n  booktitle={IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},\n  pages={5135-5142},\n  year={2020},\n  organization={IEEE}\n}\n

    Part of the code is adapted from LeGO-LOAM.

    @inproceedings{legoloam2018shan,\n  title={LeGO-LOAM: Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain},\n  author={Shan, Tixiao and Englot, Brendan},\n  booktitle={IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},\n  pages={4758-4765},\n  year={2018},\n  organization={IEEE}\n}\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/lio-sam/#acknowledgements","title":"Acknowledgements","text":"
    • LIO-SAM is based on LOAM (J. Zhang and S. Singh. LOAM: Lidar Odometry and Mapping in Real-time).
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/optimized-sc-f-loam/","title":"Optimized-SC-F-LOAM","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/optimized-sc-f-loam/#optimized-sc-f-loam","title":"Optimized-SC-F-LOAM","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/optimized-sc-f-loam/#what-is-optimized-sc-f-loam","title":"What is Optimized-SC-F-LOAM?","text":"
    • An improved version of F-LOAM that uses an adaptive threshold to further judge loop closure detection results, reducing false loop closure detections. It also uses feature point-based matching to calculate the constraints between a pair of loop closure frame point clouds, decreasing the time consumed in constructing loop frame constraints.
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/optimized-sc-f-loam/#repository-information","title":"Repository Information","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/optimized-sc-f-loam/#original-repository-link","title":"Original Repository link","text":"

    https://github.com/SlamCabbage/Optimized-SC-F-LOAM

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/optimized-sc-f-loam/#required-sensors","title":"Required Sensors","text":"
    • LIDAR [VLP-16, HDL-32, HDL-64]
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/optimized-sc-f-loam/#ros-compatibility","title":"ROS Compatibility","text":"
    • ROS 1
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/optimized-sc-f-loam/#dependencies","title":"Dependencies","text":"
    • ROS
    • PCL
    • GTSAM
    • Ceres Solver
    • For visualization purposes, this package uses the hector trajectory server; you may install it with
    sudo apt-get install ros-noetic-hector-trajectory-server\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/optimized-sc-f-loam/#build-run","title":"Build & Run","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/optimized-sc-f-loam/#1-build","title":"1) Build","text":"
    cd ~/catkin_ws/src\ngit clone https://github.com/SlamCabbage/Optimized-SC-F-LOAM.git\ncd ..\ncatkin_make\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/optimized-sc-f-loam/#2-create-message-file","title":"2) Create message file","text":"

    In this folder, ground truth information, optimized pose information, F-LOAM pose information, and time information are stored.

    mkdir -p ~/message/Scans\n\n# Change line 383 in laserLoopOptimizationNode.cpp to your own \"message\" folder path\n

    (Do not forget to rebuild your package)

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/optimized-sc-f-loam/#3-set-parameters","title":"3) Set parameters","text":"
    • Set LIDAR topic and LIDAR properties on 'sc_f_loam_mapping.launch'
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/optimized-sc-f-loam/#4-run","title":"4) Run","text":"
    source devel/setup.bash\nroslaunch optimized_sc_f_loam optimized_sc_f_loam_mapping.launch\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/optimized-sc-f-loam/#example-result","title":"Example Result","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/optimized-sc-f-loam/#results-on-kitti-sequence-00-and-sequence-05","title":"Results on KITTI Sequence 00 and Sequence 05","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/optimized-sc-f-loam/#comparison-of-trajectories-on-kitti-dataset","title":"Comparison of trajectories on KITTI dataset","text":"

    Test on KITTI sequences: you can download the sequence 00 and 05 datasets from the KITTI official website and convert them into bag files using the open source kitti2bag method (an example command is sketched below).

    00: 2011_10_03_drive_0027 000000 004540

    05: 2011_09_30_drive_0018 000000 002760

    See the link: https://github.com/ethz-asl/kitti_to_rosbag
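
    As an illustration, if you use the community kitti2bag converter (an assumption here; the exact command-line flags may differ between versions, so check its README), converting the raw synced recording behind sequence 00 could look roughly like this:

    # run inside the directory that contains the extracted 2011_10_03 raw data and calibration files\nkitti2bag -t 2011_10_03 -r 0027 raw_synced .\n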

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/optimized-sc-f-loam/#acknowledgements","title":"Acknowledgements","text":"

    Thanks to SC-A-LOAM (Scan Context: Egocentric spatial descriptor for place recognition within 3D point cloud map) and F-LOAM (F-LOAM: Fast LiDAR Odometry and Mapping).

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/optimized-sc-f-loam/#citation","title":"Citation","text":"
    @misc{https://doi.org/10.48550/arxiv.2204.04932,\n  doi = {10.48550/ARXIV.2204.04932},\n\n  url = {https://arxiv.org/abs/2204.04932},\n\n  author = {Liao, Lizhou and Fu, Chunyun and Feng, Binbin and Su, Tian},\n\n  keywords = {Robotics (cs.RO), FOS: Computer and information sciences, FOS: Computer and information sciences},\n\n  title = {Optimized SC-F-LOAM: Optimized Fast LiDAR Odometry and Mapping Using Scan Context},\n\n  publisher = {arXiv},\n\n  year = {2022},\n\n  copyright = {arXiv.org perpetual, non-exclusive license}\n}\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-a-loam/","title":"SC-A-LOAM","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-a-loam/#sc-a-loam","title":"SC-A-LOAM","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-a-loam/#what-is-sc-a-loam","title":"What is SC-A-LOAM?","text":"
    • A real-time LiDAR SLAM package that integrates A-LOAM and ScanContext.
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-a-loam/#repository-information","title":"Repository Information","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-a-loam/#original-repository-link","title":"Original Repository link","text":"

    https://github.com/gisbi-kim/SC-A-LOAM

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-a-loam/#required-sensors","title":"Required Sensors","text":"
    • LIDAR [VLP-16, HDL-32, HDL-64, Ouster OS1-64]
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-a-loam/#prerequisites-dependencies","title":"Prerequisites (dependencies)","text":"
    • ROS
    • GTSAM version 4.x.
    • If GTSAM is not installed, follow the steps below.

        wget -O ~/Downloads/gtsam.zip https://github.com/borglab/gtsam/archive/4.0.2.zip\n  cd ~/Downloads/ && unzip gtsam.zip -d ~/Downloads/\n  cd ~/Downloads/gtsam-4.0.2/\n  mkdir build && cd build\n  cmake -DGTSAM_BUILD_WITH_MARCH_NATIVE=OFF ..\n  sudo make install -j8\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-a-loam/#ros-compatibility","title":"ROS Compatibility","text":"
    • ROS 1
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-a-loam/#build-run","title":"Build & Run","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-a-loam/#1-build","title":"1) Build","text":"
    • First, install the dependencies mentioned above and then follow the lines below.

       mkdir -p ~/catkin_scaloam_ws/src\n cd ~/catkin_scaloam_ws/src\n git clone https://github.com/gisbi-kim/SC-A-LOAM.git\n cd ../\n catkin_make\n source ~/catkin_scaloam_ws/devel/setup.bash\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-a-loam/#2-set-parameters","title":"2) Set parameters","text":"
    • After downloading the repository, change the topic and sensor settings in the launch files.
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-a-loam/#scan-context-parameters","title":"Scan Context parameters","text":"
    • If you encounter ghosting errors or the loop is not closed, change the Scan Context parameters.
    • Adjust the Scan Context settings with the parameters in the marked area.
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-a-loam/#3-run","title":"3) Run","text":"
    roslaunch aloam_velodyne aloam_mulran.launch\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-a-loam/#4-saving-as-pcd-file","title":"4) Saving as PCD file","text":"
      rosrun pcl_ros pointcloud_to_pcd input:=/aft_pgo_map\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-a-loam/#example-results","title":"Example Results","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-a-loam/#riverside-01-mulran-dataset","title":"Riverside 01, MulRan dataset","text":"
    • The MulRan dataset provides lidar scans (Ouster OS1-64, horizontally mounted, 10Hz) and consumer level gps (u-blox EVK-7P, 4Hz) data.
    • For how to use (publish) the data, see https://github.com/irapkaist/file_player_mulran
    • Example videos on the Riverside 01 sequence:

      1. with consumer level GPS-based altitude stabilization: https://youtu.be/FwAVX5TVm04\n2. without the z stabilization: https://youtu.be/okML_zNadhY\n
    • example result:

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-a-loam/#kitti-05","title":"KITTI 05","text":"
    • For KITTI (HDL-64 sensor), run using the command

      roslaunch aloam_velodyne aloam_velodyne_HDL_64.launch # for KITTI dataset setting\n
    • To publish KITTI scans, you can use mini-kitti publisher, a simple python script: https://github.com/gisbi-kim/mini-kitti-publisher
    • example video (no GPS used here): https://youtu.be/hk3Xx8SKkv4
    • example result:

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-a-loam/#contact","title":"Contact","text":"
    • Maintainer: paulgkim@kaist.ac.kr
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-lego-loam/","title":"SC-LeGO-LOAM","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-lego-loam/#sc-lego-loam","title":"SC-LeGO-LOAM","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-lego-loam/#what-is-sc-lego-loam","title":"What is SC-LeGO-LOAM?","text":"
    • SC-LeGO-LOAM integrates LeGO-LOAM for lidar odometry with two different loop closure methods: Scan Context and radius search based loop closure. While Scan Context corrects large drifts, the radius search based method is good for fine stitching.
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-lego-loam/#repository-information","title":"Repository Information","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-lego-loam/#original-repository-link","title":"Original Repository link","text":"

    https://github.com/irapkaist/SC-LeGO-LOAM

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-lego-loam/#required-sensors","title":"Required Sensors","text":"
    • LIDAR [VLP-16, HDL-32E, VLS-128, Ouster OS1-16, Ouster OS1-64]
    • IMU [9-AXIS]
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-lego-loam/#ros-compatibility","title":"ROS Compatibility","text":"
    • ROS 1
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-lego-loam/#dependencies","title":"Dependencies","text":"
    • ROS
    • PCL
    • GTSAM
    wget -O ~/Downloads/gtsam.zip https://github.com/borglab/gtsam/archive/4.0.0-alpha2.zip\ncd ~/Downloads/ && unzip gtsam.zip -d ~/Downloads/\ncd ~/Downloads/gtsam-4.0.0-alpha2/\nmkdir build && cd build\ncmake ..\nsudo make install\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-lego-loam/#build-run","title":"Build & Run","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-lego-loam/#1-build","title":"1) Build","text":"
    cd ~/catkin_ws/src\ngit clone https://github.com/irapkaist/SC-LeGO-LOAM.git\ncd ..\ncatkin_make\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-lego-loam/#2-set-parameters","title":"2) Set parameters","text":"
    • Set the imu and lidar topics in include/utility.h
    • Set the lidar properties in include/utility.h
    • Set the Scan Context settings in include/Scancontext.h

    (Do not forget to rebuild after setting parameters.)
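
    A rebuild after editing these headers can reuse the workspace from the build step above:

    cd ~/catkin_ws\ncatkin_make\nsource devel/setup.bash\n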

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-lego-loam/#3-run","title":"3) Run","text":"
    source devel/setup.bash\nroslaunch lego_loam run.launch\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-lego-loam/#example-result","title":"Example Result","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-lego-loam/#other-examples","title":"Other Examples","text":"
    • Video 1: DCC (MulRan dataset)
    • Video 2: Riverside (MulRan dataset)
    • Video 3: KAIST (MulRan dataset)

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-lego-loam/#mulran-dataset","title":"MulRan dataset","text":"
    • If you want to reproduce the results shown in the videos above, you can download the MulRan dataset and use the ROS topic publishing tool.
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-lego-loam/#cite-sc-lego-loam","title":"Cite SC-LeGO-LOAM","text":"
    @INPROCEEDINGS { gkim-2018-iros,\n  author = {Kim, Giseop and Kim, Ayoung},\n  title = { Scan Context: Egocentric Spatial Descriptor for Place Recognition within {3D} Point Cloud Map },\n  booktitle = { Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems },\n  year = { 2018 },\n  month = { Oct. },\n  address = { Madrid }\n}\n

    and

    @inproceedings{legoloam2018,\n  title={LeGO-LOAM: Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain},\n  author={Shan, Tixiao and Englot, Brendan},\n  booktitle={IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},\n  pages={4758-4765},\n  year={2018},\n  organization={IEEE}\n}\n
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/open-source-slam/sc-lego-loam/#contact","title":"Contact","text":"
    • Maintainer: Giseop Kim (paulgkim@kaist.ac.kr)
    "},{"location":"how-to-guides/integrating-autoware/creating-maps/pointcloud-map-downsampling/","title":"Pointcloud map downsampling","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/pointcloud-map-downsampling/#pointcloud-map-downsampling","title":"Pointcloud map downsampling","text":""},{"location":"how-to-guides/integrating-autoware/creating-maps/pointcloud-map-downsampling/#overview","title":"Overview","text":"

    When the point cloud map you created is either too dense or too large (for example, exceeding 300 MB), you may want to downsample it for improved computational and memory efficiency. You can also consider using dynamic map loading with partial loading; please check the autoware_map_loader package for more information.

    In the tutorial_vehicle implementation we will use the whole map, so we will downsample it using CloudCompare.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/pointcloud-map-downsampling/#installing-cloudcompare","title":"Installing CloudCompare","text":"

    You can install it via snap:

    sudo snap install cloudcompare\n

    Please check the official page for other installation options.

    "},{"location":"how-to-guides/integrating-autoware/creating-maps/pointcloud-map-downsampling/#downsampling-a-pointcloud-map","title":"Downsampling a pointcloud map","text":"

    There are three subsampling methods in CloudCompare; we are using the Space method for subsampling, but you can use the other methods if you want.

    1. Open CloudCompare and drag your pointcloud into it, then select your pointcloud map by clicking on it in the DB Tree panel.
    2. Then click the subsample button on the top panel.

    CloudCompare
    1. Select your subsample method; we will use space for tutorial_vehicle.
    2. Then select the options. For example, we need to determine the minimum space between points. (Please be careful in this section; the appropriate value depends on your map size, computer performance, etc.) We will set this value to 0.2 for tutorial_vehicle's map.

    Pointcloud subsampling
    • After the subsampling process is finished, select the subsampled pointcloud in the DB Tree panel as well.

    Select your downsampled pointcloud

    Now you can save your downsampled pointcloud with Ctrl + S, or by clicking the save button in the File menu. This pointcloud can then be used by Autoware.
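
    If you prefer a scriptable workflow, CloudCompare also offers a command-line mode; a minimal sketch of spatial subsampling (the flag names follow CloudCompare's command-line documentation and the file name is a placeholder; verify them against your installed version):

    # keep points so that no two remaining points are closer than 0.2 m, then save the result\nCloudCompare -SILENT -O pointcloud_map.pcd -SS SPATIAL 0.2 -SAVE_CLOUDS\n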

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/","title":"Creating vehicle and sensor models","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/#creating-vehicle-and-sensor-models","title":"Creating vehicle and sensor models","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/#overview","title":"Overview","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/#sensor-model","title":"Sensor Model","text":"
    • Purpose: The sensor model includes the calibration (transformation) and launch files of the sensors used in the autonomous vehicle. This includes various sensors like LiDARs, cameras, radars, IMUs (Inertial Measurement Units), GPS units, etc.
    • Importance: Accurate sensor modeling is essential for perception tasks. Precise calibration values help understand the environment by processing sensor data, such as detecting objects, estimating distances, and creating a 3D representation of the surroundings.
    • Usage: The sensor model is utilized in Autoware for launching sensors, configuring their pipeline, and describing calibration values.
    • The sensor model (sensor kit) consists of the following three packages:
      • common_sensor_launch
      • <YOUR_VEHICLE_NAME>_sensor_kit_description
      • <YOUR_VEHICLE_NAME>_sensor_kit_launch

    Please refer to the creating sensor model page for creating your individual sensor model.

    For reference, here is the folder structure for the sample_sensor_kit_launch package in Autoware:

    sample_sensor_kit_launch/\n\u251c\u2500 common_sensor_launch/\n\u251c\u2500 sample_sensor_kit_description/\n\u2514\u2500 sample_sensor_kit_launch/\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/#vehicle-model","title":"Vehicle Model","text":"
    • Purpose: The vehicle model includes individual vehicle specifications with dimensions, a 3D model of the vehicle (in .fbx or .dae format), etc.
    • Importance: An accurate vehicle model is crucial for motion planning and control.
    • Usage: The vehicle model is employed to provide vehicle information to Autoware, including the 3D model of the vehicle.
    • The vehicle model comprises the following two packages:
      • <YOUR_VEHICLE_NAME>_vehicle_description
      • <YOUR_VEHICLE_NAME>_vehicle_launch

    Please consult the creating vehicle model page for creating your individual vehicle model.

    As a reference, here is the folder structure for the sample_vehicle_launch package in Autoware:

    sample_vehicle_launch/\n\u251c\u2500 sample_vehicle_description/\n\u2514\u2500 sample_vehicle_launch/\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/","title":"Calibrating your sensors","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/#calibrating-your-sensors","title":"Calibrating your sensors","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/#overview","title":"Overview","text":"

    Autoware expects multiple sensors to be attached to the vehicle as input to the perception, localization, and planning stacks. Autoware uses fusion techniques to combine information from multiple sensors. For this to work effectively, all sensors must be calibrated properly to align their coordinate systems, and their positions must be defined using either urdf files (as in sample_sensor_kit) or tf launch files. In this documentation, we will explain TIER IV's CalibrationTools repository for the calibration process. Please look at the Starting with TIER IV's CalibrationTools page for installation and usage of this tool.

    If you want to look at other calibration packages and methods, you can check out the following packages.

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/#other-packages-you-can-check-out","title":"Other packages you can check out","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/#camera-calibration","title":"Camera calibration","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/#intrinsic-calibration","title":"Intrinsic Calibration","text":"
    • Navigation2 provides a good tutorial for camera intrinsic calibration.
    • AutoCore provides a light-weight tool.
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/#lidar-lidar-calibration","title":"Lidar-lidar calibration","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/#lidar-lidar-calibration-tool-from-autocore","title":"Lidar-Lidar Calibration tool from Autocore","text":"

    LL-Calib on GitHub, provided by AutoCore, is a lightweight toolkit for online/offline 3D LiDAR to LiDAR calibration. It is based on local mapping and the \"GICP\" method to derive the relation between the main and sub lidar. Information on how to use the tool, troubleshooting tips, and example rosbags can be found at the above link.

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/#lidar-camera-calibration","title":"Lidar-camera calibration","text":"

    Developed by MathWorks, the Lidar Camera Calibrator app enables you to interactively estimate the rigid transformation between a lidar sensor and a camera.

    https://ww2.mathworks.cn/help/lidar/ug/get-started-lidar-camera-calibrator.html

    SensorsCalibration toolbox v0.1: another open-source method for lidar-camera calibration. This is a project for LiDAR-to-camera calibration, including both automatic and manual calibration.

    https://github.com/PJLab-ADG/SensorsCalibration/blob/master/lidar2camera/README.md

    AutoCore also provides an easy-to-use, lightweight toolkit for lidar-camera calibration. A fully automatic calibration can be completed in just three steps.

    https://github.com/autocore-ai/calibration_tools/tree/main/lidar-cam-calib-related

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/#lidar-imu-calibration","title":"Lidar-IMU calibration","text":"

    Developed by APRIL Lab at Zhejiang University in China, the LI-Calib calibration tool is a toolkit for calibrating the 6DoF rigid transformation and the time offset between a 3D LiDAR and an IMU, based on continuous-time batch optimization. IMU-based cost and LiDAR point-to-surfel (surfel = surface element) distance are minimized jointly, which renders the calibration problem well-constrained in general scenarios.

    AutoCore has forked the original LI-Calib tool and rewritten the lidar input for more general usage. Information on how to use the tool, troubleshooting tips and example rosbags can be found at the LI-Calib fork on GitHub.

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/calibration-tools/","title":"Starting with TIER IV's CalibrationTools","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/calibration-tools/#starting-with-tier-ivs-calibrationtools","title":"Starting with TIER IV's CalibrationTools","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/calibration-tools/#overview","title":"Overview","text":"

    Autoware expects multiple sensors to be attached to the vehicle as inputs to the perception, localization, and planning stacks. These sensors must be calibrated correctly, and their positions must be defined in the sensor_kit_description and individual_params packages. In this tutorial, we will use TIER IV's CalibrationTools repository for the calibration.

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/calibration-tools/#setting-of-sensor_kit_base_link-position-with-respect-to-the-base_link","title":"Setting of sensor_kit_base_link position with respect to the base_link","text":"

    In the previous section (creating the vehicle and sensor model), we mentioned sensors_calibration.yaml. This file stores the position and orientation of sensor_kit_base_link (child frame) with respect to base_link (parent frame). We need to update this relative transformation (all values are initially set to zero when the file is created) using the CAD data of our vehicle.

    Our tutorial_vehicle base_link to sensor_kit_base_link transformation.

    So, our sensors_calibration.yaml file for our tutorial_vehicle should be like this:

    base_link:\n  sensor_kit_base_link:\n    x: 1.600000 # meter\n    y: 0.0\n    z: 1.421595 # 1.151595m + 0.270m\n    roll: 0.0\n    pitch: 0.0\n    yaw: 0.0\n

    You need to update this transformation value with respect to the sensor_kit_base_link frame. You can also use CAD values for the GNSS/INS and IMU positions in the sensor_kit_calibration.yaml file. (Please don't forget to update sensor_kit_calibration.yaml in both the sensor_kit_launch and individual_params packages.)
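    For reference, a minimal sketch of such sensor_kit_calibration.yaml entries is shown below. The frame names gnss_link and imu_link and all numeric values are placeholders for illustration; use your own frame names and CAD-derived values:

    sensor_kit_base_link:\n  gnss_link:\n    x: 0.0 # meter, from CAD\n    y: 0.0\n    z: 0.0\n    roll: 0.0\n    pitch: 0.0\n    yaw: 0.0\n  imu_link:\n    x: 0.0 # meter, from CAD\n    y: 0.0\n    z: 0.0\n    roll: 0.0\n    pitch: 0.0\n    yaw: 0.0\n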

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/calibration-tools/#installing-tier-ivs-calibrationtools-repositories-on-autoware","title":"Installing TIER IV's CalibrationTools repositories on autoware","text":"

    After completing the previous steps (creating your own Autoware repository, creating vehicle and sensor models, etc.), we are ready to calibrate the sensors whose pipelines were prepared in the creating the sensor model section.

    First, we will clone the CalibrationTools repositories into our own Autoware workspace.

    cd <YOUR-OWN-AUTOWARE-DIRECTORY> # for example: cd autoware.tutorial_vehicle\nwget https://raw.githubusercontent.com/tier4/CalibrationTools/tier4/universe/calibration_tools.repos\nvcs import src < calibration_tools.repos\nrosdep install -y --from-paths src --ignore-src --rosdistro $ROS_DISTRO\n

    Then build all packages after all the necessary changes have been made to the sensor model and vehicle model:

    colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/calibration-tools/#usage-of-calibrationtools","title":"Usage of CalibrationTools","text":"

    The CalibrationTools repository contains several packages for calibrating different sensor pairs, such as lidar-lidar, camera-lidar, ground-lidar, etc. In order to calibrate our sensors, we will modify the launch files of the extrinsic_calibration_manager package for our sensor kit.

    For tutorial_vehicle, the completed launch files created by following the tutorial sections can be found here.

    • Manual Calibration
    • Lidar-Lidar Calibration
      • Ground Plane-Lidar Calibration
    • Intrinsic Camera Calibration
    • Lidar-Camera Calibration
    • Lidar-Imu Calibration
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/extrinsic-manual-calibration/","title":"Manual calibration for all sensors","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/extrinsic-manual-calibration/#manual-calibration-for-all-sensors","title":"Manual calibration for all sensors","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/extrinsic-manual-calibration/#overview","title":"Overview","text":"

    In this section, we will use the Extrinsic Manual Calibration tool for the extrinsic calibration of our sensors. This process will not produce calibration results accurate enough for final use, but it provides an initial calibration for the other tools. For example, in the lidar-lidar or camera-lidar calibration phases, we will need this initial calibration to obtain accurate and successful calibration results.

    We need a sample bag file for the calibration process which includes raw lidar topics and camera topics. The following shows an example of a bag file used for calibration:

    ROS 2 Bag example of our calibration process
    Files:             rosbag2_2023_09_06-13_43_54_0.db3\nBag size:          18.3 GiB\nStorage id:        sqlite3\nDuration:          169.12s\nStart:             Sep  6 2023 13:43:54.902 (1693997034.902)\nEnd:               Sep  6 2023 13:46:43.914 (1693997203.914)\nMessages:          8504\nTopic information: Topic: /sensing/lidar/top/pointcloud_raw | Type: sensor_msgs/msg/PointCloud2 | Count: 1691 | Serialization Format: cdr\n                   Topic: /sensing/lidar/front/pointcloud_raw | Type: sensor_msgs/msg/PointCloud2 | Count: 1691 | Serialization Format: cdr\n                   Topic: /sensing/camera/camera0/image_rect | Type: sensor_msgs/msg/Image | Count: 2561 | Serialization Format: cdr\n                   Topic: /sensing/camera/camera0/camera_info | Type: sensor_msgs/msg/CameraInfo | Count: 2561 | Serialization Format: cdr\n
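    If you have not recorded such a bag yet, a minimal recording sketch is shown below, assuming the tutorial_vehicle topic names listed in the example above (replace them with your own topics and choose your own output name):

    ros2 bag record -o manual_calibration_bag \\\n  /sensing/lidar/top/pointcloud_raw \\\n  /sensing/lidar/front/pointcloud_raw \\\n  /sensing/camera/camera0/image_rect \\\n  /sensing/camera/camera0/camera_info\n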
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/extrinsic-manual-calibration/#extrinsic-manual-based-calibration","title":"Extrinsic Manual-Based Calibration","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/extrinsic-manual-calibration/#creating-launch-files","title":"Creating launch files","text":"

    First of all, we will create the launch files for the extrinsic_calibration_manager package:

    cd <YOUR-OWN-AUTOWARE-DIRECTORY>/src/autoware/calibration_tools/sensor\ncd extrinsic_calibration_manager/launch\nmkdir <YOUR-OWN-SENSOR-KIT-NAME> # i.e. for our guide, it will be mkdir tutorial_vehicle_sensor_kit\ncd <YOUR-OWN-SENSOR-KIT-NAME> # i.e. for our guide, it will be cd tutorial_vehicle_sensor_kit\ntouch manual.launch.xml manual_sensor_kit.launch.xml manual_sensors.launch.xml\n

    We will create manual.launch.xml, manual_sensors.launch.xml and manual_sensor_kit.launch.xml by modifying the corresponding files from TIER IV's sample sensor kit aip_x1. So, you should copy the contents of these three files from aip_x1 into the files you just created, for example as shown below.
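    A minimal copy sketch, assuming the aip_x1 launch files live in the sibling aip_x1 directory under extrinsic_calibration_manager/launch and that you are still inside your newly created sensor kit directory:

    cp ../aip_x1/manual.launch.xml ../aip_x1/manual_sensors.launch.xml ../aip_x1/manual_sensor_kit.launch.xml .\n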

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/extrinsic-manual-calibration/#modifying-launch-files-according-to-your-sensor-kit","title":"Modifying launch files according to your sensor kit","text":"

    So, we can start modifying manual.launch.xml; please open this file in a text editor of your choice (VS Code, gedit, etc.).

    (Optional) Let's start by adding the vehicle_id and sensor model names. (The values themselves are not important, since these parameters will be overridden by launch arguments.)

      <arg name=\"vehicle_id\" default=\"default\"/>\n\n  <let name=\"sensor_model\" value=\"aip_x1\"/>\n+ <?xml version=\"1.0\" encoding=\"UTF-8\"?>\n+ <launch>\n-   <arg name=\"vehicle_id\" default=\"default\"/>\n+   <arg name=\"vehicle_id\" default=\"<YOUR_VEHICLE_ID>\"/>\n+\n-   <arg name=\"sensor_model\" default=\"aip_x1\"/>\n+   <let name=\"sensor_model\" value=\"<YOUR_SENSOR_KIT_NAME>\"/>\n

    The final version of the file (manual.launch.xml) for tutorial_vehicle should be like this:

    Sample manual.launch.xml file for tutorial vehicle
    <?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<launch>\n<arg name=\"vehicle_id\" default=\"tutorial_vehicle\"/>\n\n<let name=\"sensor_model\" value=\"tutorial_vehicle_sensor_kit\"/>\n\n<group>\n<push-ros-namespace namespace=\"sensor_kit\"/>\n<include file=\"$(find-pkg-share extrinsic_calibration_manager)/launch/$(var sensor_model)/manual_sensor_kit.launch.xml\">\n<arg name=\"vehicle_id\" value=\"$(var vehicle_id)\"/>\n</include>\n</group>\n\n<group>\n<push-ros-namespace namespace=\"sensors\"/>\n<include file=\"$(find-pkg-share extrinsic_calibration_manager)/launch/$(var sensor_model)/manual_sensors.launch.xml\">\n<arg name=\"vehicle_id\" value=\"$(var vehicle_id)\"/>\n</include>\n</group>\n\n</launch>\n

    After completing the manual.launch.xml file, we are ready to implement manual_sensor_kit.launch.xml for our own sensor model's sensor_kit_calibration.yaml:

    Optionally, you can modify sensor_model and vehicle_id in this xml snippet as well:

    ...\n  <arg name=\"vehicle_id\" default=\"default\"/>\n\n  <let name=\"sensor_model\" value=\"aip_x1\"/>\n+ <?xml version=\"1.0\" encoding=\"UTF-8\"?>\n+ <launch>\n-   <arg name=\"vehicle_id\" default=\"default\"/>\n+   <arg name=\"vehicle_id\" default=\"<YOUR_VEHICLE_ID>\"/>\n+\n-   <arg name=\"sensor_model\" default=\"aip_x1\"/>\n+   <let name=\"sensor_model\" value=\"<YOUR_SENSOR_KIT_NAME>\"/>\n...\n

    Then, we will add all of our sensor frames to extrinsic_calibration_manager as child frames:

       <!-- extrinsic_calibration_manager -->\n-  <node pkg=\"extrinsic_calibration_manager\" exec=\"extrinsic_calibration_manager\" name=\"extrinsic_calibration_manager\" output=\"screen\">\n-    <param name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n-    <param name=\"child_frames\" value=\"\n-    [velodyne_top_base_link,\n-    livox_front_left_base_link,\n-    livox_front_center_base_link,\n-    livox_front_right_base_link]\"/>\n-  </node>\n+   <node pkg=\"extrinsic_calibration_manager\" exec=\"extrinsic_calibration_manager\" name=\"extrinsic_calibration_manager\" output=\"screen\">\n+     <param name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n+     <!-- add your sensor frames here -->\n+     <param name=\"child_frames\" value=\"\n+     [<YOUR_SENSOR_BASE_LINK>,\n+     YOUR_SENSOR_BASE_LINK,\n+     YOUR_SENSOR_BASE_LINK,\n+     YOUR_SENSOR_BASE_LINK\n+     ...]\"/>\n+   </node>\n

    For tutorial_vehicle, there are four sensors (two lidars, one camera, one GNSS/INS), so it will look like this:

    i.e., extrinsic_calibration_manager child_frames for tutorial_vehicle
    +   <!-- extrinsic_calibration_manager -->\n+   <node pkg=\"extrinsic_calibration_manager\" exec=\"extrinsic_calibration_manager\" name=\"extrinsic_calibration_manager\" output=\"screen\">\n+     <param name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n+     <!-- add your sensor frames here -->\n+     <param name=\"child_frames\" value=\"\n+     [rs_helios_top_base_link,\n+     rs_bpearl_front_base_link,\n+     camera0/camera_link,\n+     gnss_link]\"/>\n+   </node>\n

    Lastly, we will launch a manual calibrator for each sensor frame. Please update the namespace (ns) and child_frame arguments in each calibrator.launch.xml include:

    -  <include file=\"$(find-pkg-share extrinsic_manual_calibrator)/launch/calibrator.launch.xml\">\n-    <arg name=\"ns\" value=\"$(var parent_frame)/velodyne_top_base_link\"/>\n-    <arg name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n-    <arg name=\"child_frame\" value=\"velodyne_top_base_link\"/>\n-  </include>\n+  <!-- extrinsic_manual_calibrator -->\n+  <include file=\"$(find-pkg-share extrinsic_manual_calibrator)/launch/calibrator.launch.xml\">\n+    <arg name=\"ns\" value=\"$(var parent_frame)/<YOUR_SENSOR_BASE_LINK>\"/>\n+    <arg name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n+    <arg name=\"child_frame\" value=\"<YOUR_SENSOR_BASE_LINK>\"/>\n+  </include>\n+\n+  ...\n+  ...\n+  ...\n+  ...\n+  ...\n+\n
    i.e., calibrator.launch.xml for each tutorial_vehicle's sensor kit
    +  <!-- extrinsic_manual_calibrator -->\n+  <include file=\"$(find-pkg-share extrinsic_manual_calibrator)/launch/calibrator.launch.xml\">\n+    <arg name=\"ns\" value=\"$(var parent_frame)/rs_helios_top_base_link\"/>\n+    <arg name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n+    <arg name=\"child_frame\" value=\"rs_helios_top_base_link\"/>\n+  </include>\n+\n+  <include file=\"$(find-pkg-share extrinsic_manual_calibrator)/launch/calibrator.launch.xml\">\n+    <arg name=\"ns\" value=\"$(var parent_frame)/rs_bpearl_front_base_link\"/>\n+    <arg name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n+    <arg name=\"child_frame\" value=\"rs_bpearl_front_base_link\"/>\n+  </include>\n+\n+  <include file=\"$(find-pkg-share extrinsic_manual_calibrator)/launch/calibrator.launch.xml\">\n+    <arg name=\"ns\" value=\"$(var parent_frame)/camera0/camera_link\"/>\n+    <arg name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n+    <arg name=\"child_frame\" value=\"camera0/camera_link\"/>\n+  </include>\n+\n+  <include file=\"$(find-pkg-share extrinsic_manual_calibrator)/launch/calibrator.launch.xml\">\n+    <arg name=\"ns\" value=\"$(var parent_frame)/gnss_link\"/>\n+    <arg name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n+    <arg name=\"child_frame\" value=\"gnss_link\"/>\n+  </include>\n+ </launch>\n

    The final version of the manual_sensor_kit.launch.xml for tutorial_vehicle should be like this:

    Sample manual_sensor_kit.launch.xml for tutorial_vehicle
    <?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<launch>\n<arg name=\"vehicle_id\" default=\"tutorial_vehicle\"/> <!-- You can update with your own vehicle_id -->\n\n<let name=\"sensor_model\" value=\"tutorial_vehicle_sensor_kit\"/> <!-- You can update with your own sensor model -->\n<let name=\"parent_frame\" value=\"sensor_kit_base_link\"/>\n\n<!-- extrinsic_calibration_client -->\n<arg name=\"src_yaml\" default=\"$(find-pkg-share individual_params)/config/$(var vehicle_id)/$(var sensor_model)/sensor_kit_calibration.yaml\"/>\n<arg name=\"dst_yaml\" default=\"$(env HOME)/sensor_kit_calibration.yaml\"/>\n\n<node pkg=\"extrinsic_calibration_client\" exec=\"extrinsic_calibration_client\" name=\"extrinsic_calibration_client\" output=\"screen\">\n<param name=\"src_path\" value=\"$(var src_yaml)\"/>\n<param name=\"dst_path\" value=\"$(var dst_yaml)\"/>\n</node>\n\n<!-- extrinsic_calibration_manager -->\n<node pkg=\"extrinsic_calibration_manager\" exec=\"extrinsic_calibration_manager\" name=\"extrinsic_calibration_manager\" output=\"screen\">\n<param name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n<!-- Please Update with your own sensor frames -->\n<param name=\"child_frames\" value=\"\n    [rs_helios_top_base_link,\n    rs_bpearl_front_base_link,\n    camera0/camera_link,\n    gnss_link]\"/>\n</node>\n\n<!-- extrinsic_manual_calibrator -->\n<!-- Please create a launch for all sensors that you used. -->\n<include file=\"$(find-pkg-share extrinsic_manual_calibrator)/launch/calibrator.launch.xml\">\n<arg name=\"ns\" value=\"$(var parent_frame)/rs_helios_top_base_link\"/>\n<arg name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n<arg name=\"child_frame\" value=\"rs_helios_top_base_link\"/>\n</include>\n\n<include file=\"$(find-pkg-share extrinsic_manual_calibrator)/launch/calibrator.launch.xml\">\n<arg name=\"ns\" value=\"$(var parent_frame)/rs_bpearl_front_base_link\"/>\n<arg name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n<arg name=\"child_frame\" value=\"rs_bpearl_front_base_link\"/>\n</include>\n\n<include file=\"$(find-pkg-share extrinsic_manual_calibrator)/launch/calibrator.launch.xml\">\n<arg name=\"ns\" value=\"$(var parent_frame)/camera0/camera_link\"/>\n<arg name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n<arg name=\"child_frame\" value=\"camera0/camera_link\"/>\n</include>\n\n<include file=\"$(find-pkg-share extrinsic_manual_calibrator)/launch/calibrator.launch.xml\">\n<arg name=\"ns\" value=\"$(var parent_frame)/gnss_link\"/>\n<arg name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n<arg name=\"child_frame\" value=\"gnss_link\"/>\n</include>\n</launch>\n

    You can update the manual_sensors.launch.xml file according to your modified sensors_calibration.yaml file. Since we will not calibrate any sensor directly with respect to base_link in tutorial_vehicle, we will not change this file.

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/extrinsic-manual-calibration/#calibrating-sensors-with-extrinsic-manual-calibrator","title":"Calibrating sensors with extrinsic manual calibrator","text":"

    After completing the manual.launch.xml and manual_sensor_kit.launch.xml files for the extrinsic_calibration_manager package, we need to build the package:

    colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release --packages-select extrinsic_calibration_manager\n
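    If you launch from a new terminal (or after rebuilding), remember to source the workspace first, for example:

    cd <YOUR-OWN-AUTOWARE-DIRECTORY>\nsource install/setup.bash\n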

    Now we are ready to launch and use the manual calibrator:

    ros2 launch extrinsic_calibration_manager calibration.launch.xml mode:=manual sensor_model:=<OWN-SENSOR-KIT> vehicle_model:=<OWN-VEHICLE-MODEL> vehicle_id:=<VEHICLE-ID>\n

    For tutorial vehicle:

    ros2 launch extrinsic_calibration_manager calibration.launch.xml mode:=manual sensor_model:=tutorial_vehicle_sensor_kit vehicle_model:=tutorial_vehicle vehicle_id:=tutorial_vehicle\n

    Then play the ROS 2 bag file:

    ros2 bag play <rosbag_path> --clock -l -r 0.2 \\\n--remap /tf:=/null/tf /tf_static:=/null/tf_static # if tf is recorded\n

    An rqt_reconfigure window will appear; we will update the calibration values by hand according to the sensor outputs shown in rviz2.

    • Press the Refresh button, then press the Expand All button. The frames for tutorial_vehicle should look like this:

    • Please type the target frame name in the Filter area (e.g., front, helios, etc.) and select the tunable_static_tf_broadcaster_node; you can then adjust the tf_x, tf_y, tf_z, tf_roll, tf_pitch and tf_yaw values in the RQT panel (a sketch for verifying the broadcast transform follows this list).
    • Once manual adjustment is finished, you can save your calibration results with this command:
    ros2 topic pub /done std_msgs/Bool \"data: true\"\n
    • Then you can check the output file in $HOME/*.yaml.
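    While adjusting, you may want to verify the transform that is actually being broadcast. A minimal verification sketch, assuming the tutorial_vehicle frame names (use your own parent and child frames):

    ros2 run tf2_ros tf2_echo sensor_kit_base_link rs_helios_top_base_link\n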

    Warning

    This initial calibration is important before using the other calibration tools; we will rely on it in the lidar-lidar and camera-lidar calibration steps. It is difficult to calibrate two sensors into exactly the same frame at this stage, so an approximate alignment between the sensor pairs (it does not have to be perfect) is sufficient.

    Here is a video demonstrating the manual calibration process on tutorial_vehicle:

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/ground-lidar-calibration/","title":"Ground-Lidar calibration","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/ground-lidar-calibration/#ground-lidar-calibration","title":"Ground-Lidar calibration","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/ground-lidar-calibration/#overview","title":"Overview","text":"

    The ground-lidar calibration method operates under the assumption that the area surrounding the vehicle can be represented as a flat surface, so you must find as wide and flat a surface as possible for the ROS 2 bag recording. The method then modifies the calibration transformation in a way that aligns the points corresponding to the ground within the point cloud with the XY plane of base_link. This means that only the z, roll, and pitch values of the tf undergo calibration, while the remaining x, y, and yaw values must be calibrated using other methods, such as manual adjustment or mapping-based lidar-lidar calibration.

    You need to apply this calibration method to each lidar separately, so your bag should contain the topics of all lidars to be calibrated.

    We need a sample bag file for the ground-lidar calibration process which includes raw lidar topics.

    ROS 2 Bag example of our ground-based calibration process for tutorial_vehicle
    Files:             rosbag2_2023_09_05-11_23_50_0.db3\nBag size:          3.8 GiB\nStorage id:        sqlite3\nDuration:          112.702s\nStart:             Sep  5 2023 11:23:51.105 (1693902231.105)\nEnd:               Sep  5 2023 11:25:43.808 (1693902343.808)\nMessages:          2256\nTopic information: Topic: /sensing/lidar/front/pointcloud_raw | Type: sensor_msgs/msg/PointCloud2 | Count: 1128 | Serialization Format: cdr\n                   Topic: /sensing/lidar/top/pointcloud_raw | Type: sensor_msgs/msg/PointCloud2 | Count: 1128 | Serialization Format: cdr\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/ground-lidar-calibration/#ground-lidar-calibration_1","title":"Ground-lidar calibration","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/ground-lidar-calibration/#creating-launch-files","title":"Creating launch files","text":"

    We will start by creating launch files for our own vehicle, following the same process as in the previous sections:

    cd <YOUR-OWN-AUTOWARE-DIRECTORY>/src/autoware/calibration_tools/sensor\ncd extrinsic_calibration_manager/launch\ncd <YOUR-OWN-SENSOR-KIT-NAME> # i.e. for our guide, it will be cd tutorial_vehicle_sensor_kit, which was created in manual calibration\ntouch ground_plane.launch.xml ground_plane_sensor_kit.launch.xml\n

    We will create ground_plane.launch.xml and ground_plane_sensor_kit.launch.xml by modifying the corresponding files from TIER IV's sample sensor kit aip_x1, so copy the contents of these two files from aip_x1 into the files you just created.

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/ground-lidar-calibration/#modifying-launch-files-according-to-your-sensor-kit","title":"Modifying launch files according to your sensor kit","text":"

    (Optional) Let's start by adding the vehicle_id and sensor model names. (The values themselves are not important, since these parameters will be overridden by launch arguments.)

      <arg name=\"vehicle_id\" default=\"default\"/>\n\n  <let name=\"sensor_model\" value=\"aip_x1\"/>\n+ <?xml version=\"1.0\" encoding=\"UTF-8\"?>\n+ <launch>\n-   <arg name=\"vehicle_id\" default=\"default\"/>\n+   <arg name=\"vehicle_id\" default=\"<YOUR_VEHICLE_ID>\"/>\n+\n-   <arg name=\"sensor_model\" default=\"aip_x1\"/>\n+   <let name=\"sensor_model\" value=\"<YOUR_SENSOR_KIT_NAME>\"/>\n

    The final version of the file (ground_plane.launch.xml) for tutorial_vehicle should be like this:

    Sample ground_plane.launch.xml file for tutorial vehicle
    <launch>\n<arg name=\"vehicle_id\" default=\"tutorial_vehicle\"/>\n<let name=\"sensor_model\" value=\"tutorial_vehicle_sensor_kit\"/>\n\n<group>\n<push-ros-namespace namespace=\"sensor_kit\"/>\n<include file=\"$(find-pkg-share extrinsic_calibration_manager)/launch/$(var sensor_model)/ground_plane_sensor_kit.launch.xml\">\n<arg name=\"vehicle_id\" value=\"$(var vehicle_id)\"/>\n</include>\n</group>\n</launch>\n

    After completing the ground_plane.launch.xml file, we are ready to implement ground_plane_sensor_kit.launch.xml for our own sensor model.

    Optionally (don't forget, these parameters will be overridden by launch arguments), you can modify sensor_model and vehicle_id in the same way as in ground_plane.launch.xml over this xml snippet. (You can also change the rviz_profile path after saving an rviz config, as shown in the video at the end of this page.)

    + <?xml version=\"1.0\" encoding=\"UTF-8\"?>\n+ <launch>\n-   <arg name=\"vehicle_id\" default=\"default\"/>\n+   <arg name=\"vehicle_id\" default=\"<YOUR_VEHICLE_ID>\"/>\n-   <arg name=\"sensor_model\" default=\"aip_x1\"/>\n+   <let name=\"sensor_model\" value=\"<YOUR_SENSOR_KIT_NAME>\"/>\n   <let name=\"base_frame\" value=\"base_link\"/>\n    <let name=\"parent_frame\" value=\"sensor_kit_base_link\"/>\n

    If you have previously saved an rviz config file for the ground-lidar calibration process:

    - <let name=\"rviz_profile\" value=\"$(find-pkg-share extrinsic_ground_plane_calibrator)/rviz/velodyne_top.rviz\"/>\n+ <let name=\"rviz_profile\" value=\"$(find-pkg-share extrinsic_ground_plane_calibrator)/rviz/<YOUR-RVIZ-CONFIG>.rviz\"/>\n

    Then, we will add all of our sensor frames to extrinsic_calibration_manager as child frames:

        <!-- extrinsic_calibration_manager -->\n-   <node pkg=\"extrinsic_calibration_manager\" exec=\"extrinsic_calibration_manager\" name=\"extrinsic_calibration_manager\" output=\"screen\">\n-     <param name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n-     <param name=\"child_frames\" value=\"\n-     [velodyne_top_base_link,\n-     livox_front_left_base_link,\n-     livox_front_center_base_link,\n-     livox_front_right_base_link]\"/>\n-   </node>\n+   <node pkg=\"extrinsic_calibration_manager\" exec=\"extrinsic_calibration_manager\" name=\"extrinsic_calibration_manager\" output=\"screen\">\n+     <param name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n+     <!-- add your sensor frames here -->\n+     <param name=\"child_frames\" value=\"\n+     [<YOUR_SENSOR_BASE_LINK>,\n+     YOUR_SENSOR_BASE_LINK,\n+     YOUR_SENSOR_BASE_LINK,\n+     YOUR_SENSOR_BASE_LINK\n+     ...]\"/>\n+   </node>\n

    For tutorial_vehicle, there are two lidar sensors (rs_helios_top and rs_bpearl_front), so it will look like this:

    i.e., extrinsic_calibration_manager child_frames for tutorial_vehicle
    +   <!-- extrinsic_calibration_manager -->\n+   <node pkg=\"extrinsic_calibration_manager\" exec=\"extrinsic_calibration_manager\" name=\"extrinsic_calibration_manager\" output=\"screen\">\n+     <param name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n+     <!-- add your sensor frames here -->\n+     <param name=\"child_frames\" value=\"\n+     [rs_helios_top_base_link,\n+     rs_bpearl_front_base_link]\"/>\n+   </node>\n

    After that, we will add our lidar sensor configurations to the ground-based calibrator. To do that, we will add these lines to our ground_plane_sensor_kit.launch.xml file:

    -  <group>\n-    <include file=\"$(find-pkg-share extrinsic_ground_plane_calibrator)/launch/calibrator.launch.xml\">\n-      <arg name=\"ns\" value=\"$(var parent_frame)/velodyne_top_base_link\"/>\n-      <arg name=\"base_frame\" value=\"$(var base_frame)\"/>\n-      <arg name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n-      <arg name=\"child_frame\" value=\"velodyne_top_base_link\"/>\n-      <arg name=\"pointcloud_topic\" value=\"/sensing/lidar/top/pointcloud_raw\"/>\n-    </include>\n-  </group>\n+  <group>\n+   <include file=\"$(find-pkg-share extrinsic_ground_plane_calibrator)/launch/calibrator.launch.xml\">\n+     <arg name=\"ns\" value=\"$(var parent_frame)/YOUR_SENSOR_BASE_LINK\"/>\n+     <arg name=\"base_frame\" value=\"$(var base_frame)\"/>\n+     <arg name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n+     <arg name=\"child_frame\" value=\"YOUR_SENSOR_BASE_LINK\"/>\n+     <arg name=\"pointcloud_topic\" value=\"<YOUR_SENSOR_TOPIC_NAME>\"/>\n+   </include>\n+ </group>\n+  ...\n+  ...\n+  ...\n+  ...\n+  ...\n+\n
    i.e., launch calibrator.launch.xml for each tutorial_vehicle's lidar
      <!-- rs_helios_top_base_link: extrinsic_ground_plane_calibrator -->\n<group>\n<include file=\"$(find-pkg-share extrinsic_ground_plane_calibrator)/launch/calibrator.launch.xml\">\n<arg name=\"ns\" value=\"$(var parent_frame)/rs_helios_top_base_link\"/>\n<arg name=\"base_frame\" value=\"$(var base_frame)\"/>\n<arg name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n<arg name=\"child_frame\" value=\"rs_helios_top_base_link\"/>\n<arg name=\"pointcloud_topic\" value=\"/sensing/lidar/top/pointcloud_raw\"/>\n</include>\n</group>\n\n<!-- rs_bpearl_front_base_link: extrinsic_ground_plane_calibrator -->\n<group>\n<include file=\"$(find-pkg-share extrinsic_ground_plane_calibrator)/launch/calibrator.launch.xml\">\n<arg name=\"ns\" value=\"$(var parent_frame)/rs_bpearl_front_base_link\"/>\n<arg name=\"base_frame\" value=\"$(var base_frame)\"/>\n<arg name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n<arg name=\"child_frame\" value=\"rs_bpearl_front_base_link\"/>\n<arg name=\"pointcloud_topic\" value=\"/sensing/lidar/front/pointcloud_raw\"/>\n</include>\n</group>\n\n<node pkg=\"rviz2\" exec=\"rviz2\" name=\"rviz2\" output=\"screen\" args=\"-d $(var rviz_profile)\" if=\"$(var calibration_rviz)\"/>\n</launch>\n

    The final ground_plane_sensor_kit.launch.xml launch file for tutorial_vehicle should look like this:

    Sample ground_plane_sensor_kit.launch.xml for tutorial_vehicle
    <?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<launch>\n<arg name=\"vehicle_id\" default=\"tutorial_vehicle\"/>\n<let name=\"sensor_model\" value=\"tutorial_vehicle_sensor_kit\"/>\n<let name=\"base_frame\" value=\"base_link\"/>\n<let name=\"parent_frame\" value=\"sensor_kit_base_link\"/>\n<let name=\"rviz_profile\" value=\"$(find-pkg-share extrinsic_ground_plane_calibrator)/rviz/velodyne_top.rviz\"/>\n<arg name=\"calibration_rviz\" default=\"true\"/>\n\n<!-- extrinsic_calibration_client -->\n<arg name=\"src_yaml\" default=\"$(find-pkg-share individual_params)/config/$(var vehicle_id)/$(var sensor_model)/sensor_kit_calibration.yaml\"/>\n<arg name=\"dst_yaml\" default=\"$(env HOME)/sensor_kit_calibration.yaml\"/>\n\n<node pkg=\"extrinsic_calibration_client\" exec=\"extrinsic_calibration_client\" name=\"extrinsic_calibration_client\" output=\"screen\">\n<param name=\"src_path\" value=\"$(var src_yaml)\"/>\n<param name=\"dst_path\" value=\"$(var dst_yaml)\"/>\n</node>\n\n<!-- extrinsic_calibration_manager -->\n<node pkg=\"extrinsic_calibration_manager\" exec=\"extrinsic_calibration_manager\" name=\"extrinsic_calibration_manager\" output=\"screen\">\n<param name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n<param name=\"child_frames\" value=\"\n    [rs_helios_top_base_link,\n    rs_bpearl_front_base_link]\"/>\n</node>\n\n<!-- rs_helios_top_base_link: extrinsic_ground_plane_calibrator -->\n<group>\n<include file=\"$(find-pkg-share extrinsic_ground_plane_calibrator)/launch/calibrator.launch.xml\">\n<arg name=\"ns\" value=\"$(var parent_frame)/rs_helios_top_base_link\"/>\n<arg name=\"base_frame\" value=\"$(var base_frame)\"/>\n<arg name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n<arg name=\"child_frame\" value=\"rs_helios_top_base_link\"/>\n<arg name=\"pointcloud_topic\" value=\"/sensing/lidar/top/pointcloud_raw\"/>\n</include>\n</group>\n\n<!-- rs_bpearl_front_base_link: extrinsic_ground_plane_calibrator -->\n<group>\n<include file=\"$(find-pkg-share extrinsic_ground_plane_calibrator)/launch/calibrator.launch.xml\">\n<arg name=\"ns\" value=\"$(var parent_frame)/rs_bpearl_front_base_link\"/>\n<arg name=\"base_frame\" value=\"$(var base_frame)\"/>\n<arg name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n<arg name=\"child_frame\" value=\"rs_bpearl_front_base_link\"/>\n<arg name=\"pointcloud_topic\" value=\"/sensing/lidar/front/pointcloud_raw\"/>\n</include>\n</group>\n\n<node pkg=\"rviz2\" exec=\"rviz2\" name=\"rviz2\" output=\"screen\" args=\"-d $(var rviz_profile)\" if=\"$(var calibration_rviz)\"/>\n</launch>\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/ground-lidar-calibration/#ground-plane-lidar-calibration-process-with-extrinsic-ground-plane-calibrator","title":"Ground plane-lidar calibration process with extrinsic ground-plane calibrator","text":"

    After completing the ground_plane.launch.xml and ground_plane_sensor_kit.launch.xml launch files for our own sensor kit, we are ready to calibrate our lidars. First of all, we need to build the extrinsic_calibration_manager package:

    colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release --packages-select extrinsic_calibration_manager\n

    Now we are ready to launch and use the ground plane-lidar calibrator:

    ros2 launch extrinsic_calibration_manager calibration.launch.xml mode:=ground_plane sensor_model:=<OWN-SENSOR-KIT> vehicle_model:=<OWN-VEHICLE-MODEL> vehicle_id:=<VEHICLE-ID>\n

    For tutorial vehicle:

    ros2 launch extrinsic_calibration_manager calibration.launch.xml mode:=ground_plane sensor_model:=tutorial_vehicle_sensor_kit vehicle_model:=tutorial_vehicle vehicle_id:=tutorial_vehicle\n

    The rviz2 screen will appear with several displays; you need to update them with your own sensor topics, sensor frames and pointcloud_inlier topics, as shown in the video at the end of this document. You can also save the rviz2 config in the calibrator's rviz directory so that you can reuse it later by modifying ground_plane_sensor_kit.launch.xml.

    extrinsic_ground_plane_calibrator/\n   \u2514\u2500 rviz/\n+        \u2514\u2500 tutorial_vehicle_sensor_kit.rviz\n

    Then play the ROS 2 bag file, and the calibration process will start:

    ros2 bag play <rosbag_path> --clock -l -r 0.2 \\\n--remap /tf:=/null/tf /tf_static:=/null/tf_static # if tf is recorded\n

    The calibration process runs automatically; after it is complete, you will find sensor_kit_calibration.yaml in your $HOME directory.
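    To review what changed, you can compare the generated file against your current calibration. A sketch is shown below; the individual_params path is a placeholder and depends on where that package lives in your workspace:

    diff <PATH-TO-individual_params>/config/tutorial_vehicle/tutorial_vehicle_sensor_kit/sensor_kit_calibration.yaml \\\n  $HOME/sensor_kit_calibration.yaml\n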

    Before ground plane - lidar calibration / after ground plane - lidar calibration (comparison images)

    Here is a video demonstrating the ground plane - lidar calibration process on tutorial_vehicle:

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/intrinsic-camera-calibration/","title":"Intrinsic camera calibration","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/intrinsic-camera-calibration/#intrinsic-camera-calibration","title":"Intrinsic camera calibration","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/intrinsic-camera-calibration/#overview","title":"Overview","text":"

    Intrinsic camera calibration is the process of determining the internal parameters of a camera, which are used when projecting 3D information into images. These parameters include the focal length, optical center, and lens distortion coefficients. To perform camera intrinsic calibration, we will use TIER IV's Intrinsic Camera Calibrator tool. First of all, we need a calibration board, which can be a dot, chessboard, or apriltag grid board. In this tutorial, we will use this 7x7 chessboard consisting of 7 cm squares:

    Our 7x7 calibration chess board for this tutorial section.

    Here are some calibration board samples from Intrinsic Camera Calibrator page:

    • Chess boards (6x8 example)
    • Circle dot boards (6x8 example)
    • Apriltag grid board (3x4 example)

    If you want to use a bag file for the calibration process, the bag file must include the image_raw topic of your camera sensor; however, you can also perform the calibration in real time (recommended).

    ROS 2 Bag example for intrinsic camera calibration process
    Files:             rosbag2_2023_09_18-16_19_08_0.db3\nBag size:          12.0 GiB\nStorage id:        sqlite3\nDuration:          135.968s\nStart:             Sep 18 2023 16:19:08.966 (1695043148.966)\nEnd:               Sep 18 2023 16:21:24.934 (1695043284.934)\nMessages:          4122\nTopic information: Topic: /sensing/camera/camera0/image_raw | Type: sensor_msgs/msg/Image | Count: 2061 | Serialization Format: cdr\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/intrinsic-camera-calibration/#intrinsic-camera-calibration_1","title":"Intrinsic camera calibration","text":"

    Unlike the other calibration packages in our tutorials, this package does not require creating an initialization file, so we can start by launching the intrinsic calibrator package.

    cd <YOUR-OWN-AUTOWARE-DIRECTORY>\nsource install/setup.bash\n

    After that, we will launch the intrinsic calibrator:

    ros2 launch intrinsic_camera_calibrator calibrator.launch.xml\n

    Then, the initial configuration and camera intrinsic calibration panels will show up. We set the initial configuration for our calibration there.

    Initial configuration panel

    We set our image source (it can be \"ROS topic\", \"ROS bag\" or \"Image files\") in the source options section; we will calibrate our camera with the \"ROS topic\" source. After selecting an image source from this panel, we need to configure the \"Board options\" as well. The calibration board can be a chess board, dot board or apriltag. We also need to set the board parameters: click the \"Board parameters\" button and set the row, column, and cell size values.

    After setting the image source and board parameters, we are ready for the calibration process. Click the start button and the Topic configuration panel will appear; select the appropriate raw camera topic for the calibration process.

    Topic configuration panel

    Then you are ready to calibrate. Please collect data at different X-Y positions, sizes and skews. You can view statistics on the collected data by clicking \"View data collection statistics\". For more information, please refer to the Intrinsic Camera Calibrator page.

    Intrinsic calibrator interface

    After the data collection is completed, you can click the \"Calibrate\" button to perform the calibration process. Once the calibration is complete, you will see data visualizations for the calibration result statistics. You can inspect your calibration results by changing the image view type from \"Source unrectified\" to \"Source rectified\". If your calibration is successful (there should be no distortion in the rectified image), you can save your calibration results with the \"Save\" button. The output will be named <YOUR-CAMERA-NAME>_info.yaml, so you can use this file with your camera driver directly (see the sketch below).
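    As a rough sketch, such a camera info file follows the standard ROS camera calibration YAML layout shown below; the field names are the standard ones, all values are placeholders, and it is assumed (not verified here) that the tool's output uses exactly this layout. Your saved file will contain the calibrated values:

    image_width: 1920\nimage_height: 1080\ncamera_name: camera0\ncamera_matrix:\n  rows: 3\n  cols: 3\n  data: [1000.0, 0.0, 960.0, 0.0, 1000.0, 540.0, 0.0, 0.0, 1.0]\ndistortion_model: plumb_bob\ndistortion_coefficients:\n  rows: 1\n  cols: 5\n  data: [0.0, 0.0, 0.0, 0.0, 0.0]\nrectification_matrix:\n  rows: 3\n  cols: 3\n  data: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]\nprojection_matrix:\n  rows: 3\n  cols: 4\n  data: [1000.0, 0.0, 960.0, 0.0, 0.0, 1000.0, 540.0, 0.0, 0.0, 0.0, 1.0, 0.0]\n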

    Here is a video demonstrating the intrinsic camera calibration process on tutorial_vehicle:

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/lidar-camera-calibration/","title":"Lidar-Camera calibration","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/lidar-camera-calibration/#lidar-camera-calibration","title":"Lidar-Camera calibration","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/lidar-camera-calibration/#overview","title":"Overview","text":"

    Lidar-camera calibration is a crucial process in the fields of autonomous driving and robotics, where both lidar sensors and cameras are used for perception. The goal of calibration is to accurately align the data from these different sensors in order to create a comprehensive and coherent representation of the environment by projecting lidar points onto the camera image. In this tutorial, we will explain TIER IV's interactive camera-lidar calibrator. Also, if you have ArUco marker boards for calibration, another lidar-camera calibration method is included in TIER IV's CalibrationTools repository.

    Warning

    You need to apply intrinsic calibration before starting the lidar-camera extrinsic calibration process. Also, please obtain the initial calibration results from the Manual Calibration section; this is crucial for obtaining accurate results from this tool. We will utilize the initial calibration parameters that were calculated in the previous step of this tutorial. To apply these initial values in the calibration tools, please update your sensor calibration files within the individual_params package.

    Your bag file must include the calibration lidar topic and the camera topics. Camera topics can be compressed or raw, but remember that we will set the interactive calibrator's use_compressed launch argument according to the topic type.

    ROS 2 Bag example of our calibration process (only one camera is mounted). If you have multiple cameras, please add their camera_info and image topics as well.
    Files:             rosbag2_2023_09_12-13_57_03_0.db3\nBag size:          5.8 GiB\nStorage id:        sqlite3\nDuration:          51.419s\nStart:             Sep 12 2023 13:57:03.691 (1694516223.691)\nEnd:               Sep 12 2023 13:57:55.110 (1694516275.110)\nMessages:          2590\nTopic information: Topic: /sensing/lidar/top/pointcloud_raw | Type: sensor_msgs/msg/PointCloud2 | Count: 515 | Serialization Format: cdr\nTopic: /sensing/camera/camera0/image_raw | Type: sensor_msgs/msg/Image | Count: 780 | Serialization Format: cdr\nTopic: /sensing/camera/camera0/camera_info | Type: sensor_msgs/msg/CameraInfo | Count: 780 | Serialization Format: cdr\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/lidar-camera-calibration/#lidar-camera-calibration_1","title":"Lidar-Camera calibration","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/lidar-camera-calibration/#creating-launch-files","title":"Creating launch files","text":"

    We start by creating launch files for our vehicle, as in the \"Extrinsic Manual Calibration\" process:

    cd <YOUR-OWN-AUTOWARE-DIRECTORY>/src/autoware/calibration_tools/sensor\ncd extrinsic_calibration_manager/launch\ncd <YOUR-OWN-SENSOR-KIT-NAME> # i.e. for our guide, it will be cd tutorial_vehicle_sensor_kit, which was created in manual calibration\ntouch interactive.launch.xml interactive_sensor_kit.launch.xml\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/lidar-camera-calibration/#modifying-launch-files-according-to-your-sensor-kit","title":"Modifying launch files according to your sensor kit","text":"

    We will create interactive.launch.xml and interactive_sensor_kit.launch.xml by modifying the corresponding files from TIER IV's sample sensor kit aip_xx1, so copy the contents of these two files from aip_xx1 into the files you just created.

    Then we will continue by adding the vehicle_id and sensor model names to interactive.launch.xml. (This is optional; the values are not important, since these parameters will be overridden by launch arguments.)

      <arg name=\"vehicle_id\" default=\"default\"/>\n\n  <let name=\"sensor_model\" value=\"aip_x1\"/>\n  <?xml version=\"1.0\" encoding=\"UTF-8\"?>\n  <launch>\n-   <arg name=\"vehicle_id\" default=\"default\"/>\n+   <arg name=\"vehicle_id\" default=\"<YOUR_VEHICLE_ID>\"/>\n+\n-   <arg name=\"sensor_model\" default=\"aip_x1\"/>\n+   <let name=\"sensor_model\" value=\"<YOUR_SENSOR_KIT_NAME>\"/>\n

    If you want to use the concatenated pointcloud as the input cloud (the calibration process will then start the logging simulator, which constructs the lidar pipeline and produces the concatenated point cloud), you must set the use_concatenated_pointcloud value to true.

    -   <arg name=\"use_concatenated_pointcloud\" default=\"false\"/>\n+   <arg name=\"use_concatenated_pointcloud\" default=\"true\"/>\n

    The final version of the file (interactive.launch.xml) for tutorial_vehicle should be like this:

    Sample interactive.launch.xml file for tutorial vehicle
    <?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<launch>\n<arg name=\"vehicle_id\" default=\"tutorial_vehicle\"/>\n<let name=\"sensor_model\" value=\"tutorial_vehicle_sensor_kit\"/>\n<arg name=\"camera_name\"/>\n<arg name=\"rviz\" default=\"false\"/>\n<arg name=\"use_concatenated_pointcloud\" default=\"true\"/>\n\n<group>\n<push-ros-namespace namespace=\"sensor_kit\"/>\n<include file=\"$(find-pkg-share extrinsic_calibration_manager)/launch/$(var sensor_model)/interactive_sensor_kit.launch.xml\" if=\"$(var rviz)\">\n<arg name=\"vehicle_id\" value=\"$(var vehicle_id)\"/>\n<arg name=\"camera_name\" value=\"$(var camera_name)\"/>\n</include>\n</group>\n\n<!-- You can change the config file path -->\n<node pkg=\"rviz2\" exec=\"rviz2\" name=\"rviz2\" output=\"screen\" args=\"\n-d $(find-pkg-share extrinsic_calibration_manager)/config/x2/extrinsic_interactive_calibrator.rviz\" if=\"$(var rviz)\"/>\n</launch>\n

    After completing the interactive.launch.xml file, we are ready to implement interactive_sensor_kit.launch.xml for our own sensor model.

    Optionally (don't forget, these parameters will be overridden by launch arguments), you can modify sensor_model and vehicle_id in the same way as in interactive.launch.xml over this xml snippet. We will set the parent_frame for calibration to sensor_kit_base_link:

    The default camera input topic of the interactive calibrator is the compressed image. If you want to use the raw image instead of the compressed image, you need to update the image_topic variable to your camera sensor's topic.

        ...\n-   <let name=\"image_topic\" value=\"/sensing/camera/$(var camera_name)/image_raw\"/>\n+   <let name=\"image_topic\" value=\"/sensing/camera/$(var camera_name)/image_compressed\"/>\n   ...\n

    After updating your topic name, you need to add the use_compressed parameter (default value is true) to the interactive_calibrator node with a value of false.

       ...\n   <node pkg=\"extrinsic_interactive_calibrator\" exec=\"interactive_calibrator\" name=\"interactive_calibrator\" output=\"screen\">\n     <remap from=\"pointcloud\" to=\"$(var pointcloud_topic)\"/>\n     <remap from=\"image\" to=\"$(var image_compressed_topic)\"/>\n     <remap from=\"camera_info\" to=\"$(var camera_info_topic)\"/>\n     <remap from=\"calibration_points_input\" to=\"calibration_points\"/>\n+    <param name=\"use_compressed\" value=\"false\"/>\n  ...\n

    Then you can customize the pointcloud topic for each camera. For example, if you want to calibrate camera_1 with the left lidar, you should change the launch file like this:

        <let name=\"pointcloud_topic\" value=\"/sensing/lidar/top/pointcloud_raw\" if=\"$(eval &quot;'$(var camera_name)' == 'camera0' &quot;)\"/>\n-   <let name=\"pointcloud_topic\" value=\"/sensing/lidar/top/pointcloud_raw\" if=\"$(eval &quot;'$(var camera_name)' == 'camera1' &quot;)\"/>\n+   <let name=\"pointcloud_topic\" value=\"/sensing/lidar/left/pointcloud_raw\" if=\"$(eval &quot;'$(var camera_name)' == 'camera1' &quot;)\"/>\n   <let name=\"pointcloud_topic\" value=\"/sensing/lidar/top/pointcloud_raw\" if=\"$(eval &quot;'$(var camera_name)' == 'camera2' &quot;)\"/>\n    <let name=\"pointcloud_topic\" value=\"/sensing/lidar/top/pointcloud_raw\" if=\"$(eval &quot;'$(var camera_name)' == 'camera3' &quot;)\"/>\n    <let name=\"pointcloud_topic\" value=\"/sensing/lidar/top/pointcloud_raw\" if=\"$(eval &quot;'$(var camera_name)' == 'camera4' &quot;)\"/>\n    <let name=\"pointcloud_topic\" value=\"/sensing/lidar/top/pointcloud_raw\" if=\"$(eval &quot;'$(var camera_name)' == 'camera5' &quot;)\"/>\n    ...\n

    The interactive_sensor_kit.launch.xml launch file for tutorial_vehicle should look like this:

    i.e. interactive_sensor_kit.launch.xml for tutorial_vehicle
    <?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<launch>\n<arg name=\"vehicle_id\" default=\"tutorial_vehicle\"/>\n<let name=\"sensor_model\" value=\"tutorial_vehicle_sensor_kit\"/>\n<let name=\"parent_frame\" value=\"sensor_kit_base_link\"/>\n\n<!-- extrinsic_calibration_client -->\n<arg name=\"src_yaml\" default=\"$(find-pkg-share individual_params)/config/$(var vehicle_id)/$(var sensor_model)/sensor_kit_calibration.yaml\"/>\n<arg name=\"dst_yaml\" default=\"$(env HOME)/sensor_kit_calibration.yaml\"/>\n\n\n<arg name=\"camera_name\"/>\n\n<let name=\"image_topic\" value=\"/sensing/camera/$(var camera_name)/image_raw\"/>\n<let name=\"image_topic\" value=\"/sensing/camera/traffic_light/image_raw\" if=\"$(eval &quot;'$(var camera_name)' == 'traffic_light_left_camera' &quot;)\"/>\n\n<let name=\"use_compressed\" value=\"false\"/>\n\n<let name=\"image_compressed_topic\" value=\"/sensing/camera/$(var camera_name)/image_raw/compressed\"/>\n<let name=\"image_compressed_topic\" value=\"/sensing/camera/traffic_light/image_raw/compressed\" if=\"$(eval &quot;'$(var camera_name)' == 'traffic_light_left_camera' &quot;)\"/>\n\n<let name=\"camera_info_topic\" value=\"/sensing/camera/$(var camera_name)/camera_info\"/>\n<let name=\"camera_info_topic\" value=\"/sensing/camera/traffic_light/camera_info\" if=\"$(eval &quot;'$(var camera_name)' == 'traffic_light_left_camera' &quot;)\"/>\n\n<let name=\"pointcloud_topic\" value=\"/sensing/lidar/top/pointcloud_raw\" if=\"$(eval &quot;'$(var camera_name)' == 'camera0' &quot;)\"/>\n<let name=\"pointcloud_topic\" value=\"/sensing/lidar/top/pointcloud_raw\" if=\"$(eval &quot;'$(var camera_name)' == 'camera1' &quot;)\"/>\n<let name=\"pointcloud_topic\" value=\"/sensing/lidar/top/pointcloud_raw\" if=\"$(eval &quot;'$(var camera_name)' == 'camera2' &quot;)\"/>\n<let name=\"pointcloud_topic\" value=\"/sensing/lidar/top/pointcloud_raw\" if=\"$(eval &quot;'$(var camera_name)' == 'camera3' &quot;)\"/>\n<let name=\"pointcloud_topic\" value=\"/sensing/lidar/top/pointcloud_raw\" if=\"$(eval &quot;'$(var camera_name)' == 'camera4' &quot;)\"/>\n<let name=\"pointcloud_topic\" value=\"/sensing/lidar/top/pointcloud_raw\" if=\"$(eval &quot;'$(var camera_name)' == 'camera5' &quot;)\"/>\n<let name=\"pointcloud_topic\" value=\"/sensing/lidar/top/pointcloud_raw\" if=\"$(eval &quot;'$(var camera_name)' == 'traffic_light_left_camera' &quot;)\"/>\n\n<let name=\"calibrate_sensor\" value=\"false\"/>\n<let name=\"calibrate_sensor\" value=\"true\" if=\"$(eval &quot;'$(var camera_name)' == 'camera0' &quot;)\"/>\n<let name=\"calibrate_sensor\" value=\"true\" if=\"$(eval &quot;'$(var camera_name)' == 'camera1' &quot;)\"/>\n<let name=\"calibrate_sensor\" value=\"true\" if=\"$(eval &quot;'$(var camera_name)' == 'camera2' &quot;)\"/>\n<let name=\"calibrate_sensor\" value=\"true\" if=\"$(eval &quot;'$(var camera_name)' == 'camera3' &quot;)\"/>\n<let name=\"calibrate_sensor\" value=\"true\" if=\"$(eval &quot;'$(var camera_name)' == 'camera4' &quot;)\"/>\n<let name=\"calibrate_sensor\" value=\"true\" if=\"$(eval &quot;'$(var camera_name)' == 'camera5' &quot;)\"/>\n<let name=\"calibrate_sensor\" value=\"true\" if=\"$(eval &quot;'$(var camera_name)' == 'traffic_light_left_camera' &quot;)\"/>\n\n<let name=\"camera_frame\" value=\"\"/>\n<let name=\"camera_frame\" value=\"camera0/camera_link\" if=\"$(eval &quot;'$(var camera_name)' == 'camera0' &quot;)\"/>\n<let name=\"camera_frame\" value=\"camera1/camera_link\" if=\"$(eval &quot;'$(var camera_name)' == 'camera1' &quot;)\"/>\n<let name=\"camera_frame\" 
value=\"camera2/camera_link\" if=\"$(eval &quot;'$(var camera_name)' == 'camera2' &quot;)\"/>\n<let name=\"camera_frame\" value=\"camera3/camera_link\" if=\"$(eval &quot;'$(var camera_name)' == 'camera3' &quot;)\"/>\n<let name=\"camera_frame\" value=\"camera4/camera_link\" if=\"$(eval &quot;'$(var camera_name)' == 'camera4' &quot;)\"/>\n<let name=\"camera_frame\" value=\"camera5/camera_link\" if=\"$(eval &quot;'$(var camera_name)' == 'camera5' &quot;)\"/>\n<let name=\"camera_frame\" value=\"traffic_light_left_camera/camera_link\" if=\"$(eval &quot;'$(var camera_name)' == 'traffic_light_left_camera' &quot;)\"/>\n\n<node pkg=\"extrinsic_calibration_client\" exec=\"extrinsic_calibration_client\" name=\"extrinsic_calibration_client\" output=\"screen\" if=\"$(var calibrate_sensor)\">\n<param name=\"src_path\" value=\"$(var src_yaml)\"/>\n<param name=\"dst_path\" value=\"$(var dst_yaml)\"/>\n</node>\n\n<!-- extrinsic_calibration_manager -->\n<node pkg=\"extrinsic_calibration_manager\" exec=\"extrinsic_calibration_manager\" name=\"extrinsic_calibration_manager\" output=\"screen\" if=\"$(var calibrate_sensor)\">\n<param name=\"parent_frame\" value=\"$(var parent_frame)\"/>\n<param name=\"child_frames\" value=\"\n    [$(var camera_frame)]\"/>\n</node>\n\n<!-- interactive calibrator -->\n<group if=\"$(var calibrate_sensor)\">\n<push-ros-namespace namespace=\"$(var parent_frame)/$(var camera_frame)\"/>\n\n<node pkg=\"extrinsic_interactive_calibrator\" exec=\"interactive_calibrator\" name=\"interactive_calibrator\" output=\"screen\">\n<remap from=\"pointcloud\" to=\"$(var pointcloud_topic)\"/>\n<remap from=\"image\" to=\"$(var image_topic)\"/>\n<remap from=\"camera_info\" to=\"$(var camera_info_topic)\"/>\n<remap from=\"calibration_points_input\" to=\"calibration_points\"/>\n\n<param name=\"camera_parent_frame\" value=\"$(var parent_frame)\"/>\n<param name=\"camera_frame\" value=\"$(var camera_frame)\"/>\n<param name=\"use_compressed\" value=\"$(var use_compressed)\"/>\n</node>\n\n<include file=\"$(find-pkg-share intrinsic_camera_calibration)/launch/optimizer.launch.xml\"/>\n</group>\n</launch>\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/lidar-camera-calibration/#lidar-camera-calibration-process-with-interactive-camera-lidar-calibrator","title":"Lidar-camera calibration process with interactive camera-lidar calibrator","text":"

    After completing the interactive.launch.xml and interactive_sensor_kit.launch.xml launch files for our own sensor kit, we are ready to calibrate the camera and lidar. First of all, we need to build the extrinsic_calibration_manager package:

    colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release --packages-select extrinsic_calibration_manager\n

    Now we are ready to launch and use the interactive lidar-camera calibrator:

    ros2 launch extrinsic_calibration_manager calibration.launch.xml mode:=interactive sensor_model:=<OWN-SENSOR-KIT> vehicle_model:=<OWN-VEHICLE-MODEL> vehicle_id:=<VEHICLE-ID> camera_name:=<CALIBRATION-CAMERA>\n

    For tutorial vehicle:

    ros2 launch extrinsic_calibration_manager calibration.launch.xml mode:=interactive sensor_model:=tutorial_vehicle_sensor_kit vehicle_model:=tutorial_vehicle vehicle_id:=tutorial_vehicle camera_name:=camera0\n

    Then, we need to play our bag file.

    ros2 bag play <rosbag_path> --clock -l -r 0.2 \\\n--remap /tf:=/null/tf /tf_static:=/null/tf_static # if tf is recorded\n

    An interactive calibrator rqt window and rviz2 will be shown. You must add your lidar sensor point cloud to rviz2; then we can publish points for the calibrator.

    • After that, let's start by pressing the Publish Point button and selecting points on the point cloud that are also visible in the projected image. Then, click on the image point that corresponds to the projected lidar point. You will see the matched calibration points.

    • The red points indicate the selected lidar points and the green ones indicate the selected image points. You must match at least 6 points to perform the calibration. If you have a wrong match, you can remove it by simply clicking on it. After selecting points on the image and the lidar, you are ready to calibrate: once enough points are matched, the \"Calibrate extrinsic\" button will be enabled. Click this button and change the tf source from Initial /tf to Calibrator to see the calibration results.

    After the calibration is completed, save your calibration results via the "Save calibration" button. The output is saved in JSON format, so you need to transfer the values into sensor_kit_calibration.yaml in the individual_params and sensor_kit_description packages (see the sketch after the sample output below).

    Sample calibration output
    {\n\"header\": {\n\"stamp\": {\n\"sec\": 1694776487,\n\"nanosec\": 423288443\n},\n\"frame_id\": \"sensor_kit_base_link\"\n},\n\"child_frame_id\": \"camera0/camera_link\",\n\"transform\": {\n\"translation\": {\n\"x\": 0.054564283153017916,\n\"y\": 0.040947512210503106,\n\"z\": -0.071735410952332\n},\n\"rotation\": {\n\"x\": -0.49984112274024817,\n\"y\": 0.4905405357176159,\n\"z\": -0.5086269994990131,\n\"w\": 0.5008267267391722\n}\n},\n\"roll\": -1.5517347113946862,\n\"pitch\": -0.01711459479043047,\n\"yaw\": -1.5694590141484235\n}\n

    Here is a video demonstrating the lidar-camera calibration process on tutorial_vehicle:

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/lidar-imu-calibration/","title":"Lidar-Imu Calibration","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/lidar-imu-calibration/#lidar-imu-calibration","title":"Lidar-Imu Calibration","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/lidar-imu-calibration/#overview","title":"Overview","text":"

    Lidar-IMU calibration is important for the localization and mapping algorithms used in autonomous driving. In this tutorial, we will calibrate the lidar and IMU sensors using the OA-LICalib tool, which was developed by the APRIL Lab at Zhejiang University in China.

    OA-LICalib is a calibration method for LiDAR-inertial systems based on continuous-time batch optimization, where the intrinsics of both sensors, the time offset between them, and the spatial-temporal extrinsics are calibrated comprehensively without explicit hand-crafted targets.

    Warning

    This calibration tool was developed with ROS 1 and is not compatible with ROS 2, so we provide a Docker image that contains ROS 1 and all necessary packages. The calibration instructions below will ask you to install Docker on your system.

    ROS 2 Bag example of our calibration process for tutorial_vehicle
    Files:             rosbag2_2023_08_18-14_42_12_0.db3\nBag size:          12.4 GiB\nStorage id:        sqlite3\nDuration:          202.140s\nStart:             Aug 18 2023 14:42:12.586 (1692358932.586)\nEnd:               Aug 18 2023 14:45:34.727 (1692359134.727)\nMessages:          22237\nTopic information: Topic: /sensing/gnss/sbg/ros/imu/data | Type: sensor_msgs/msg/Imu | Count: 20215 | Serialization Format: cdr\n                   Topic: /sensing/lidar/top/pointcloud_raw | Type: sensor_msgs/msg/PointCloud2 | Count: 2022 | Serialization Format: cdr\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/lidar-imu-calibration/#data-collection-and-preparation","title":"Data Collection and Preparation","text":"

    Lidar-IMU calibration requires a ROS 1 bag file that contains sensor_msgs/PointCloud2 and sensor_msgs/Imu messages. To obtain good calibration results, you need to excite the sensors along all 6 axes (x, y, z, roll, pitch, yaw) while collecting data. Holding the sensors in your hand during data collection therefore gives better results, but you can also collect data on the vehicle. If you collect data on the vehicle, you should drive figure-of-eight and grid patterns.

    Lidar - IMU Calibration Data Collection

    Moreover, the calibration accuracy is affected by the data collection environment. You should collect your data in a place that contains many flat surfaces; indoor spaces are the best locations under these conditions. However, you can also achieve good results outdoors. When collecting data, make sure to drive figure-of-eight and grid patterns, capturing data from every angle.

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/lidar-imu-calibration/#converting-ros-2-bag-to-ros-1-bag","title":"Converting ROS 2 Bag to ROS 1 Bag","text":"

    If you collected your calibration data in ROS 2, you can convert it to a ROS 1 bag file with the following instructions:

    • Split your ROS 2 bag file if it contains non-standard message topics (keep only the sensor_msgs/PointCloud2 and sensor_msgs/Imu topics), and then convert the split ROS 2 bag file to a ROS 1 bag.

    Create a YAML file named out.yaml which contains your lidar and IMU topics:

    output_bags:\n- uri: splitted_bag\ntopics: [/your/imu/topic, /your/pointcloud/topic]\n
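    For example, for the tutorial_vehicle bag shown earlier (IMU on /sensing/gnss/sbg/ros/imu/data and lidar on /sensing/lidar/top/pointcloud_raw), out.yaml would look like this:

    ```yaml
    # out.yaml for the tutorial_vehicle bag: keep only the IMU and point cloud topics
    output_bags:
      - uri: splitted_bag
        topics: [/sensing/gnss/sbg/ros/imu/data, /sensing/lidar/top/pointcloud_raw]
    ```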

    Split your ROS 2 bag file:

    ros2 bag convert -i <YOUR-ROS2-BAG-FOLDER> -o out.yaml\n

    Convert your split ROS 2 bag file to ROS 1 bag file:

    # install bag converter tool (https://gitlab.com/ternaris/rosbags)\npip3 install rosbags\n\n# convert bag\nrosbags-convert <YOUR-SPLITTED-ROS2-BAG-FOLDER> --dst <OUTPUT-BAG-FILE>\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/lidar-imu-calibration/#lidar-imu-calibration_1","title":"Lidar-Imu Calibration","text":"

    As a first step, we need to install docker on our system. You can install docker using this link, or you can use the following commands to install docker using the Apt repository.

    Set up Docker's Apt repository:

    # Add Docker's official GPG key:\nsudo apt-get update\nsudo apt-get install ca-certificates curl gnupg\nsudo install -m 0755 -d /etc/apt/keyrings\ncurl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg\nsudo chmod a+r /etc/apt/keyrings/docker.gpg\n\n# Add the repository to Apt sources:\necho \\\n\"deb [arch=\"$(dpkg --print-architecture)\" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \\\n  \"$(. /etc/os-release && echo \"$VERSION_CODENAME\")\" stable\" | \\\nsudo tee /etc/apt/sources.list.d/docker.list > /dev/null\nsudo apt-get update\n

    Install the Docker packages:

    sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin\n

    To check if docker is installed correctly, you can run the following command:

    sudo docker run hello-world\n

    Before finishing the installation, we need to add our user to the docker group. This will allow us to run docker commands without sudo:

    sudo groupadd docker\nsudo usermod -aG docker $USER\n

    Warning

    After running the above command, you need to log out and log in again to be able to run Docker commands without sudo.

    After installing docker, we are ready to run the calibration tool. As a first step, you should clone the calibration repository:

    git clone https://github.com/leo-drive/OA-LICalib.git\n

    Then, you need to build the docker image:

    cd OA-LICalib/docker\nsudo docker build -t oalicalib .\n

    After building the docker image, you need to create a container from the image:

    Warning

    You need to update REPO_PATH with the path to the cloned repository on your system.

    export REPO_PATH=\"/path/to/OA-LICalib\"\ndocker run -it --env=\"DISPLAY\" --volume=\"$HOME/.Xauthority:/root/.Xauthority:rw\" --volume=\"/tmp/.X11-unix:/tmp/.X11-unix:rw\" --volume=\"$REPO_PATH:/root/catkin_oa_calib/src/OA-LICalib\" oalicalib bash\n

    Before running the calibration tool, you should change some parameters in the configuration file. You can find the configuration file in the OA-LICalib/config directory.

    Change the following parameters in the configuration file according to your topics and sensors:

    • These are the lidar model options: VLP_16_packet, VLP_16_points, VLP_32E_points, VLS_128_points, Ouster_16_points, Ouster_32_points, Ouster_64_points, Ouster_128_points, RS_16
    • start_time and end_time are the interval of the rosbag that you want to use
    • path_bag is the path to the rosbag file, but you need to give the path inside the container, not your local system. For example, if you have a rosbag file in the OA-LICalib/data directory, you need to give the path as /root/calib_ws/src/OA-LICalib/data/rosbag2_2023_08_18-14_42_12_0.bag
    topic_lidar: /sensing/lidar/top/pointcloud_raw\ntopic_imu: /sensing/gnss/sbg/ros/imu/data\n\nLidarModel: VLP_16_SIMU\n\nselected_segment:\n- {\n      start_time: 0,\n      end_time: 40,\n      path_bag: /root/calib_ws/src/OA-LICalib/data/rosbag2_2023_08_18-14_42_12_0.bag,\n}\n

    After creating the container and changing parameters, you can build and run the calibration tool:

    cd /root/catkin_oa_calib\ncatkin_make -DCATKIN_WHITELIST_PACKAGES=\"\"\n\nsource devel/setup.bash\nroslaunch oalicalib li_calib.launch\n

    While the calibration tool is running, you can track the calibration process by connecting to the container from another terminal. To connect to the container, run the following command:

    xhost +local:docker\ndocker exec -it <container_name> bash\n

    Warning

    You need to replace <container_name> with the name of your container. To see your container name, run the docker ps command. Its output should look like the following, and you can find your container name in the last column:

    CONTAINER ID   IMAGE      COMMAND                  CREATED         STATUS         PORTS     NAMES\nadb8b559c06e   calib:v1   \"/ros_entrypoint.sh \u2026\"   6 seconds ago   Up 5 seconds             your_awesome_container_name\n

    After connecting to the container, you can monitor the calibration process by running RViz. After starting RViz, add the following topics:

    • /ndt_odometry/global_map
    • /ndt_odometry/cur_cloud
    rviz\n

    If /ndt_odometry/global_map looks distorted, you should tune ndt parameters in the OA-LICalib/config/simu.yaml file.

    Lidar - IMU Calibration RViz Screen

    To achieve better results, you can tune the parameters in the config/simu.yaml file. The parameters are explained below:

    | Parameter | Description |
    | --- | --- |
    | ndtResolution | Resolution of NDT grid structure (VoxelGridCovariance); 0.5 for indoor case and 1.0 for outdoor case |
    | ndt_key_frame_downsample | Resolution parameter for voxel grid downsample function |
    | map_downsample_size | Resolution parameter for voxel grid downsample function |
    | knot_distance | Time interval |
    | plane_motion | Set true if you collect data from the vehicle |
    | gyro_weight | Gyroscope output's weight for trajectory estimation |
    | accel_weight | Accelerometer output's weight for trajectory estimation |
    | lidar_weight | Lidar output's weight for trajectory estimation |
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/lidar-lidar-calibration/","title":"Lidar-Lidar calibration","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/lidar-lidar-calibration/#lidar-lidar-calibration","title":"Lidar-Lidar calibration","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/lidar-lidar-calibration/#overview","title":"Overview","text":"

    In this tutorial, we explain lidar-lidar calibration using the mapping-based lidar-lidar calibration tool from TIER IV's CalibrationTools.

    Warning

    Please obtain the initial calibration results from the Manual Calibration section. This is crucial for obtaining accurate results from this tool. We will utilize the initial calibration parameters that were calculated in the previous step of this tutorial. To apply these initial values in the calibration tools, please update your sensor calibration files within the individual parameter package.

    We need a sample bag file for the lidar-lidar calibration process which includes the raw lidar topics. Also, we recommend using an outlier-filtered point cloud for mapping, because the vehicle body has already been cropped from this point cloud and therefore vehicle points are not included in the map. When you start the bag recording, you should not move the vehicle for the first 5 seconds for better mapping performance. The following shows an example of a bag file used for this calibration:

    ROS 2 Bag example of our calibration process for tutorial_vehicle
    Files:             rosbag2_2023_09_05-11_23_50_0.db3\nBag size:          3.8 GiB\nStorage id:        sqlite3\nDuration:          112.702s\nStart:             Sep  5 2023 11:23:51.105 (1693902231.105)\nEnd:               Sep  5 2023 11:25:43.808 (1693902343.808)\nMessages:          2256\nTopic information: Topic: /sensing/lidar/front/pointcloud_raw | Type: sensor_msgs/msg/PointCloud2 | Count: 1128 | Serialization Format: cdr\n                   Topic: /sensing/lidar/top/pointcloud | Type: sensor_msgs/msg/PointCloud2 | Count: 1128 | Serialization Format: cdr\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/lidar-lidar-calibration/#mapping-based-lidar-lidar-calibration","title":"Mapping-based lidar-lidar calibration","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/lidar-lidar-calibration/#creating-launch-files","title":"Creating launch files","text":"

    We start by creating the launch files for our vehicle, as in the Extrinsic Manual Calibration process:

    cd <YOUR-OWN-AUTOWARE-DIRECTORY>/src/autoware/calibration_tools/sensor\ncd extrinsic_calibration_manager/launch\ncd <YOUR-OWN-SENSOR-KIT-NAME> # i.e. for our guide, it will be cd tutorial_vehicle_sensor_kit which is created in manual calibration\ntouch mapping_based.launch.xml mapping_based_sensor_kit.launch.xml\n

    We will modify these mapping_based.launch.xml and mapping_based_sensor_kit.launch.xml files using TIER IV's sample sensor kit aip_x1 as a reference, so you should copy the contents of these two files from aip_x1 into your newly created files.

    Then we continue by adding the vehicle_id and sensor model name to mapping_based.launch.xml. (The default values are not important, as these parameters will be overridden by launch arguments.)

      <arg name=\"vehicle_id\" default=\"default\"/>\n\n  <let name=\"sensor_model\" value=\"aip_x1\"/>\n+ <?xml version=\"1.0\" encoding=\"UTF-8\"?>\n+ <launch>\n-   <arg name=\"vehicle_id\" default=\"default\"/>\n+   <arg name=\"vehicle_id\" default=\"<YOUR_VEHICLE_ID>\"/>\n+\n-   <arg name=\"sensor_model\" default=\"aip_x1\"/>\n+   <let name=\"sensor_model\" value=\"<YOUR_SENSOR_KIT_NAME>\"/>\n

    The final version of the file (mapping_based.launch.xml) for tutorial_vehicle should be like this:

    Sample mapping_based.launch.xml file for tutorial vehicle
    <?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<launch>\n<arg name=\"vehicle_id\" default=\"tutorial_vehicle\"/>\n<let name=\"sensor_model\" value=\"tutorial_vehicle_sensor_kit\"/>\n<arg name=\"rviz\" default=\"true\"/>\n\n<group>\n<push-ros-namespace namespace=\"sensor_kit\"/>\n<include file=\"$(find-pkg-share extrinsic_calibration_manager)/launch/$(var sensor_model)/mapping_based_sensor_kit.launch.xml\">\n<arg name=\"vehicle_id\" value=\"$(var vehicle_id)\"/>\n<arg name=\"rviz\" value=\"$(var rviz)\"/>\n</include>\n</group>\n</launch>\n

    After completing the mapping_based.launch.xml file, we are ready to implement mapping_based_sensor_kit.launch.xml for our own sensor model.

    Optionally, you can modify sensor_kit and vehicle_id here in the same way as in mapping_based.launch.xml over this XML snippet. (You can change the rviz_profile path after saving your RViz config, as shown in the video included at the end of the page.)

    We will add sensor kit frames for each lidar to be calibrated (except the mapping lidar). The tutorial vehicle has one lidar to pair with the main lidar sensor, so it should look like this:

    Note: The mapping lidar will be used for mapping purposes, but it will not be calibrated. We can consider this lidar as the main sensor for our hardware architecture. Therefore, other lidars will be calibrated with respect to the mapping lidar (main sensor).

    +  <let name=\"lidar_calibration_sensor_kit_frames\" value=\"[\n+  sensor_kit_base_link,\n+  sensor_kit_base_link,\n+  sensor_kit_base_link\n+  ...]\"/>\n

    If you previously saved an RViz config file for the lidar-lidar calibration process:

    - <let name=\"rviz_profile\" value=\"$(find-pkg-share extrinsic_mapping_based_calibrator)/rviz/x1.rviz\"/>\n+ <let name=\"rviz_profile\" value=\"$(find-pkg-share extrinsic_mapping_based_calibrator)/rviz/<YOUR-RVIZ-CONFIG>.rviz\"/>\n
    i.e., if you have one main lidar for mapping and three lidars for calibration
    +  <let name=\"lidar_calibration_sensor_kit_frames\" value=\"[\n+  sensor_kit_base_link,\n+  sensor_kit_base_link,\n+  sensor_kit_base_link]\"/>\n
    i.e., for tutorial_vehicle (one main lidar for mapping, one lidar for calibration)
    +  <let name=\"lidar_calibration_sensor_kit_frames\" value=\"[sensor_kit_base_link]\"/>\n

    We will add lidar_calibration_service_names, calibration_lidar_base_frames and calibration_lidar_frames for the calibrator:

    -   <let\n-       name=\"lidar_calibration_service_names\"\n-       value=\"[\n-       /sensor_kit/sensor_kit_base_link/livox_front_left_base_link,\n-       /sensor_kit/sensor_kit_base_link/livox_front_center_base_link,\n-       /sensor_kit/sensor_kit_base_link/livox_front_right_base_link]\"\n-     />\n-   <let name=\"calibration_lidar_base_frames\" value=\"[\n-       livox_front_left_base_link,\n-       livox_front_center_base_link,\n-       livox_front_right_base_link]\"/>\n-   <let name=\"calibration_lidar_frames\" value=\"[\n-       livox_front_left,\n-       livox_front_center,\n-       livox_front_right]\"/>\n+   <let\n+           name=\"lidar_calibration_service_names\"\n+           value=\"[/sensor_kit/sensor_kit_base_link/<YOUR_SENSOR_BASE_LINK>,\n+                   /sensor_kit/sensor_kit_base_link/<YOUR_SENSOR_BASE_LINK\n+                   ...]\"\n+   />\n+\n+   <let name=\"calibration_lidar_base_frames\" value=\"[YOUR_SENSOR_BASE_LINK,\n+                                                     YOUR_SENSOR_BASE_LINK\n+                                                     ...]\"/>\n+   <let name=\"calibration_lidar_frames\" value=\"[YOUR_SENSOR_LINK,\n+                                                YOUR_SENSOR_LINK\n+                                                ...]\"/>\n
    i.e., for tutorial_vehicle it should look like this snippet
    +   <let\n+           name=\"lidar_calibration_service_names\"\n+           value=\"[/sensor_kit/sensor_kit_base_link/rs_bpearl_front_base_link]\"\n+   />\n\n+   <let name=\"calibration_lidar_base_frames\" value=\"[rs_bpearl_front_base_link]\"/>\n+   <let name=\"calibration_lidar_frames\" value=\"[rs_bpearl_front]\"/>\n

    After that, we will add the sensor topics and sensor frames. To do so, we continue filling in mapping_based_sensor_kit.launch.xml as follows (we recommend using the /sensing/lidar/top/pointcloud topic as the mapping pointcloud, because the vehicle points are already cropped from this topic by the pointcloud preprocessing):

    -     <let name=\"mapping_lidar_frame\" value=\"velodyne_top\"/>\n-     <let name=\"mapping_pointcloud\" value=\"/sensing/lidar/top/pointcloud\"/>\n+     <let name=\"mapping_lidar_frame\" value=\"<MAPPING_LIDAR_SENSOR_LINK>\"/>\n+     <let name=\"mapping_pointcloud\" value=\"<MAPPING_LIDAR_POINTCLOUD_TOPIC_NAME>\"/>\n\n\n-     <let name=\"calibration_pointcloud_topics\" value=\"[\n-       /sensing/lidar/front_left/livox/lidar,\n-       /sensing/lidar/front_center/livox/lidar,\n-       /sensing/lidar/front_right/livox/lidar]\"/>\n+     <let name=\"calibration_pointcloud_topics\" value=\"[\n+       <YOUR_LIDAR_TOPIC_FOR_CALIBRATION>,\n+       <YOUR_LIDAR_TOPIC_FOR_CALIBRATION>,\n+       ...]\"/>\n
    For tutorial_vehicle it should look like this snippet.
      <let name=\"calibration_lidar_base_frames\" value=\"[rs_bpearl_front_base_link]\"/>\n<let name=\"calibration_lidar_frames\" value=\"[rs_bpearl_front]\"/>\n\n<let name=\"mapping_lidar_frame\" value=\"rs_helios_top\"/>\n<let name=\"mapping_pointcloud\" value=\"/sensing/lidar/top/pointcloud_raw\"/>\n<let name=\"detected_objects\" value=\"/perception/object_recognition/detection/objects\"/>\n\n<let name=\"calibration_pointcloud_topics\" value=\"[\n/sensing/lidar/right/pointcloud_raw]\"/>\n

    The mapping_based_sensor_kit.launch.xml launch file for tutorial_vehicle should look like this:

    i.e. mapping_based_sensor_kit.launch.xml for tutorial_vehicle
    <?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<launch>\n<arg name=\"vehicle_id\" default=\"tutorial_vehicle\"/>\n<let name=\"sensor_model\" value=\"tutorial_vehicle_sensor_kit\"/>\n\n<arg name=\"rviz\"/>\n<let name=\"rviz_profile\" value=\"$(find-pkg-share extrinsic_mapping_based_calibrator)/rviz/x1.rviz\"/>\n\n<arg name=\"src_yaml\" default=\"$(find-pkg-share individual_params)/config/$(var vehicle_id)/$(var sensor_model)/sensor_kit_calibration.yaml\"/>\n<arg name=\"dst_yaml\" default=\"$(env HOME)/sensor_kit_calibration.yaml\"/>\n\n<let name=\"camera_calibration_service_names\" value=\"['']\"/>\n\n<let name=\"camera_calibration_sensor_kit_frames\" value=\"['']\"/>\n<let name=\"calibration_camera_frames\" value=\"['']\"/>\n<let name=\"calibration_camera_optical_link_frames\" value=\"['']\"/>\n<let name=\"calibration_camera_info_topics\" value=\"['']\"/>\n\n<let name=\"calibration_image_topics\" value=\"['']\"/>\n\n<let name=\"lidar_calibration_sensor_kit_frames\" value=\"[sensor_kit_base_link]\"/>\n\n<let\nname=\"lidar_calibration_service_names\"\nvalue=\"[/sensor_kit/sensor_kit_base_link/rs_bpearl_front_base_link]\"\n/>\n\n<let name=\"calibration_lidar_base_frames\" value=\"[rs_bpearl_front_base_link]\"/>\n<let name=\"calibration_lidar_frames\" value=\"[rs_bpearl_front]\"/>\n\n<let name=\"mapping_lidar_frame\" value=\"rs_helios_top\"/>\n<let name=\"mapping_pointcloud\" value=\"/sensing/lidar/top/pointcloud_raw\"/>\n<let name=\"detected_objects\" value=\"/perception/object_recognition/detection/objects\"/>\n\n<let name=\"calibration_pointcloud_topics\" value=\"[\n/sensing/lidar/right/pointcloud_raw]\"/>\n\n<group>\n<!-- extrinsic_calibration_client -->\n<node pkg=\"extrinsic_calibration_client\" exec=\"extrinsic_calibration_client\" name=\"extrinsic_calibration_client\" output=\"screen\">\n<param name=\"src_path\" value=\"$(var src_yaml)\"/>\n<param name=\"dst_path\" value=\"$(var dst_yaml)\"/>\n</node>\n\n<!-- extrinsic_calibration_manager -->\n<node pkg=\"extrinsic_calibration_manager\" exec=\"extrinsic_calibration_manager\" name=\"extrinsic_calibration_manager\" output=\"screen\">\n<param name=\"parent_frame\" value=\"sensor_kit_base_link\"/>\n<param name=\"child_frames\" value=\"[rs_bpearl_front_base_link]\"/>\n</node>\n</group>\n\n<!-- mapping based calibrator -->\n<include file=\"$(find-pkg-share extrinsic_mapping_based_calibrator)/launch/calibrator.launch.xml\">\n<arg name=\"ns\" value=\"\"/>\n\n<arg name=\"camera_calibration_service_names\" value=\"$(var camera_calibration_service_names)\"/>\n<arg name=\"lidar_calibration_service_names\" value=\"$(var lidar_calibration_service_names)\"/>\n<arg name=\"camera_calibration_sensor_kit_frames\" value=\"$(var camera_calibration_sensor_kit_frames)\"/>\n<arg name=\"lidar_calibration_sensor_kit_frames\" value=\"$(var lidar_calibration_sensor_kit_frames)\"/>\n<arg name=\"calibration_camera_frames\" value=\"$(var calibration_camera_frames)\"/>\n<arg name=\"calibration_camera_optical_link_frames\" value=\"$(var calibration_camera_optical_link_frames)\"/>\n<arg name=\"calibration_lidar_base_frames\" value=\"$(var calibration_lidar_base_frames)\"/>\n<arg name=\"calibration_lidar_frames\" value=\"$(var calibration_lidar_frames)\"/>\n<arg name=\"mapping_lidar_frame\" value=\"$(var mapping_lidar_frame)\"/>\n\n<arg name=\"mapping_pointcloud\" value=\"$(var mapping_pointcloud)\"/>\n<arg name=\"detected_objects\" value=\"$(var detected_objects)\"/>\n\n<arg name=\"calibration_camera_info_topics\" value=\"$(var 
calibration_camera_info_topics)\"/>\n<arg name=\"calibration_image_topics\" value=\"$(var calibration_image_topics)\"/>\n<arg name=\"calibration_pointcloud_topics\" value=\"$(var calibration_pointcloud_topics)\"/>\n\n<arg name=\"mapping_max_range\" value=\"150.0\"/>\n<arg name=\"local_map_num_keyframes\" value=\"30\"/>\n<arg name=\"dense_pointcloud_num_keyframes\" value=\"20\"/>\n<arg name=\"ndt_resolution\" value=\"0.5\"/>\n<arg name=\"ndt_max_iterations\" value=\"100\"/>\n<arg name=\"ndt_epsilon\" value=\"0.005\"/>\n<arg name=\"lost_frame_max_acceleration\" value=\"15.0\"/>\n<arg name=\"lidar_calibration_max_frames\" value=\"10\"/>\n<arg name=\"calibration_eval_max_corr_distance\" value=\"0.2\"/>\n<arg name=\"solver_iterations\" value=\"100\"/>\n<arg name=\"calibration_skip_keyframes\" value=\"15\"/>\n</include>\n\n<node pkg=\"rviz2\" exec=\"rviz2\" name=\"rviz2\" output=\"screen\" args=\"-d $(var rviz_profile)\" if=\"$(var rviz)\"/>\n</launch>\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/calibrating-sensors/lidar-lidar-calibration/#lidar-lidar-calibration-process-with-interactive-mapping-based-calibrator","title":"Lidar-Lidar calibration process with interactive mapping-based calibrator","text":"

    After completing the mapping_based.launch.xml and mapping_based_sensor_kit.launch.xml launch files for your own sensor kit, we are ready to calibrate our lidars. First of all, we need to build the extrinsic_calibration_manager package:

    colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release --packages-select extrinsic_calibration_manager\n

    Now we are ready to launch and use the mapping-based lidar-lidar calibrator:

    ros2 launch extrinsic_calibration_manager calibration.launch.xml mode:=mapping_based sensor_model:=<OWN-SENSOR-KIT> vehicle_model:=<OWN-VEHICLE-MODEL> vehicle_id:=<VEHICLE-ID>\n

    For tutorial vehicle:

    ros2 launch extrinsic_calibration_manager calibration.launch.xml mode:=mapping_based sensor_model:=tutorial_vehicle_sensor_kit vehicle_model:=tutorial_vehicle vehicle_id:=tutorial_vehicle\n

    An RViz2 window with several display configurations will appear; you need to update it with your sensor topics, as shown in the video included at the end of this document. You can also save the RViz2 config in the rviz directory so that you can reuse it later by modifying mapping_based_sensor_kit.launch.xml.

    extrinsic_mapping_based_calibrator/\n   \u2514\u2500 rviz/\n+        \u2514\u2500 tutorial_vehicle_sensor_kit.rviz\n

    Then play the ROS 2 bag file:

    ros2 bag play <rosbag_path> --clock -r 0.2 \\\n--remap /tf:=/null/tf /tf_static:=/null/tf_static # if tf is recorded\n

    The calibration step consists of two phases: mapping and calibration. When the bag starts playing, mapping starts as well, as shown in the RViz2 screenshot below.

    The red arrow markers indicate poses during mapping, the green arrow markers are special poses sampled uniformly, and the white points indicate the constructed map.

    Mapping halts either upon reaching a predefined data threshold or can be prematurely concluded by invoking this service:

    ros2 service call /NAMESPACE/stop_mapping std_srvs/srv/Empty {}\n

    After the mapping phase of calibration is completed, the calibration process starts. When the calibration finishes, the RViz2 screen should look like the image below:

    The red points indicate the point cloud placed with the initial calibration results from the previous section. The green points indicate the aligned point cloud (the calibration result). The calibration results are saved automatically to your dst_yaml ($HOME/sensor_kit_calibration.yaml in this tutorial).

    Here is a video demonstrating the mapping-based lidar-lidar calibration process on tutorial_vehicle:

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-individual-params/","title":"Creating individual params","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-individual-params/#creating-individual-params","title":"Creating individual params","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-individual-params/#introduction","title":"Introduction","text":"

    The individual_params package is used to define customized sensor calibrations for different vehicles while using the same launch files with the same sensor model.

    Warning

    The \"individual_params\" package contains the calibration results for your sensor kit and overrides the default calibration results found in VEHICLE-ID_sensor_kit_description/config/ directory.

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-individual-params/#placing-your-individual_parameters-repository-inside-autoware","title":"Placing your individual_parameters repository inside Autoware","text":"

    Previously in this guide, we forked the autoware_individual_params repository to create a tutorial_vehicle_individual_params repository, which will be used as an example for this section of the guide. Your individual_parameters repository should be placed inside your Autoware folder following the same folder structure as the one shown below:

    sample folder structure for tutorial_vehicle_individual_params
      <YOUR-OWN-AUTOWARE-DIR>/\n  \u2514\u2500 src/\n  \u2514\u2500 param/\n  \u2514\u2500 tutorial_vehicle_individual_params/\n  \u2514\u2500 individual_params/\n  \u2514\u2500 config/\n  \u251c\u2500 default/\n+ \u2514\u2500 tutorial_vehicle/\n+     \u2514\u2500 tutorial_vehicle_sensor_kit_launch/\n+         \u251c\u2500 imu_corrector.param.yaml\n+         \u251c\u2500 sensor_kit_calibration.yaml\n+         \u2514\u2500 sensors_calibration.yaml\n

    After that, we need to build our individual_params package:

    colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release --packages-up-to individual_params\n

    Now you are ready to use Autoware with a vehicle_id as an argument. For example, if you have several similar vehicles with different sensor calibration requirements, your autoware_individual_params structure should look like this:

    individual_params/\n\u2514\u2500 config/\n     \u251c\u2500 default/\n     \u2502   \u2514\u2500 <YOUR_SENSOR_KIT>/                  # example1\n     \u2502        \u251c\u2500 imu_corrector.param.yaml\n     \u2502        \u251c\u2500 sensor_kit_calibration.yaml\n     \u2502        \u2514\u2500 sensors_calibration.yaml\n+    \u251c\u2500 VEHICLE_1/\n+    \u2502   \u2514\u2500 <YOUR_SENSOR_KIT>/                  # example2\n+    \u2502        \u251c\u2500 imu_corrector.param.yaml\n+    \u2502        \u251c\u2500 sensor_kit_calibration.yaml\n+    \u2502        \u2514\u2500 sensors_calibration.yaml\n+    \u2514\u2500 VEHICLE_2/\n+         \u2514\u2500 <YOUR_SENSOR_KIT>/                  # example3\n+              \u251c\u2500 imu_corrector.param.yaml\n+              \u251c\u2500 sensor_kit_calibration.yaml\n+              \u2514\u2500 sensors_calibration.yaml\n

    Then, you can use Autoware with the vehicle_id argument like this:

    Add a <vehicle_id> as an argument and switch parameters using options at startup.

    # example1 (do not set vehicle_id)\n$ ros2 launch autoware_launch autoware.launch.xml sensor_model:=<YOUR-VEHICLE-NAME>_sensor_kit vehicle_model:=<YOUR-VEHICLE-NAME>_vehicle\n# example2 (set vehicle_id as VEHICLE_1)\n$ ros2 launch autoware_launch autoware.launch.xml sensor_model:=<YOUR-VEHICLE-NAME>_sensor_kit vehicle_model:=<YOUR-VEHICLE-NAME>_vehicle vehicle_id:=VEHICLE_1\n# example3 (set vehicle_id as VEHICLE_2)\n$ ros2 launch autoware_launch autoware.launch.xml sensor_model:=<YOUR-VEHICLE-NAME>_sensor_kit vehicle_model:=<YOUR-VEHICLE-NAME>_vehicle vehicle_id:=VEHICLE_2\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-sensor-model/","title":"Creating a sensor model for Autoware","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-sensor-model/#creating-a-sensor-model-for-autoware","title":"Creating a sensor model for Autoware","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-sensor-model/#introduction","title":"Introduction","text":"

    This page introduces the following packages for the sensor model:

    1. common_sensor_launch
    2. <YOUR-VEHICLE-NAME>_sensor_kit_description
    3. <YOUR-VEHICLE-NAME>_sensor_kit_launch

    Previously, we forked our sensor model in the creating Autoware repositories step. For instance, we created tutorial_vehicle_sensor_kit_launch as an implementation example for that step. Please ensure that the <YOUR-VEHICLE-NAME>_sensor_kit_launch repository is included in Autoware, following the directory structure below:

    <YOUR-OWN-AUTOWARE-DIR>/\n  \u2514\u2500 src/\n       \u2514\u2500 sensor_kit/\n            \u2514\u2500 <YOUR-VEHICLE-NAME>_sensor_kit_launch/\n                 \u251c\u2500 common_sensor_launch/\n                 \u251c\u2500 <YOUR-VEHICLE-NAME>_sensor_kit_description/\n                 \u2514\u2500 <YOUR-VEHICLE-NAME>_sensor_kit_launch/\n

    If your forked Autoware meta-repository doesn't include <YOUR-VEHICLE-NAME>_sensor_kit_launch with the correct folder structure as shown above, please add your forked <YOUR-VEHICLE-NAME>_sensor_kit_launch repository to the autoware.repos file and run the vcs import src < autoware.repos command in your terminal to import the newly added repositories (an example entry is sketched below).
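    As a sketch, an autoware.repos entry for the forked sensor kit repository could look like the following; the URL and version are placeholders for your own fork and are not taken from this guide:

    ```yaml
    # Hypothetical autoware.repos entry; replace the URL and version with your own fork.
    repositories:
      sensor_kit/<YOUR-VEHICLE-NAME>_sensor_kit_launch:
        type: git
        url: https://github.com/<YOUR-GITHUB-ORG>/<YOUR-VEHICLE-NAME>_sensor_kit_launch.git
        version: main
    ```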

    Now, we are ready to modify the following sensor model packages for our vehicle. Firstly, we need to rename the description and launch packages:

    <YOUR-VEHICLE-NAME>_sensor_kit_launch/\n  \u251c\u2500 common_sensor_launch/\n- \u251c\u2500 sample_sensor_kit_description/\n+ \u251c\u2500 <YOUR-VEHICLE-NAME>_sensor_kit_description/\n- \u2514\u2500 sample_sensor_kit_launch/\n+ \u2514\u2500 <YOUR-VEHICLE-NAME>_sensor_kit_launch/\n

    After that, we will change our package names in the package.xml file and CMakeLists.txt file of the sample_sensor_kit_description and sample_sensor_kit_launch packages. So, open the package.xml file and CMakeLists.txt file with any text editor or IDE of your preference and perform the following changes:

    Change the <name> element in the package.xml file:

    <package format=\"3\">\n- <name>sample_sensor_kit_description</name>\n+ <name><YOUR-VEHICLE-NAME>_sensor_kit_description</name>\n <version>0.1.0</version>\n  <description>The sensor_kit_description package</description>\n  ...\n  ...\n

    Change the project() call in the CMakeLists.txt file:

      cmake_minimum_required(VERSION 3.5)\n- project(sample_sensor_kit_description)\n+ project(<YOUR-VEHICLE-NAME>_sensor_kit_description)\n\n find_package(ament_cmake_auto REQUIRED)\n...\n...\n

    Remember to apply the name change and the project() call change to BOTH the <YOUR-VEHICLE-NAME>_sensor_kit_description and <YOUR-VEHICLE-NAME>_sensor_kit_launch ROS 2 packages. Once finished, we can proceed to build these packages:

    cd <YOUR-AUTOWARE-DIR>\ncolcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release --packages-up-to <YOUR-VEHICLE-NAME>_sensor_kit_description <YOUR-VEHICLE-NAME>_sensor_kit_launch\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-sensor-model/#sensor-description","title":"Sensor description","text":"

    The main purpose of this package is to describe the sensor frame IDs, calibration parameters of all sensors, and their links with urdf files.

    The folder structure of sensor_kit_description package is:

    <YOUR-VEHICLE-NAME>_sensor_kit_description/\n   \u251c\u2500 config/\n   \u2502     \u251c\u2500 sensor_kit_calibration.yaml\n   \u2502     \u2514\u2500 sensors_calibration.yaml\n   \u2514\u2500 urdf/\n         \u251c\u2500 sensor_kit.xacro\n         \u2514\u2500 sensors.xacro\n

    Now, we will modify these files according to our sensor design.

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-sensor-model/#sensor_kit_calibrationyaml","title":"sensor_kit_calibration.yaml","text":"

    This file defines the mounting positions and orientations of the sensors, with sensor_kit_base_link as the parent frame. You can assume the sensor_kit_base_link frame is at the bottom of your main lidar sensor. Each entry is specified in Euler format as [x, y, z, roll, pitch, yaw]. Also, we will set these values to "0" until the calibration steps.

    We will define new frames in this file, and we will connect them in the .xacro files. For naming, we recommend appending "_base_link" to the sensor frame name; for example, if your lidar sensor frame is "velodyne_top", use "velodyne_top_base_link" in the calibration .yaml file.

    So, a sample file should look like this:

    sensor_kit_base_link:\nvelodyne_top_base_link:\nx: 0.000000\ny: 0.000000\nz: 0.000000\nroll: 0.000000\npitch: 0.000000\nyaw: 0.000000\ncamera0/camera_link:\nx: 0.000000\ny: 0.000000\nz: 0.000000\nroll: 0.000000\npitch: 0.000000\nyaw: 0.000000\n...\n...\n

    This file for tutorial_vehicle was created for one camera, two lidars and one GNSS/INS sensor.

    sensor_kit_calibration.yaml for tutorial_vehicle_sensor_kit_description
    sensor_kit_base_link:\ncamera0/camera_link: # Camera\nx: 0.0\ny: 0.0\nz: 0.0\nroll: 0.0\npitch: 0.0\nyaw: 0.0\nrs_helios_top_base_link: # Lidar\nx: 0.0\ny: 0.0\nz: 0.0\nroll: 0.0\npitch: 0.0\nyaw: 0.0\nrs_bpearl_front_base_link: # Lidar\nx: 0.0\ny: 0.0\nz: 0.0\nroll: 0.0\npitch: 0.0\nyaw: 0.0\nGNSS_INS/gnss_ins_link: # GNSS/INS\nx: 0.0\ny: 0.0\nz: 0.0\nroll: 0.0\npitch: 0.0\nyaw: 0.0\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-sensor-model/#sensors_calibrationyaml","title":"sensors_calibration.yaml","text":"

    This file defines the mounting position and orientation of sensor_kit_base_link (child frame) with base_link as the parent frame. In Autoware, base_link is located at the projection of the rear-axle center onto the ground surface. For more information, you can check the vehicle dimensions page. You can use CAD values for this, but we will fill the values with 0 for now.

    base_link:\nsensor_kit_base_link:\nx: 0.000000\ny: 0.000000\nz: 0.000000\nroll: 0.000000\npitch: 0.000000\nyaw: 0.000000\n

    Now, we are ready to implement the .xacro files. These files link our sensor frames and include the sensor URDF files.

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-sensor-model/#sensor_kitxacro","title":"sensor_kit.xacro","text":"

    We will add our sensors to this file and remove the unnecessary xacros. For example, to add a lidar sensor with the velodyne_top frame from the sensor driver, we add the following xacro to our sensor_kit.xacro file. Please add your own sensors to this file and remove any unnecessary sensor xacros.

        <!-- lidar -->\n<xacro:VLS-128 parent=\"sensor_kit_base_link\" name=\"velodyne_top\" topic=\"/points_raw\" hz=\"10\" samples=\"220\" gpu=\"$(arg gpu)\">\n<origin\nxyz=\"${calibration['sensor_kit_base_link']['velodyne_top_base_link']['x']}\n                 ${calibration['sensor_kit_base_link']['velodyne_top_base_link']['y']}\n                 ${calibration['sensor_kit_base_link']['velodyne_top_base_link']['z']}\"\nrpy=\"${calibration['sensor_kit_base_link']['velodyne_top_base_link']['roll']}\n                 ${calibration['sensor_kit_base_link']['velodyne_top_base_link']['pitch']}\n                 ${calibration['sensor_kit_base_link']['velodyne_top_base_link']['yaw']}\"\n/>\n</xacro:VLS-128>\n

    Here is the sample xacro file for tutorial_vehicle with one camera, two lidars and one GNSS/INS sensor.

    sensor_kit.xacro for tutorial_vehicle_sensor_kit_description
    <?xml version=\"1.0\"?>\n<robot xmlns:xacro=\"http://ros.org/wiki/xacro\">\n<xacro:macro name=\"sensor_kit_macro\" params=\"parent x y z roll pitch yaw\">\n<xacro:include filename=\"$(find velodyne_description)/urdf/VLP-16.urdf.xacro\"/>\n<xacro:include filename=\"$(find vls_description)/urdf/VLS-128.urdf.xacro\"/>\n<xacro:include filename=\"$(find camera_description)/urdf/monocular_camera.xacro\"/>\n<xacro:include filename=\"$(find imu_description)/urdf/imu.xacro\"/>\n\n<xacro:arg name=\"gpu\" default=\"false\"/>\n<xacro:arg name=\"config_dir\" default=\"$(find tutorial_vehicle_sensor_kit_description)/config\"/>\n\n<xacro:property name=\"sensor_kit_base_link\" default=\"sensor_kit_base_link\"/>\n\n<joint name=\"${sensor_kit_base_link}_joint\" type=\"fixed\">\n<origin rpy=\"${roll} ${pitch} ${yaw}\" xyz=\"${x} ${y} ${z}\"/>\n<parent link=\"${parent}\"/>\n<child link=\"${sensor_kit_base_link}\"/>\n</joint>\n<link name=\"${sensor_kit_base_link}\">\n<origin rpy=\"0 0 0\" xyz=\"0 0 0\"/>\n</link>\n\n<!-- sensor -->\n<xacro:property name=\"calibration\" value=\"${xacro.load_yaml('$(arg config_dir)/sensor_kit_calibration.yaml')}\"/>\n\n<!-- lidar -->\n<xacro:VLS-128 parent=\"sensor_kit_base_link\" name=\"rs_helios_top\" topic=\"/points_raw\" hz=\"10\" samples=\"220\" gpu=\"$(arg gpu)\">\n<origin\nxyz=\"${calibration['sensor_kit_base_link']['rs_helios_top_base_link']['x']}\n             ${calibration['sensor_kit_base_link']['rs_helios_top_base_link']['y']}\n             ${calibration['sensor_kit_base_link']['rs_helios_top_base_link']['z']}\"\nrpy=\"${calibration['sensor_kit_base_link']['rs_helios_top_base_link']['roll']}\n             ${calibration['sensor_kit_base_link']['rs_helios_top_base_link']['pitch']}\n             ${calibration['sensor_kit_base_link']['rs_helios_top_base_link']['yaw']}\"\n/>\n</xacro:VLS-128>\n<xacro:VLP-16 parent=\"sensor_kit_base_link\" name=\"rs_bpearl_front\" topic=\"/points_raw\" hz=\"10\" samples=\"220\" gpu=\"$(arg gpu)\">\n<origin\nxyz=\"${calibration['sensor_kit_base_link']['rs_bpearl_front_base_link']['x']}\n             ${calibration['sensor_kit_base_link']['rs_bpearl_front_base_link']['y']}\n             ${calibration['sensor_kit_base_link']['rs_bpearl_front_base_link']['z']}\"\nrpy=\"${calibration['sensor_kit_base_link']['rs_bpearl_front_base_link']['roll']}\n             ${calibration['sensor_kit_base_link']['rs_bpearl_front_base_link']['pitch']}\n             ${calibration['sensor_kit_base_link']['rs_bpearl_front_base_link']['yaw']}\"\n/>\n</xacro:VLP-16>\n\n<!-- camera -->\n<xacro:monocular_camera_macro\nname=\"camera0/camera\"\nparent=\"sensor_kit_base_link\"\nnamespace=\"\"\nx=\"${calibration['sensor_kit_base_link']['camera0/camera_link']['x']}\"\ny=\"${calibration['sensor_kit_base_link']['camera0/camera_link']['y']}\"\nz=\"${calibration['sensor_kit_base_link']['camera0/camera_link']['z']}\"\nroll=\"${calibration['sensor_kit_base_link']['camera0/camera_link']['roll']}\"\npitch=\"${calibration['sensor_kit_base_link']['camera0/camera_link']['pitch']}\"\nyaw=\"${calibration['sensor_kit_base_link']['camera0/camera_link']['yaw']}\"\nfps=\"30\"\nwidth=\"800\"\nheight=\"400\"\nfov=\"1.3\"\n/>\n\n<!-- gnss 
-->\n<xacro:imu_macro\nname=\"gnss\"\nparent=\"sensor_kit_base_link\"\nnamespace=\"\"\nx=\"${calibration['sensor_kit_base_link']['gnss_link']['x']}\"\ny=\"${calibration['sensor_kit_base_link']['gnss_link']['y']}\"\nz=\"${calibration['sensor_kit_base_link']['gnss_link']['z']}\"\nroll=\"${calibration['sensor_kit_base_link']['gnss_link']['roll']}\"\npitch=\"${calibration['sensor_kit_base_link']['gnss_link']['pitch']}\"\nyaw=\"${calibration['sensor_kit_base_link']['gnss_link']['yaw']}\"\nfps=\"100\"\n/>\n\n</xacro:macro>\n</robot>\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-sensor-model/#sensorsxacro","title":"sensors.xacro","text":"

    This file links our sensor kit's main frame (sensor_kit_base_link) to base_link. Also, if you have sensors that are calibrated directly with respect to base_link, you can add them here.

    Here is the sensors.xacro file for the sample_sensor_kit_description package (the velodyne_rear transformation is defined directly relative to base_link):

    <?xml version=\"1.0\"?>\n<robot name=\"vehicle\" xmlns:xacro=\"http://ros.org/wiki/xacro\">\n<xacro:arg name=\"config_dir\" default=\"$(find sample_sensor_kit_description)/config\"/>\n<xacro:property name=\"calibration\" value=\"${xacro.load_yaml('$(arg config_dir)/sensors_calibration.yaml')}\"/>\n\n<!-- sensor kit -->\n<xacro:include filename=\"sensor_kit.xacro\"/>\n<xacro:sensor_kit_macro\nparent=\"base_link\"\nx=\"${calibration['base_link']['sensor_kit_base_link']['x']}\"\ny=\"${calibration['base_link']['sensor_kit_base_link']['y']}\"\nz=\"${calibration['base_link']['sensor_kit_base_link']['z']}\"\nroll=\"${calibration['base_link']['sensor_kit_base_link']['roll']}\"\npitch=\"${calibration['base_link']['sensor_kit_base_link']['pitch']}\"\nyaw=\"${calibration['base_link']['sensor_kit_base_link']['yaw']}\"\n/>\n\n<!-- embedded sensors -->\n<xacro:include filename=\"$(find velodyne_description)/urdf/VLP-16.urdf.xacro\"/>\n<xacro:VLP-16 parent=\"base_link\" name=\"velodyne_rear\" topic=\"velodyne_rear/velodyne_points\" hz=\"10\" samples=\"220\" gpu=\"false\">\n<origin\nxyz=\"${calibration['base_link']['velodyne_rear_base_link']['x']}\n           ${calibration['base_link']['velodyne_rear_base_link']['y']}\n           ${calibration['base_link']['velodyne_rear_base_link']['z']}\"\nrpy=\"${calibration['base_link']['velodyne_rear_base_link']['roll']}\n           ${calibration['base_link']['velodyne_rear_base_link']['pitch']}\n           ${calibration['base_link']['velodyne_rear_base_link']['yaw']}\"\n/>\n</xacro:VLP-16>\n</robot>\n

    On our tutorial vehicle, there is no sensor transformation defined directly with respect to base_link, thus our sensors.xacro file includes only the base_link and sensor_kit_base_link links.

    sensors.xacro for tutorial_vehicle_sensor_kit_description
    <?xml version=\"1.0\"?>\n<robot name=\"vehicle\" xmlns:xacro=\"http://ros.org/wiki/xacro\">\n<xacro:arg name=\"config_dir\" default=\"$(find tutorial_vehicle_sensor_kit_description)/config\"/>\n<xacro:property name=\"calibration\" value=\"${xacro.load_yaml('$(arg config_dir)/sensors_calibration.yaml')}\"/>\n\n<!-- sensor kit -->\n<xacro:include filename=\"sensor_kit.xacro\"/>\n<xacro:sensor_kit_macro\nparent=\"base_link\"\nx=\"${calibration['base_link']['sensor_kit_base_link']['x']}\"\ny=\"${calibration['base_link']['sensor_kit_base_link']['y']}\"\nz=\"${calibration['base_link']['sensor_kit_base_link']['z']}\"\nroll=\"${calibration['base_link']['sensor_kit_base_link']['roll']}\"\npitch=\"${calibration['base_link']['sensor_kit_base_link']['pitch']}\"\nyaw=\"${calibration['base_link']['sensor_kit_base_link']['yaw']}\"\n/>\n</robot>\n

    After completing the sensor_kit_calibration.yaml, sensors_calibration.yaml, sensor_kit.xacro and sensors.xacro files, our sensor description package is finished. We will continue with modifying the <YOUR-VEHICLE-NAME>_sensor_kit_launch package.

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-sensor-model/#sensor-launch","title":"Sensor launch","text":"

    In this package (<YOUR-VEHICLE-NAME>_sensor_kit_launch), we will launch our sensors and their pipelines. We will also use the common_sensor_launch package for launching the lidar sensing pipeline. The image below demonstrates the sensor pipeline we will construct in this section.

    Sample Launch workflow for sensing design.

    The <YOUR-VEHICLE-NAME>_sensor_kit_launch package folder structure looks like this:

    <YOUR-VEHICLE-NAME>_sensor_kit_launch/\n      \u251c\u2500 config/\n      \u251c\u2500 data/\n      \u2514\u2500 launch/\n+           \u251c\u2500 camera.launch.xml\n+           \u251c\u2500 gnss.launch.xml\n+           \u251c\u2500 imu.launch.xml\n+           \u251c\u2500 lidar.launch.xml\n+           \u251c\u2500 pointcloud_preprocessor.launch.py\n+           \u2514\u2500 sensing.launch.xml\n

    We will modify the launch files located in the launch folder for launching and configuring our sensors. The main launch file is sensing.launch.xml, which launches the other sensing launch files. The current Autoware sensing launch file design for the sensor_kit_launch package is shown in the diagram below.

    Launch file flows over sensing.launch.xml launch file.

    The sensing.launch.xml file also launches the autoware_vehicle_velocity_converter package to convert the autoware_vehicle_msgs::msg::VelocityReport message into geometry_msgs::msg::TwistWithCovarianceStamped for the gyro_odometer node. So, be sure that your vehicle_interface publishes the /vehicle/status/velocity_status topic with the autoware_vehicle_msgs::msg::VelocityReport type, or update input_vehicle_velocity_topic in sensing.launch.xml.

        ...\n    <include file=\"$(find-pkg-share autoware_vehicle_velocity_converter)/launch/vehicle_velocity_converter.launch.xml\">\n-     <arg name=\"input_vehicle_velocity_topic\" value=\"/vehicle/status/velocity_status\"/>\n+     <arg name=\"input_vehicle_velocity_topic\" value=\"<YOUR-VELOCITY-STATUS-TOPIC>\"/>\n     <arg name=\"output_twist_with_covariance\" value=\"/sensing/vehicle_velocity_converter/twist_with_covariance\"/>\n    </include>\n    ...\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-sensor-model/#lidar-launching","title":"Lidar Launching","text":"

    Let's start by modifying the lidar.launch.xml file to launch our lidar sensor driver with Autoware. Please check the lidar sensors supported by the nebula driver in its GitHub repository.

    If you are using a Velodyne lidar sensor, you can use the sample_sensor_kit_launch template, but you need to update sensor_ip, data_port, sensor_frame and make other necessary changes (max_range, scan_phase, etc.).

        <group>\n-     <push-ros-namespace namespace=\"left\"/>\n+     <push-ros-namespace namespace=\"<YOUR-SENSOR-NAMESPACE>\"/>\n     <include file=\"$(find-pkg-share common_sensor_launch)/launch/velodyne_VLP16.launch.xml\">\n        <arg name=\"max_range\" value=\"5.0\"/>\n-       <arg name=\"sensor_frame\" value=\"velodyne_left\"/>\n+       <arg name=\"sensor_frame\" value=\"<YOUR-SENSOR-FRAME>\"/>\n-       <arg name=\"sensor_ip\" value=\"192.168.1.202\"/>\n+       <arg name=\"sensor_ip\" value=\"<YOUR-SENSOR-IP>\"/>\n       <arg name=\"host_ip\" value=\"$(var host_ip)\"/>\n-       <arg name=\"data_port\" value=\"2369\"/>\n+       <arg name=\"data_port\" value=<YOUR-DATA-PORT>/>\n       <arg name=\"scan_phase\" value=\"180.0\"/>\n        <arg name=\"cloud_min_angle\" value=\"300\"/>\n        <arg name=\"cloud_max_angle\" value=\"60\"/>\n        <arg name=\"launch_driver\" value=\"$(var launch_driver)\"/>\n        <arg name=\"vehicle_mirror_param_file\" value=\"$(var vehicle_mirror_param_file)\"/>\n        <arg name=\"use_pointcloud_container\" value=\"$(var use_pointcloud_container)\"/>\n        <arg name=\"container_name\" value=\"$(var pointcloud_container_name)\"/>\n      </include>\n    </group>\n

    Please add similar launch groups according to your sensor architecture. For example, we use Robosense lidars on our tutorial_vehicle, so the lidar group for a Robosense lidar (i.e., the Bpearl) should have this structure:

        <group>\n<push-ros-namespace namespace=\"<YOUR-SENSOR-NAMESPACE>\"/>\n<include file=\"$(find-pkg-share common_sensor_launch)/launch/robosense_Bpearl.launch.xml\">\n<arg name=\"max_range\" value=\"30.0\"/>\n<arg name=\"sensor_frame\" value=\"<YOUR-ROBOSENSE-SENSOR-FRAME>\"/>\n<arg name=\"sensor_ip\" value=\"<YOUR-ROBOSENSE-SENSOR-IP>\"/>\n<arg name=\"host_ip\" value=\"$(var host_ip)\"/>\n<arg name=\"data_port\" value=\"<YOUR-ROBOSENSE-SENSOR-DATA-PORT>\"/>\n<arg name=\"gnss_port\" value=\"<YOUR-ROBOSENSE-SENSOR-GNSS-PORT>\"/>\n<arg name=\"scan_phase\" value=\"0.0\"/>\n<arg name=\"launch_driver\" value=\"$(var launch_driver)\"/>\n<arg name=\"vehicle_mirror_param_file\" value=\"$(var vehicle_mirror_param_file)\"/>\n<arg name=\"container_name\" value=\"pointcloud_container\"/>\n</include>\n</group>\n

    If you are using a Hesai lidar (i.e. PandarQT64; please check the nebula driver page for supported sensors), you can add a group with this structure to lidar.launch.xml:

        <group>\n<push-ros-namespace namespace=\"<YOUR-SENSOR-NAMESPACE>\"/>\n<include file=\"$(find-pkg-share common_sensor_launch)/launch/hesai_PandarQT64.launch.xml\">\n<arg name=\"max_range\" value=\"100\"/>\n<arg name=\"sensor_frame\" value=\"<YOUR-HESAI-SENSOR-FRAME>\"/>\n<arg name=\"sensor_ip\" value=\"<YOUR-HESAI-SENSOR-IP>\"/>\n<arg name=\"host_ip\" value=\"$(var host_ip)\"/>\n<arg name=\"data_port\" value=\"<YOUR-HESAI-SENSOR-DATA-PORT>\"/>\n<arg name=\"scan_phase\" value=\"0.0\"/>\n<arg name=\"cloud_min_angle\" value=\"0\"/>\n<arg name=\"cloud_max_angle\" value=\"360\"/>\n<arg name=\"launch_driver\" value=\"$(var launch_driver)\"/>\n<arg name=\"vehicle_mirror_param_file\" value=\"$(var vehicle_mirror_param_file)\"/>\n<arg name=\"use_pointcloud_container\" value=\"$(var use_pointcloud_container)\"/>\n<arg name=\"container_name\" value=\"$(var pointcloud_container_name)\"/>\n</include>\n</group>\n

    You can create a .launch.xml file for each common sensor launch; please check hesai_PandarQT64.launch.xml as an example.

    The nebula_node_container.py file creates the lidar pipeline for Autoware; the pointcloud preprocessing pipeline is constructed for each lidar. Please check the autoware_pointcloud_preprocessor package for information about the filters as well.

    For example, if you want to change your outlier_filter method, you can modify the pipeline components like this:

        nodes.append(\n        ComposableNode(\n            package=\"autoware_pointcloud_preprocessor\",\n-           plugin=\"autoware::pointcloud_preprocessor::RingOutlierFilterComponent\",\n-           name=\"ring_outlier_filter\",\n+           plugin=\"autoware::pointcloud_preprocessor::DualReturnOutlierFilterComponent\",\n+           name=\"dual_return_outlier_filter\",\n           remappings=[\n                (\"input\", \"rectified/pointcloud_ex\"),\n                (\"output\", \"pointcloud\"),\n            ],\n            extra_arguments=[{\"use_intra_process_comms\": LaunchConfiguration(\"use_intra_process\")}],\n        )\n    )\n

    We will use the default pointcloud_preprocessor pipeline for our tutorial_vehicle, thus we will not modify nebula_node_container.py.

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-sensor-model/#camera-launching","title":"Camera Launching","text":"

    In this section, we will launch the camera driver and the 2D detection pipeline for Autoware on tutorial_vehicle. We do this because tutorial_vehicle uses a single computer. If you are using two or more computers for Autoware, you can launch the camera and 2D detection pipeline separately. For example, you can clone your camera driver into the src/sensor_component/external folder (please don't forget to add this driver to the autoware.repos file):

    <YOUR-AUTOWARE-DIR>\n   \u2514\u2500 src/\n         \u2514\u2500 sensor_component/\n            \u2514\u2500 external/\n                \u2514\u2500 YOUR-CAMERA-DRIVER\n

    After that, you can simply add your camera driver to camera.launch.xml:

        ...\n+   <include file=\"$(find-pkg-share <YOUR-SENSOR-DRIVER>)/launch/YOUR-CAMERA-LAUNCH-FILE\">\n   ...\n

    Then, you can launch the tensorrt_yolo node by adding yolo.launch.xml to your design like this (it is included in the tier4_perception_launch package in autoware.universe); the image_number argument defines the number of cameras:

      <include file=\"$(find-pkg-share tensorrt_yolo)/launch/yolo.launch.xml\">\n<arg name=\"image_raw0\" value=\"$(var image_raw0)\"/>\n<arg name=\"image_raw1\" value=\"$(var image_raw1)\"/>\n<arg name=\"image_raw2\" value=\"$(var image_raw2)\"/>\n<arg name=\"image_raw3\" value=\"$(var image_raw3)\"/>\n<arg name=\"image_raw4\" value=\"$(var image_raw4)\"/>\n<arg name=\"image_raw5\" value=\"$(var image_raw5)\"/>\n<arg name=\"image_raw6\" value=\"$(var image_raw6)\"/>\n<arg name=\"image_raw7\" value=\"$(var image_raw7)\"/>\n<arg name=\"image_number\" value=\"$(var image_number)\"/>\n</include>\n

    Since we are using the same computer for the 2D detection pipeline and all Autoware nodes, we will design our camera and 2D detection pipeline using composable nodes and a container structure, which reduces network interface usage. First of all, let's start with our camera sensor: the Lucid Vision TRIO54S. We will use this sensor with the lucid_vision_driver from the autowarefoundation organization. We can clone this driver into the src/sensor_component/external folder as well. After cloning and building the camera driver, we will create "camera_node_container.launch.py" to launch the camera and the tensorrt_yolo node in the same container.

    camera_node_container.launch.py launch file for tutorial_vehicle

    ```py\nimport launch\nfrom launch.actions import DeclareLaunchArgument\nfrom launch.actions import SetLaunchConfiguration\nfrom launch.conditions import IfCondition\nfrom launch.conditions import UnlessCondition\nfrom launch.substitutions.launch_configuration import LaunchConfiguration\nfrom launch_ros.actions import ComposableNodeContainer\nfrom launch_ros.descriptions import ComposableNode\nfrom launch_ros.substitutions import FindPackageShare\nfrom launch.actions import OpaqueFunction\nimport yaml\n

    def launch_setup(context, *args, **kwargs):

    output_topic= LaunchConfiguration(\"output_topic\").perform(context)

    image_name = LaunchConfiguration(\"input_image\").perform(context)\ncamera_container_name = LaunchConfiguration(\"camera_container_name\").perform(context)\ncamera_namespace = \"/lucid_vision/\" + image_name\n

    # tensorrt params\ngpu_id = int(LaunchConfiguration(\"gpu_id\").perform(context))\nmode = LaunchConfiguration(\"mode\").perform(context)\ncalib_image_directory = FindPackageShare(\"tensorrt_yolo\").perform(context) + \"/calib_image/\"\ntensorrt_config_path = FindPackageShare('tensorrt_yolo').perform(context) + \"/config/\" + LaunchConfiguration(\"yolo_type\").perform(context) + \".param.yaml\"\n

    with open(tensorrt_config_path, \"r\") as f:\n    tensorrt_yaml_param = yaml.safe_load(f)[\"/**\"][\"ros__parameters\"]\n

    camera_param_path = FindPackageShare(\"lucid_vision_driver\").perform(context) + \"/param/\" + image_name + \".param.yaml\"\nwith open(camera_param_path, \"r\") as f:\n    camera_yaml_param = yaml.safe_load(f)[\"/**\"][\"ros__parameters\"]\n

    container = ComposableNodeContainer(\n    name=camera_container_name,\n    namespace=\"/perception/object_detection\",\n    package=\"rclcpp_components\",\n    executable=LaunchConfiguration(\"container_executable\"),\n    output=\"screen\",\n    composable_node_descriptions=[\n        ComposableNode(\n            package=\"lucid_vision_driver\",\n            plugin=\"ArenaCameraNode\",\n            name=\"arena_camera_node\",\n            parameters=[{\n                \"camera_name\": camera_yaml_param['camera_name'],\n                \"frame_id\": camera_yaml_param['frame_id'],\n                \"pixel_format\": camera_yaml_param['pixel_format'],\n                \"serial_no\": camera_yaml_param['serial_no'],\n                \"camera_info_url\": camera_yaml_param['camera_info_url'],\n                \"fps\": camera_yaml_param['fps'],\n                \"horizontal_binning\": camera_yaml_param['horizontal_binning'],\n                \"vertical_binning\": camera_yaml_param['vertical_binning'],\n                \"use_default_device_settings\": camera_yaml_param['use_default_device_settings'],\n                \"exposure_auto\": camera_yaml_param['exposure_auto'],\n                \"exposure_target\": camera_yaml_param['exposure_target'],\n                \"gain_auto\": camera_yaml_param['gain_auto'],\n                \"gain_target\": camera_yaml_param['gain_target'],\n                \"gamma_target\": camera_yaml_param['gamma_target'],\n                \"enable_compressing\": camera_yaml_param['enable_compressing'],\n                \"enable_rectifying\": camera_yaml_param['enable_rectifying'],\n            }],\n            remappings=[],\n            extra_arguments=[\n                {\"use_intra_process_comms\": LaunchConfiguration(\"use_intra_process\")}\n            ],\n        ),\n

        ComposableNode(\n        namespace='/perception/object_recognition/detection',\n        package=\"tensorrt_yolo\",\n        plugin=\"object_recognition::TensorrtYoloNodelet\",\n        name=\"tensorrt_yolo\",\n        parameters=[\n            {\n                \"mode\": mode,\n                \"gpu_id\": gpu_id,\n                \"onnx_file\": FindPackageShare(\"tensorrt_yolo\").perform(context) +  \"/data/\" + LaunchConfiguration(\"yolo_type\").perform(context) + \".onnx\",\n                \"label_file\": FindPackageShare(\"tensorrt_yolo\").perform(context) + \"/data/\" + LaunchConfiguration(\"label_file\").perform(context),\n                \"engine_file\": FindPackageShare(\"tensorrt_yolo\").perform(context) + \"/data/\"+ LaunchConfiguration(\"yolo_type\").perform(context) + \".engine\",\n                \"calib_image_directory\": calib_image_directory,\n                \"calib_cache_file\": FindPackageShare(\"tensorrt_yolo\").perform(context) + \"/data/\" + LaunchConfiguration(\"yolo_type\").perform(context) + \".cache\",\n                \"num_anchors\": tensorrt_yaml_param['num_anchors'],\n                \"anchors\": tensorrt_yaml_param['anchors'],\n                \"scale_x_y\": tensorrt_yaml_param['scale_x_y'],\n                \"score_threshold\": tensorrt_yaml_param['score_threshold'],\n                \"iou_thresh\": tensorrt_yaml_param['iou_thresh'],\n                \"detections_per_im\": tensorrt_yaml_param['detections_per_im'],\n                \"use_darknet_layer\": tensorrt_yaml_param['use_darknet_layer'],\n                \"ignore_thresh\": tensorrt_yaml_param['ignore_thresh'],\n            }\n        ],\n        remappings=[\n            (\"in/image\", camera_namespace + \"/image_rect\"),\n            (\"out/objects\", output_topic),\n            (\"out/image\", output_topic + \"/debug/image\"),\n        ],\n        extra_arguments=[\n            {\"use_intra_process_comms\": LaunchConfiguration(\"use_intra_process\")}\n        ],\n    ),\n],\n

    )\n\nreturn [container]\n

    def generate_launch_description():\nlaunch_arguments = []\n

    def add_launch_arg(name: str, default_value=None, description=None):\n    # a default_value of None is equivalent to not passing that kwarg at all\n    launch_arguments.append(\n        DeclareLaunchArgument(name, default_value=default_value, description=description)\n    )\nadd_launch_arg(\"mode\",\"\")\nadd_launch_arg(\"input_image\",\"\", description=\"input camera topic\")\nadd_launch_arg(\"camera_container_name\",\"\")\nadd_launch_arg(\"yolo_type\",\"\", description=\"yolo model type\")\nadd_launch_arg(\"label_file\",\"\" ,description=\"tensorrt node label file\")\nadd_launch_arg(\"gpu_id\",\"\", description=\"gpu setting\")\nadd_launch_arg(\"use_intra_process\", \"\", \"use intra process\")\nadd_launch_arg(\"use_multithread\", \"\", \"use multithread\")\n\nset_container_executable = SetLaunchConfiguration(\n    \"container_executable\",\n    \"component_container\",\n    condition=UnlessCondition(LaunchConfiguration(\"use_multithread\")),\n)\n\nset_container_mt_executable = SetLaunchConfiguration(\n    \"container_executable\",\n    \"component_container_mt\",\n    condition=IfCondition(LaunchConfiguration(\"use_multithread\")),\n)\n\nreturn launch.LaunchDescription(\n    launch_arguments\n    + [set_container_executable, set_container_mt_executable]\n    + [OpaqueFunction(function=launch_setup)]\n)\n\n```\n

    The important points for creating camera_node_container.launch.py, if you decide to use a container for the 2D detection pipeline, are:

    • Please be careful with the design: if you are using multiple cameras, it must be adaptable to that setup.
    • The tensorrt_yolo node expects a rectified image as input, so if your sensor driver doesn't support image rectification, you can use the image_proc package.
      • You can add something like this to your pipeline to obtain a rectified image:
            ...\n        ComposableNode(\n        namespace=camera_ns,\n        package='image_proc',\n        plugin='image_proc::RectifyNode',\n        name='rectify_camera_image_node',\n        # Remap subscribers and publishers\n        remappings=[\n        ('image', camera_ns+\"/image\"),\n        ('camera_info', input_camera_info),\n        ('image_rect', 'image_rect')\n        ],\n        extra_arguments=[\n        {\"use_intra_process_comms\": LaunchConfiguration(\"use_intra_process\")}\n        ],\n        ),\n        ...\n
    • Since lucid_vision_driver supports image rectification, there is no need to add image_proc for tutorial_vehicle.
    • Please be careful with the namespace. For example, we will use /perception/object_detection as the tensorrt_yolo node namespace; this will be explained in the Autoware usage section. For more information, please check the image_projection_based_fusion package.

    After preparing camera_node_container.launch.py in our forked common_sensor_launch package, we need to build the package:

    colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release --packages-up-to common_sensor_launch\n

    Next, we will add camera_node_container.launch.py to camera.launch.xml; we must define the necessary tensorrt_yolo parameters like this:

    +  <!--    common parameters -->\n+  <arg name=\"image_0\" default=\"camera_0\" description=\"image raw topic name\"/>\n+  <arg name=\"image_1\" default=\"camera_1\" description=\"image raw topic name\"/>\n  ...\n\n+  <!--    tensorrt params -->\n+  <arg name=\"mode\" default=\"FP32\"/>\n+  <arg name=\"yolo_type\" default=\"yolov3\" description=\"choose image raw number(0-7)\"/>\n+  <arg name=\"label_file\" default=\"coco.names\" description=\"yolo label file\"/>\n+  <arg name=\"gpu_id\" default=\"0\" description=\"choose your gpu id for inference\"/>\n+  <arg name=\"use_intra_process\" default=\"true\"/>\n+  <arg name=\"use_multithread\" default=\"true\"/>\n

    Then, launch the camera nodes with these arguments. If you have two or more cameras, you can include them like this:

    +  <group>\n+    <push-ros-namespace namespace=\"camera\"/>\n+    <include file=\"$(find-pkg-share common_sensor_launch)/launch/camera_node_container.launch.py\">\n+      <arg name=\"mode\" value=\"$(var mode)\"/>\n+      <arg name=\"input_image\" value=\"$(var image_0)\"/>\n+      <arg name=\"camera_container_name\" value=\"front_camera_container\"/>\n+      <arg name=\"yolo_type\" value=\"$(var yolo_type)\"/>\n+      <arg name=\"label_file\" value=\"$(var label_file)\"/>\n+      <arg name=\"gpu_id\" value=\"$(var gpu_id)\"/>\n+      <arg name=\"use_intra_process\" value=\"$(var use_intra_process)\"/>\n+      <arg name=\"use_multithread\" value=\"$(var use_multithread)\"/>\n+      <arg name=\"output_topic\" value=\"camera0/rois0\"/>\n+    </include>\n+    <include file=\"$(find-pkg-share common_sensor_launch)/launch/camera_node_container.launch.py\">\n+      <arg name=\"mode\" value=\"$(var mode)\"/>\n+      <arg name=\"input_image\" value=\"$(var image_1)\"/>\n+      <arg name=\"camera_container_name\" value=\"front_camera_container\"/>\n+      <arg name=\"yolo_type\" value=\"$(var yolo_type)\"/>\n+      <arg name=\"label_file\" value=\"$(var label_file)\"/>\n+      <arg name=\"gpu_id\" value=\"$(var gpu_id)\"/>\n+      <arg name=\"use_intra_process\" value=\"$(var use_intra_process)\"/>\n+      <arg name=\"use_multithread\" value=\"$(var use_multithread)\"/>\n+      <arg name=\"output_topic\" value=\"camera1/rois1\"/>\n+    </include>\n+    ...\n+  </group>\n

    Since there is one camera for tutorial_vehicle, the camera.launch.xml should be like this:

    camera.launch.xml for tutorial_vehicle
    <launch>\n<arg name=\"launch_driver\" default=\"true\"/>\n<!--    common parameters -->\n<arg name=\"image_0\" default=\"camera_0\" description=\"image raw topic name\"/>\n\n<!--    tensorrt params -->\n<arg name=\"mode\" default=\"FP32\"/>\n<arg name=\"yolo_type\" default=\"yolov3\" description=\"choose image raw number(0-7)\"/>\n<arg name=\"label_file\" default=\"coco.names\" description=\"choose image raw number(0-7)\"/>\n<arg name=\"gpu_id\" default=\"0\" description=\"choose image raw number(0-7)\"/>\n<arg name=\"use_intra_process\" default=\"true\"/>\n<arg name=\"use_multithread\" default=\"true\"/>\n\n<group>\n<push-ros-namespace namespace=\"camera\"/>\n<include file=\"$(find-pkg-share common_sensor_launch)/launch/camera_node_container.launch.py\">\n<arg name=\"mode\" value=\"$(var mode)\"/>\n<arg name=\"input_image\" value=\"$(var image_0)\"/>\n<arg name=\"camera_container_name\" value=\"front_camera_container\"/>\n<arg name=\"yolo_type\" value=\"$(var yolo_type)\"/>\n<arg name=\"label_file\" value=\"$(var label_file)\"/>\n<arg name=\"gpu_id\" value=\"$(var gpu_id)\"/>\n<arg name=\"use_intra_process\" value=\"$(var use_intra_process)\"/>\n<arg name=\"use_multithread\" value=\"$(var use_multithread)\"/>\n<arg name=\"output_topic\" value=\"camera0/rois0\"/>\n</include>\n</group>\n</launch>\n

    You can check the 2D detection pipeline by launching camera.launch.xml, but we need to build the driver and the tensorrt_yolo package first. We will add our sensor driver to the package.xml dependencies of sensor_kit_launch.

    + <exec_depend><YOUR-CAMERA-DRIVER-PACKAGE></exec_depend>\n(optionally, if you will launch tensorrt_yolo at here)\n+ <exec_depend>tensorrt_yolo</exec_depend>\n

    Build necessary packages with:

    cd <YOUR-AUTOWARE-DIR>\ncolcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release --packages-up-to common_sensor_launch <YOUR-VEHICLE-NAME>_sensor_kit_launch\n

    Then, you can test your camera pipeline:

    ros2 launch <YOUR-SENSOR-KIT-LAUNCH> camera.launch.xml\n# example for tutorial_vehicle: ros2 launch tutorial_vehicle_sensor_kit_launch camera.launch.xml\n

    Then the rois topics will appear, and you can check the debug image with rviz2 or rqt (see the example below).
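    For example, a couple of commands like the following can be used; the exact topic names depend on the namespaces and output_topic values you configured:

    # list the ROI topics produced by the 2D detection pipeline\nros2 topic list | grep rois\n# open rqt_image_view and select the .../debug/image topic of your camera\nros2 run rqt_image_view rqt_image_view\n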

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-sensor-model/#gnssins-launching","title":"GNSS/INS Launching","text":"

    We will set up the GNSS/INS sensor launches in gnss.launch.xml. The default GNSS sensor options at sample_sensor_kit_launch for u-blox and septentrio are included in gnss.launch.xml, so if you use another sensor as the GNSS/INS receiver, you need to add it here. Moreover, the gnss_poser package is launched here; we will use this package as the pose source of our vehicle at localization initialization, but remember that your sensor driver must provide the Autoware GNSS orientation message for this node. If you are ready with your GNSS/INS driver, you must set the navsatfix_topic_name and orientation_topic_name variables in this launch file as gnss_poser arguments. For example, the necessary modifications should look like this:

      ...\n- <arg name=\"gnss_receiver\" default=\"ublox\" description=\"ublox(default) or septentrio\"/>\n+ <arg name=\"gnss_receiver\" default=\"<YOUR-GNSS-SENSOR>\" description=\"ublox(default), septentrio or <YOUR-GNSS-SENSOR>\"/>\n\n <group>\n    <push-ros-namespace namespace=\"gnss\"/>\n\n    <!-- Switch topic name -->\n    <let name=\"navsatfix_topic_name\" value=\"ublox/nav_sat_fix\" if=\"$(eval &quot;'$(var gnss_receiver)'=='ublox'&quot;)\"/>\n    <let name=\"navsatfix_topic_name\" value=\"septentrio/nav_sat_fix\" if=\"$(eval &quot;'$(var gnss_receiver)'=='septentrio'&quot;)\"/>\n+   <let name=\"navsatfix_topic_name\" value=\"<YOUR-SENSOR>/nav_sat_fix\" if=\"$(eval &quot;'$(var gnss_receiver)'=='<YOUR-GNSS-SENSOR>'&quot;)\"/>\n   <let name=\"orientation_topic_name\" value=\"/autoware_orientation\"/>\n\n    ...\n\n+   <!-- YOUR GNSS Driver -->\n+   <group if=\"$(eval &quot;'$(var launch_driver)' and '$(var gnss_receiver)'=='<YOUR-GNSS-SENSOR>'&quot;)\">\n+     <include file=\"$(find-pkg-share <YOUR-GNSS-SENSOR-DRIVER-PKG>)/launch/<YOUR-GNSS-SENSOR>.launch.xml\"/>\n+   </group>\n   ...\n-   <arg name=\"gnss_frame\" value=\"gnss_link\"/>\n+   <arg name=\"gnss_frame\" value=\"<YOUR-GNSS-SENSOR-FRAME>\"/>\n   ...\n

    Also, you can remove dependencies and unused sensor launch files from gnss.launch.xml. For example, we will use the Clap B7 sensor as the GNSS/INS and IMU sensor, and we will use ntrip_client_ros for RTK. We will also add these packages to the autoware.repos file.

    + sensor_component/external/clap_b7_driver:\n+   type: git\n+   url: https://github.com/Robeff-Technology/clap_b7_driver.git\n+   version: release/autoware\n+ sensor_component/external/ntrip_client_ros :\n+   type: git\n+   url: https://github.com/Robeff-Technology/ntrip_client_ros.git\n+   version: release/humble\n

    So, our gnss.launch.xml for tutorial_vehicle should look like the file below (the Clap B7 also includes an IMU, so we add imu_corrector in this file):

    gnss.launch.xml for tutorial_vehicle
    <launch>\n<arg name=\"launch_driver\" default=\"true\"/>\n\n<group>\n<push-ros-namespace namespace=\"gnss\"/>\n\n<!-- Switch topic name -->\n<let name=\"navsatfix_topic_name\" value=\"/clap/ros/gps_nav_sat_fix\"/>\n<let name=\"orientation_topic_name\" value=\"/clap/autoware_orientation\"/>\n\n<!-- CLAP GNSS Driver -->\n<group if=\"$(var launch_driver)\">\n<node pkg=\"clap_b7_driver\" exec=\"clap_b7_driver_node\" name=\"clap_b7_driver\" output=\"screen\">\n<param from=\"$(find-pkg-share clap_b7_driver)/config/clap_b7_driver.param.yaml\"/>\n</node>\n<!-- ntrip Client -->\n<include file=\"$(find-pkg-share ntrip_client_ros)/launch/ntrip_client_ros.launch.py\"/>\n</group>\n\n<!-- NavSatFix to MGRS Pose -->\n<include file=\"$(find-pkg-share autoware_gnss_poser)/launch/gnss_poser.launch.xml\">\n<arg name=\"input_topic_fix\" value=\"$(var navsatfix_topic_name)\"/>\n<arg name=\"input_topic_orientation\" value=\"$(var orientation_topic_name)\"/>\n\n<arg name=\"output_topic_gnss_pose\" value=\"pose\"/>\n<arg name=\"output_topic_gnss_pose_cov\" value=\"pose_with_covariance\"/>\n<arg name=\"output_topic_gnss_fixed\" value=\"fixed\"/>\n\n<arg name=\"use_gnss_ins_orientation\" value=\"true\"/>\n<!-- Please enter your gnss frame here -->\n<arg name=\"gnss_frame\" value=\"GNSS_INS/gnss_ins_link\"/>\n</include>\n</group>\n\n<!-- IMU corrector -->\n<group>\n<push-ros-namespace namespace=\"imu\"/>\n<include file=\"$(find-pkg-share autoware_imu_corrector)/launch/imu_corrector.launch.xml\">\n<arg name=\"input_topic\" value=\"/sensing/gnss/clap/ros/imu\"/>\n<arg name=\"output_topic\" value=\"imu_data\"/>\n<arg name=\"param_file\" value=\"$(find-pkg-share individual_params)/config/$(var vehicle_id)/robione_sensor_kit/imu_corrector.param.yaml\"/>\n</include>\n<include file=\"$(find-pkg-share autoware_imu_corrector)/launch/gyro_bias_estimator.launch.xml\">\n<arg name=\"input_imu_raw\" value=\"/sensing/gnss/clap/ros/imu\"/>\n<arg name=\"input_twist\" value=\"/sensing/vehicle_velocity_converter/twist_with_covariance\"/>\n<arg name=\"imu_corrector_param_file\" value=\"$(find-pkg-share individual_params)/config/$(var vehicle_id)/robione_sensor_kit/imu_corrector.param.yaml\"/>\n</include>\n</group>\n</launch>\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-sensor-model/#imu-launching","title":"IMU Launching","text":"

    You can add your IMU sensor launch file in the imu.launch.xml file. In the sample_sensor_kit, a Tamagawa IMU is used as the IMU sensor; you can add your own IMU driver instead of the Tamagawa IMU driver. Also, we will launch gyro_bias_estimator and imu_corrector in the imu.launch.xml file; please refer to their documentation for more information. (We added imu_corrector and gyro_bias_estimator to gnss.launch.xml for tutorial_vehicle, so we will not create and use imu.launch.xml for tutorial_vehicle.) Please don't forget to change the imu_raw_name argument to describe your raw IMU topic.

    Here is a sample imu.launch.xml launch file for autoware:

    <launch>\n  <arg name=\"launch_driver\" default=\"true\"/>\n\n  <group>\n    <push-ros-namespace namespace=\"imu\"/>\n\n-     <group>\n-       <push-ros-namespace namespace=\"tamagawa\"/>\n-       <node pkg=\"tamagawa_imu_driver\" name=\"tag_serial_driver\" exec=\"tag_serial_driver\" if=\"$(var launch_driver)\">\n-         <remap from=\"imu/data_raw\" to=\"imu_raw\"/>\n-         <param name=\"port\" value=\"/dev/imu\"/>\n-         <param name=\"imu_frame_id\" value=\"tamagawa/imu_link\"/>\n-       </node>\n-     </group>\n\n+     <group>\n+       <push-ros-namespace namespace=\"<YOUR-IMU_MODEL>\"/>\n+       <node pkg=\"<YOUR-IMU-DRIVER-PACKAGE>\" name=\"<YOUR-IMU-DRIVER>\" exec=\"<YOUR-IMU-DRIVER-EXECUTIBLE>\" if=\"$(var launch_driver)\">\n+       <!-- Add necessary params here -->\n+       </node>\n+     </group>\n\n-   <arg name=\"imu_raw_name\" default=\"tamagawa/imu_raw\"/>\n+   <arg name=\"imu_raw_name\" default=\"<YOUR-IMU_MODEL/YOUR-RAW-IMU-TOPIC>\"/>\n   <arg name=\"imu_corrector_param_file\" default=\"$(find-pkg-share individual_params)/config/$(var vehicle_id)/sample_sensor_kit/imu_corrector.param.yaml\"/>\n    <include file=\"$(find-pkg-share autoware_imu_corrector)/launch/imu_corrector.launch.xml\">\n      <arg name=\"input_topic\" value=\"$(var imu_raw_name)\"/>\n      <arg name=\"output_topic\" value=\"imu_data\"/>\n      <arg name=\"param_file\" value=\"$(var imu_corrector_param_file)\"/>\n    </include>\n\n    <include file=\"$(find-pkg-share autoware_imu_corrector)/launch/gyro_bias_estimator.launch.xml\">\n      <arg name=\"input_imu_raw\" value=\"$(var imu_raw_name)\"/>\n      <arg name=\"input_twist\" value=\"/sensing/vehicle_velocity_converter/twist_with_covariance\"/>\n      <arg name=\"imu_corrector_param_file\" value=\"$(var imu_corrector_param_file)\"/>\n    </include>\n  </group>\n</launch>\n

    Please make the necessary modifications to this file according to your IMU driver. Since there is no dedicated IMU sensor on tutorial_vehicle, we will remove its launch from sensing.launch.xml.

    -   <!-- IMU Driver -->\n-   <include file=\"$(find-pkg-share tutorial_vehicle_sensor_kit_launch)/launch/imu.launch.xml\">\n-     <arg name=\"launch_driver\" value=\"$(var launch_driver)\"/>\n-   </include>\n

    You can add or remove launch files in sensing.launch.xml according to your sensor architecture.

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-vehicle-model/","title":"Creating a vehicle model for Autoware","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-vehicle-model/#creating-a-vehicle-model-for-autoware","title":"Creating a vehicle model for Autoware","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-vehicle-model/#introduction","title":"Introduction","text":"

    This page introduces the following packages for the vehicle model:

    1. <YOUR-VEHICLE-NAME>_vehicle_description
    2. <YOUR-VEHICLE-NAME>_vehicle_launch

    Previously, we forked our vehicle model at the creating autoware repositories page step. For instance, we created tutorial_vehicle_launch as an implementation example for the said step. Please ensure that the _vehicle_launch repository is included in Autoware, following the directory structure below:

    <YOUR-OWN-AUTOWARE-DIR>/\n  \u2514\u2500 src/\n       \u2514\u2500 vehicle/\n            \u2514\u2500 <YOUR-VEHICLE-NAME>_vehicle_launch/\n                 \u251c\u2500 <YOUR-VEHICLE-NAME>_vehicle_description/\n                 \u2514\u2500 <YOUR-VEHICLE-NAME>_vehicle_launch/\n

    If your forked Autoware meta-repository doesn't include <YOUR-VEHICLE-NAME>_vehicle_launch with the correct folder structure as shown above, please add your forked <YOUR-VEHICLE-NAME>_vehicle_launch repository to the autoware.repos file and run the vcs import src < autoware.repos command in your terminal to import the newly included repositories at autoware.repos file.

    Now, we are ready to modify the following vehicle model packages for our vehicle. Firstly, we need to rename the description and launch packages:

    <YOUR-VEHICLE-NAME>_vehicle_launch/\n- \u251c\u2500 sample_vehicle_description/\n+ \u251c\u2500 <YOUR-VEHICLE-NAME>_vehicle_description/\n- \u2514\u2500 sample_vehicle_launch/\n+ \u2514\u2500 <YOUR-VEHICLE-NAME>_vehicle_launch/\n

    After that, we will change our package names in the package.xml file and CMakeLists.txt file of the sample_vehicle_description and sample_vehicle_launch packages. So, open the package.xml file and CMakeLists.txt file with any text editor or IDE of your preference and perform the following changes:

    Change the <name> attribute at package.xml file:

    <package format=\"3\">\n- <name>sample_vehicle_description</name>\n+ <name><YOUR-VEHICLE-NAME>_vehicle_description</name>\n <version>0.1.0</version>\n  <description>The vehicle_description package</description>\n  ...\n  ...\n

    Change the project() name in the CMakeLists.txt file:

      cmake_minimum_required(VERSION 3.5)\n- project(sample_vehicle_description)\n+ project(<YOUR-VEHICLE-NAME>_vehicle_description)\n\n find_package(ament_cmake_auto REQUIRED)\n...\n...\n

    Remember to apply the name changes and the project() name for BOTH the <YOUR-VEHICLE-NAME>_vehicle_description and <YOUR-VEHICLE-NAME>_vehicle_launch ROS 2 packages. Once finished, we can proceed to build said packages:

    colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release --packages-up-to <YOUR-VEHICLE-NAME>_vehicle_description <YOUR-VEHICLE-NAME>_vehicle_launch\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-vehicle-model/#vehicle-description","title":"Vehicle description","text":"

    The main purpose of this package is to describe the vehicle dimensions, the 3D model of the vehicle, the mirror dimensions of the vehicle, the simulator model parameters, and the URDF of the vehicle.

    The folder structure of vehicle_description package is:

    <YOUR-VEHICLE-NAME>_vehicle_description/\n   \u251c\u2500 config/\n   \u2502     \u251c\u2500 mirror.param.yaml\n   \u2502     \u251c\u2500 simulator_model.param.yaml\n   \u2502     \u2514\u2500 vehicle_info.param.yaml\n   \u251c\u2500 mesh/\n   \u2502     \u251c\u2500 <YOUR-VEHICLE-MESH-FILE>.dae (or .fbx)\n   \u2502     \u251c\u2500 ...\n   \u2514\u2500 urdf/\n         \u2514\u2500 vehicle.xacro\n

    Now, we will modify these files according to our vehicle design.

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-vehicle-model/#mirrorparamyaml","title":"mirror.param.yaml","text":"

    This file describes your vehicle's mirror dimensions for the CropBox filter of the pointcloud preprocessor. This is important for cropping the mirrors out of your lidar's point cloud.

    The mirror.param.yaml consists of the following parameters:

    /**:\nros__parameters:\nmin_longitudinal_offset: 0.0\nmax_longitudinal_offset: 0.0\nmin_lateral_offset: 0.0\nmax_lateral_offset: 0.0\nmin_height_offset: 0.0\nmax_height_offset: 0.0\n

    The mirror param file should be filled with this dimension information. Please be careful with the min_lateral_offset parameter; it can be a negative value, as shown in the mirror dimension figure below (an illustrative filled-in example is also given after the warning below).

    Dimension demonstration for mirror.param.yaml

    Warning

    Since there are no mirrors on tutorial_vehicle, all values are set to 0.0. If your vehicle does not have mirrors, you can set these values to 0.0 as well.
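    For a vehicle that does have side mirrors, a filled-in file might look like the sketch below; the numbers are illustrative placeholders only, not measurements of any particular vehicle (note the negative min_lateral_offset, as mentioned above):

    /**:\n  ros__parameters:\n    min_longitudinal_offset: 1.3\n    max_longitudinal_offset: 1.6\n    min_lateral_offset: -1.1\n    max_lateral_offset: 1.1\n    min_height_offset: 1.0\n    max_height_offset: 1.3\n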

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-vehicle-model/#simulator_modelparamyaml","title":"simulator_model.param.yaml","text":"

    This file is a configuration file for the simulator environment. Please update these parameters according to your vehicle specifications. For detailed information about variables, please check the simple_planning_simulator package. The file consists of these parameters:

    /**:\nros__parameters:\nsimulated_frame_id: \"base_link\" # center of the rear axle.\norigin_frame_id: \"map\"\nvehicle_model_type: \"DELAY_STEER_ACC_GEARED\" # options: IDEAL_STEER_VEL / IDEAL_STEER_ACC / IDEAL_STEER_ACC_GEARED / DELAY_STEER_ACC / DELAY_STEER_ACC_GEARED\ninitialize_source: \"INITIAL_POSE_TOPIC\" #  options: ORIGIN / INITIAL_POSE_TOPIC\ntimer_sampling_time_ms: 25\nadd_measurement_noise: False # the Gaussian noise is added to the simulated results\nvel_lim: 50.0 # limit of velocity\nvel_rate_lim: 7.0 # limit of acceleration\nsteer_lim: 1.0 # limit of steering angle\nsteer_rate_lim: 5.0 # limit of steering angle change rate\nacc_time_delay: 0.1 # dead time for the acceleration input\nacc_time_constant: 0.1 # time constant of the 1st-order acceleration dynamics\nsteer_time_delay: 0.24 # dead time for the steering input\nsteer_time_constant: 0.27 # time constant of the 1st-order steering dynamics\nx_stddev: 0.0001 # x standard deviation for dummy covariance in map coordinate\ny_stddev: 0.0001 # y standard deviation for dummy covariance in map coordinate\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-vehicle-model/#vehicle_infoparamyaml","title":"vehicle_info.param.yaml","text":"

    This file stores the vehicle dimensions for Autoware modules. Please update it with your vehicle information. You can refer to the vehicle dimensions page for detailed dimension demonstration. Here is the vehicle_info.param.yaml for sample_vehicle:

    /**:\nros__parameters:\nwheel_radius: 0.383 # The radius of the wheel, primarily used for dead reckoning.\nwheel_width: 0.235 # The lateral width of a wheel tire, primarily used for dead reckoning.\nwheel_base: 2.79 # between front wheel center and rear wheel center\nwheel_tread: 1.64 # between left wheel center and right wheel center\nfront_overhang: 1.0 # between front wheel center and vehicle front\nrear_overhang: 1.1 # between rear wheel center and vehicle rear\nleft_overhang: 0.128 # between left wheel center and vehicle left\nright_overhang: 0.128 # between right wheel center and vehicle right\nvehicle_height: 2.5\nmax_steer_angle: 0.70 # [rad]\n

    Please update vehicle_info.param.yaml with your vehicle information.

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-vehicle-model/#3d-model-of-vehicle","title":"3D model of vehicle","text":"

    You can use the .fbx or .dae format as a 3D model with Autoware. For the tutorial_vehicle, we exported our 3D model as a .fbx file in the tutorial_vehicle_launch repository. We will set the .fbx file path in the vehicle.xacro file.

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-vehicle-model/#vehiclexacro","title":"vehicle.xacro","text":"

    This .xacro file links the base_link of the vehicle to the 3D mesh. Therefore, we need to make some modifications in this file.

    <?xml version=\"1.0\"?>\n<robot xmlns:xacro=\"http://ros.org/wiki/xacro\">\n  <!-- load parameter -->\n- <xacro:property name=\"vehicle_info\" value=\"${xacro.load_yaml('$(find sample_vehicle_description)/config/vehicle_info.param.yaml')}\"/>\n+ <xacro:property name=\"vehicle_info\" value=\"${xacro.load_yaml('$(find <YOUR-VEHICLE-NAME>_vehicle_description)/config/vehicle_info.param.yaml')}\"/>\n\n <!-- vehicle body -->\n  <link name=\"base_link\">\n    <visual>\n      <origin xyz=\"${vehicle_info['/**']['ros__parameters']['wheel_base']/2.0} 0 0\" rpy=\"${pi/2.0} 0 ${pi}\"/>\n      <geometry>\n-       <mesh filename=\"package://sample_vehicle_description/mesh/lexus.dae\" scale=\"1 1 1\"/>\n+       <mesh filename=\"package://<YOUR-VEHICLE-NAME>_vehicle_description/mesh/<YOUR-3D-MESH-FILE>\" scale=\"1 1 1\"/>\n     </geometry>\n    </visual>\n  </link>\n</robot>\n

    You can also modify the roll, pitch, yaw, x, y, z, and scale values to set the correct position and orientation of the vehicle (see the illustrative snippet below).
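    As a purely illustrative sketch (the numbers are placeholders, not measurements of any vehicle), shifting the mesh forward and down while shrinking it slightly could look like this:

    <origin xyz=\"1.0 0.0 -0.1\" rpy=\"${pi/2.0} 0 ${pi}\"/>\n<geometry>\n  <mesh filename=\"package://<YOUR-VEHICLE-NAME>_vehicle_description/mesh/<YOUR-3D-MESH-FILE>\" scale=\"0.9 0.9 0.9\"/>\n</geometry>\n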

    Please build the vehicle_description package after completing your modifications to the _vehicle_description package.

    cd <YOUR-AUTOWARE-DIR>\ncolcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release --packages-up-to <YOUR-VEHICLE-NAME>_vehicle_description <YOUR-VEHICLE-NAME>_vehicle_launch\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-vehicle-model/#launching-vehicle-interface","title":"Launching vehicle interface","text":"

    If your vehicle interface is ready, then you can add your vehicle_interface launch file in vehicle_interface.launch.xml. Please check the creating vehicle interface page for more info.

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-and-sensor-model/creating-vehicle-model/#launch-planning-simulator-with-your-own-vehicle","title":"Launch planning simulator with your own vehicle","text":"

    After completing the sensor_model, individual_parameters and vehicle model of your vehicle, you are ready to launch the planning simulator with your own vehicle. If you are not sure if every custom package in your Autoware project folder is built, please build all packages:

    cd <YOUR-AUTOWARE-DIR>\ncolcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release\n

    To launch the planning simulator, source the install/setup.bash file in your Autoware project folder and run this command in your terminal:

    ros2 launch autoware_launch planning_simulator.launch.xml map_path:=$HOME/Files/autoware_map/sample-map-planning/ vehicle_model:=<YOUR-VEHICLE-MODEL> sensor_model:=<YOUR-SENSOR-KIT> vehicle_id:=<YOUR-VEHICLE-ID>\n

    For example, if we try planning simulator with the tutorial_vehicle:

    ros2 launch autoware_launch planning_simulator.launch.xml map_path:=$HOME/Files/autoware_map/sample-map-planning/ vehicle_model:=tutorial_vehicle sensor_model:=tutorial_vehicle_sensor_kit vehicle_id:=tutorial_vehicle\n

    The planning simulator will open, and you can give an initial pose to your vehicle using the 2D Pose Estimate button or by pressing the P key on your keyboard. You can click anywhere on the map to initialize the vehicle.

    Our tutorial_vehicle on rviz with TF data"},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/ackermann-kinematic-model/","title":"Ackermann kinematic model","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/ackermann-kinematic-model/#ackermann-kinematic-model","title":"Ackermann kinematic model","text":"

    Autoware now supports control inputs for vehicles based on an Ackermann kinematic model. This section gives a brief overview of the Ackermann kinematic model and explains how Autoware controls it. Please remember that Autoware's control output (/control/command/control_cmd) publishes lateral and longitudinal commands according to the Ackermann kinematic model.

    • If your vehicle does not suit the Ackermann kinematic model, you have to modify the control commands. Another document gives an example of how to convert Ackermann kinematic model control inputs into a differential drive model.
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/ackermann-kinematic-model/#geometry","title":"Geometry","text":"

    The basic style of the Ackermann kinematic model has four wheels with an Ackermann link on the front, and it is powered by the rear wheels. The key point of the Ackermann kinematic model is that the axes of all wheels intersect at the same point, which means all wheels trace circular trajectories with different radii but a common center point (see the figure below). Therefore, this model has the great advantage of minimizing wheel slippage and preventing premature tire wear.

    In general, the Ackermann kinematic model accepts the longitudinal speed \(v\) and the steering angle \(\phi\) as inputs. In Autoware, \(\phi\) is positive if it is steered counterclockwise, so the steering angle in the figure below is actually negative.
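    For reference, a standard consequence of this model (a textbook kinematic relation, not an Autoware-specific definition) is that the rear-axle center follows a circle whose radius \(R\) is determined by the wheelbase \(L\) and the steering angle \(\phi\), while the outer wheels follow larger circles around the same center:

    \[ R = \frac{L}{\tan(\phi)} \]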

    The basic style of an Ackermann kinematic model. The left figure shows a vehicle facing straight forward, while the right figure shows a vehicle steering to the right."},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/ackermann-kinematic-model/#control","title":"Control","text":"

    Autoware publishes a ROS 2 topic named control_cmd from several types of publishers. A control_cmd topic is an AckermannControlCommand type message that contains

    AckermannControlCommand
      builtin_interfaces/Time stamp\n  autoware_auto_control_msgs/AckermannLateralCommand lateral\n  autoware_auto_control_msgs/LongitudinalCommand longitudinal\n

    where,

    AckermannLateralCommand
      builtin_interfaces/Time stamp\n  float32 steering_tire_angle\n  float32 steering_tire_rotation_rate\n
    LongitudinalCommand
      builtin_interfaces/Time stamp\n  float32 speed\n  float32 acceleration\n  float32 jerk\n

    See the AckermannLateralCommand.idl and LongitudinalCommand.idl for details.

    The vehicle interface should realize these control commands through your vehicle's control device.

    Moreover, Autoware also provides brake commands, light commands, and more (see vehicle interface design), so the vehicle interface module should be applicable to these commands as long as there are devices available to handle them.

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/creating-vehicle-interface/","title":"Creating vehicle interface","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/creating-vehicle-interface/#creating-vehicle-interface","title":"Creating vehicle interface","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/creating-vehicle-interface/#how-to-implement-a-vehicle-interface","title":"How to implement a vehicle interface","text":"

    The following instructions describe how to create a vehicle interface.

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/creating-vehicle-interface/#1-create-a-directory-for-vehicle-interface","title":"1. Create a directory for vehicle interface","text":"

    It is recommended to create your vehicle interface at <your-autoware-dir>/src/vehicle/external

    cd <your-autoware-dir>/src/vehicle/external\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/creating-vehicle-interface/#2-install-or-implement-your-own-vehicle-interface","title":"2. Install or implement your own vehicle interface","text":"

    If there is already a complete vehicle interface package (like pacmod_interface), you can install it in your environment. If not, you have to implement your own vehicle interface. Let's create a new package with ros2 pkg create. The following example shows how to create a vehicle interface package named my_vehicle_interface.

    ros2 pkg create --build-type ament_cmake my_vehicle_interface\n

    Then, you should write your implementation of vehicle interface in my_vehicle_interface/src. Again, since this implementation is so specific to the control device of your vehicle, it is beyond the scope of this document to describe how to implement your vehicle interface in detail. Here are some factors that might be considered:

    • Some necessary subscriptions to control command topics from Autoware for controlling your vehicle:
      • /control/command/control_cmd (autoware_auto_control_msgs/msg/AckermannControlCommand): The main control command for the vehicle, containing the steering tire angle, speed, acceleration, etc.
      • /control/command/gear_cmd (autoware_auto_vehicle_msgs/msg/GearCommand): The gear command for autonomous driving; please check the message definition of this type to understand the gear values.
      • /control/current_gate_mode (tier4_control_msgs/msg/GateMode): Describes whether the vehicle is under Autoware control or not; please check the GateMode message type for detailed information.
      • /control/command/emergency_cmd (tier4_vehicle_msgs/msg/VehicleEmergencyStamped): Sends an emergency command when Autoware is in the emergency state; please check the VehicleEmergencyStamped message type for detailed information.
      • /control/command/turn_indicators_cmd (autoware_auto_vehicle_msgs/msg/TurnIndicatorsCommand): The turn indicator command for your vehicle; please check the TurnIndicatorsCommand message type for detailed information.
      • /control/command/hazard_lights_cmd (autoware_auto_vehicle_msgs/msg/HazardLightsCommand): The hazard lights command; please check the HazardLightsCommand message type for detailed information.
      • /control/command/actuation_cmd (tier4_vehicle_msgs/msg/ActuationCommandStamped): This topic is used when you control your vehicle with Type B (mentioned in the vehicle interface section) via the raw_vehicle_cmd_converter; in that case it carries the gas, brake, and steering-wheel actuation commands. Please check the ActuationCommandStamped message type for detailed information.
      • etc.
    • Some necessary publications of vehicle status topics from the vehicle interface to Autoware:
      • /vehicle/status/battery_charge (tier4_vehicle_msgs/msg/BatteryStatus): Battery information; please check the BatteryStatus message type for detailed information. You can also use this value to describe the fuel level, etc.
      • /vehicle/status/control_mode (autoware_auto_vehicle_msgs/msg/ControlModeReport): The current control mode of the vehicle; please check the ControlModeReport message type for detailed information.
      • /vehicle/status/gear_status (autoware_auto_vehicle_msgs/msg/GearReport): The current gear status of the vehicle; please check the GearReport message type for detailed information.
      • /vehicle/status/hazard_lights_status (autoware_auto_vehicle_msgs/msg/HazardLightsReport): The hazard lights status of the vehicle; please check the HazardLightsReport message type for detailed information.
      • /vehicle/status/turn_indicators_status (autoware_auto_vehicle_msgs/msg/TurnIndicatorsReport): The turn indicators status of the vehicle; please check the TurnIndicatorsReport message type for detailed information.
      • /vehicle/status/steering_status (autoware_auto_vehicle_msgs/msg/SteeringReport): The steering status of the vehicle; please check the SteeringReport message type for detailed information.
      • /vehicle/status/velocity_status (autoware_auto_vehicle_msgs/msg/VelocityReport): The velocity status of the vehicle; please check the VelocityReport message type for detailed information.
      • etc.

    The diagram below is an example of the communication between the vehicle interface and Autoware, describing sample topics and message types.

    Sample demonstration of vehicle and Autoware communication. Some topics and types are included in this diagram; they may change depending on your desired control commands or future Autoware updates.

    You must create subscribers and publishers for these topics in your vehicle interface. Let's explain with a simple demonstration of subscribing to the /control/command/control_cmd topic and publishing the /vehicle/status/gear_status topic.

    So, your YOUR-OWN-VEHICLE-INTERFACE.hpp header file should be like this:

    ...\n#include <autoware_auto_control_msgs/msg/ackermann_control_command.hpp>\n#include <autoware_auto_vehicle_msgs/msg/gear_report.hpp>\n...\n\nclass <YOUR-OWN-INTERFACE> : public rclcpp::Node\n{\npublic:\n...\nprivate:\n...\n// from autoware\nrclcpp::Subscription<autoware_auto_control_msgs::msg::AckermannControlCommand>::SharedPtr\ncontrol_cmd_sub_;\n...\n// from vehicle\nrclcpp::Publisher<autoware_auto_vehicle_msgs::msg::GearReport>::SharedPtr gear_status_pub_;\n...\n// autoware command messages\n...\nautoware_auto_control_msgs::msg::AckermannControlCommand::ConstSharedPtr control_cmd_ptr_;\n...\n// callbacks\n...\nvoid callback_control_cmd(\nconst autoware_auto_control_msgs::msg::AckermannControlCommand::ConstSharedPtr msg);\n...\nvoid to_vehicle();\nvoid to_autoware();\n};\n

    And your YOUR-OWN-VEHICLE-INTERFACE.cpp source file should look like this:

    #include <YOUR-OWN-VEHICLE-INTERFACE>/<YOUR-OWN-VEHICLE-INTERFACE>.hpp>\n...\n\n<YOUR-OWN-VEHICLE-INTERFACE>::<YOUR-OWN-VEHICLE-INTERFACE>()\n: Node(\"<YOUR-OWN-VEHICLE-INTERFACE>\")\n{\n...\n/* subscribers */\nusing std::placeholders::_1;\n// from autoware\ncontrol_cmd_sub_ = create_subscription<autoware_auto_control_msgs::msg::AckermannControlCommand>(\n\"/control/command/control_cmd\", 1, std::bind(&<YOUR-OWN-VEHICLE-INTERFACE>::callback_control_cmd, this, _1));\n...\n// to autoware\ngear_status_pub_ = create_publisher<autoware_auto_vehicle_msgs::msg::GearReport>(\n\"/vehicle/status/gear_status\", rclcpp::QoS{1});\n...\n}\n\nvoid <YOUR-OWN-VEHICLE-INTERFACE>::callback_control_cmd(\nconst autoware_auto_control_msgs::msg::AckermannControlCommand::ConstSharedPtr msg)\n{\ncontrol_cmd_ptr_ = msg;\n}\n\nvoid <YOUR-OWN-VEHICLE-INTERFACE>::to_vehicle()\n{\n...\n// you should implement this structure according to your own vehicle design\ncontrol_command_to_vehicle(control_cmd_ptr_);\n...\n}\n\nvoid <YOUR-OWN-VEHICLE-INTERFACE>::to_autoware()\n{\n...\n// you should implement this structure according to your own vehicle design\nautoware_auto_vehicle_msgs::msg::GearReport gear_report_msg;\nconvert_gear_status_to_autoware_msg(gear_report_msg);\ngear_status_pub_->publish(gear_report_msg);\n...\n}\n
    • Modification of control values if needed
      • In some cases, you may need to modify the control commands. For example, Autoware expects vehicle velocity information in m/s units, but if your vehicle publishes it in a different unit (e.g., km/h), you must convert it before sending it to Autoware (a minimal sketch is given below).
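      As a minimal sketch of such a conversion when publishing the velocity status (the vehicle_speed_kmph_ variable and the publisher member are placeholders for your own driver code; the field layout follows the autoware_auto_vehicle_msgs/msg/VelocityReport type listed above):

        // minimal sketch: convert a vehicle-reported speed in km/h to the m/s value expected by Autoware\nconstexpr double kmph_to_mps = 1000.0 / 3600.0;\n\nautoware_auto_vehicle_msgs::msg::VelocityReport velocity_report_msg;\nvelocity_report_msg.header.stamp = get_clock()->now();\nvelocity_report_msg.header.frame_id = \"base_link\";\n// vehicle_speed_kmph_ is a placeholder for the value read from your vehicle\nvelocity_report_msg.longitudinal_velocity = static_cast<float>(vehicle_speed_kmph_ * kmph_to_mps);\nvelocity_status_pub_->publish(velocity_report_msg);\n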
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/creating-vehicle-interface/#3-prepare-a-launch-file","title":"3. Prepare a launch file","text":"

    After you implement your vehicle interface, or when you want to debug it by launching it, create a launch file for your vehicle interface and include it in vehicle_interface.launch.xml, which is included in the <VEHICLE_ID>_vehicle_launch package that we forked and created on the creating vehicle and sensor model page.

    Do not get confused: first, you need to create a launch file for your own vehicle interface module (like my_vehicle_interface.launch.xml), and then include it in vehicle_interface.launch.xml, which exists in another directory. Here are the details.

    1. Add a launch directory in the my_vehicle_interface directory, and create a launch file of your own vehicle interface in it. Take a look at Creating a launch file in the ROS 2 documentation.

    2. Include the launch file you created for your vehicle interface in <YOUR-VEHICLE-NAME>_launch/<YOUR-VEHICLE-NAME>_launch/launch/vehicle_interface.launch.xml by opening it and adding the include terms as below.

    vehicle_interface.launch.xml
    <?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<launch>\n<arg name=\"vehicle_id\" default=\"$(env VEHICLE_ID default)\"/>\n<!-- please add your created vehicle interface launch file -->\n<include file=\"$(find-pkg-share my_vehicle_interface)/launch/my_vehicle_interface.launch.xml\">\n</include>\n</launch>\n

    Finally, your directory structure may look like the one below. Most of the files are omitted for clarity, but the files shown here need modification as described in the previous and current steps.

    <your-autoware-dir>/\n\u2514\u2500 src/\n    \u2514\u2500 vehicle/\n        \u251c\u2500 external/\n+       \u2502   \u2514\u2500 <YOUR-VEHICLE-NAME>_interface/\n+       \u2502       \u251c\u2500 src/\n+       \u2502       \u2514\u2500 launch/\n+       \u2502            \u2514\u2500 my_vehicle_interface.launch.xml\n+       \u2514\u2500 <YOUR-VEHICLE-NAME>_launch/ (COPIED FROM sample_vehicle_launch)\n+           \u251c\u2500 <YOUR-VEHICLE-NAME>_launch/\n+           \u2502  \u251c\u2500 launch/\n+           \u2502  \u2502  \u2514\u2500 vehicle_interface.launch.xml\n+           \u2502  \u251c\u2500 CMakeLists.txt\n+           \u2502  \u2514\u2500 package.xml\n+           \u251c\u2500 <YOUR-VEHICLE-NAME>_description/\n+           \u2502  \u251c\u2500 config/\n+           \u2502  \u251c\u2500 mesh/\n+           \u2502  \u251c\u2500 urdf/\n+           \u2502  \u2502  \u2514\u2500 vehicle.xacro\n+           \u2502  \u251c\u2500 CMakeLists.txt\n+           \u2502  \u2514\u2500 package.xml\n+           \u2514\u2500 README.md\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/creating-vehicle-interface/#4-build-the-vehicle-interface-package-and-the-launch-package","title":"4. Build the vehicle interface package and the launch package","text":"

    Build the three packages my_vehicle_interface, <YOUR-VEHICLE-NAME>_launch, and <YOUR-VEHICLE-NAME>_description with colcon build, or you can just build the entire Autoware workspace if you have changed other packages as well.

    colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release --packages-select my_vehicle_interface <YOUR-VEHICLE-NAME>_launch <YOUR-VEHICLE-NAME>_description\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/creating-vehicle-interface/#5-when-you-launch-autoware","title":"5. When you launch Autoware","text":"

    Finally, you are done implementing your vehicle interface module! Note that you need to launch Autoware with the proper vehicle_model option, as in the example below. This example launches the planning simulator.

    ros2 launch autoware_launch planning_simulator.launch.xml map_path:=$HOME/autoware_map/sample-map-planning vehicle_model:=<YOUR-VEHICLE-NAME> sensor_model:=<YOUR-VEHICLE-NAME>_sensor_kit\n
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/creating-vehicle-interface/#tips","title":"Tips","text":"

    There are some tips that may help you.

    • You can subdivide your vehicle interface into smaller packages if you want. Then your directory structure may look like below (not the only way though). Do not forget to launch all packages in my_vehicle_interface.launch.xml.

      <your-autoware-dir>/\n\u2514\u2500 src/\n    \u2514\u2500 vehicle/\n        \u251c\u2500 external/\n        \u2502   \u2514\u2500 my_vehicle_interface/\n        \u2502       \u251c\u2500 src/\n        \u2502       \u2502   \u251c\u2500 package1/\n        \u2502       \u2502   \u251c\u2500 package2/\n        \u2502       \u2502   \u2514\u2500 package3/\n        \u2502       \u2514\u2500 launch/\n        \u2502            \u2514\u2500 my_vehicle_interface.launch.xml\n        \u251c\u2500 sample_vehicle_launch/\n        \u2514\u2500 my_vehicle_name_launch/\n
    • If you are using a vehicle interface and launch package from an open git repository, or created your own as a git repository, it is highly recommended to add those repositories to your autoware.repos file, which is located directly under your autoware folder, like the example below. You can specify the branch or commit hash with the version tag.

      autoware.repos
      # vehicle (this section should be somewhere in autoware.repos and add the below)\nvehicle/external/my_vehicle_interface:\ntype: git\nurl: https://github.com/<repository-name-B>/my_vehicle_interface.git\nversion: main\n

      Then you can easily import your entire environment to another local device by using the vcs import command (see the source installation guide).
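      For example, after adding the entry above, the import command run from your Autoware folder would be:

      cd <your-autoware-dir>\nvcs import src < autoware.repos\n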

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/customizing-for-differential-drive-model/","title":"Customizing for differential drive vehicle","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/customizing-for-differential-drive-model/#customizing-for-differential-drive-vehicle","title":"Customizing for differential drive vehicle","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/customizing-for-differential-drive-model/#1-introduction","title":"1. Introduction","text":"

    Currently, Autoware assumes that vehicles use an Ackermann kinematic model with Ackermann steering. Thus, Autoware adopts the Ackermann command format for the Control module's output (see the AckermannDrive ROS message definition for an overview of Ackermann commands, and the AckermannControlCommands struct used in Autoware for more details).

    However, it is possible to integrate Autoware with a vehicle that follows a differential drive kinematic model, as commonly used by small mobile robots. Differential drive vehicles can have either four wheels or two wheels, as shown in the figure below.

    Sample differential vehicles with four-wheel and two-wheel models."},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/customizing-for-differential-drive-model/#2-procedure","title":"2. Procedure","text":"

    One simple way of using Autoware with a differential drive vehicle is to create a vehicle_interface package that translates Ackermann commands to differential drive commands. Here are two points that you need to consider:

    • Create vehicle_interface package for differential drive vehicle
    • Set an appropriate wheel_base
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/customizing-for-differential-drive-model/#21-create-a-vehicle_interface-package-for-differential-drive-vehicle","title":"2.1 Create a vehicle_interface package for differential drive vehicle","text":"

    An Ackermann command in Autoware consists of two main control inputs:

    • steering angle (\\(\\omega\\))
    • velocity (\\(v\\))

    Conversely, a typical differential drive command consists of the following inputs:

    • left wheel velocity (\\(v_l\\))
    • right wheel velocity (\\(v_r\\))

    So, one way in which an Ackermann command can be converted to a differential drive command is by using the following equations:

    \\[ v_l = v - \\frac{l\\omega}{2}, v_r = v + \\frac{l\\omega}{2} \\]

    where \\(l\\) denotes wheel tread.

    Here is an example C++ snippet for converting Ackermann kinematic model commands to a differential drive model:

    ...\nvoid convert_ackermann_to_differential(\n  const autoware_auto_control_msgs::msg::AckermannControlCommand & ackermann_msg,\n  my_vehicle_msgs::msg::DifferentialCommand & differential_command)\n{\n  differential_command.left_wheel.velocity =\n    ackermann_msg.longitudinal.speed - (ackermann_msg.lateral.steering_tire_angle * my_wheel_tread) / 2;\n  differential_command.right_wheel.velocity =\n    ackermann_msg.longitudinal.speed + (ackermann_msg.lateral.steering_tire_angle * my_wheel_tread) / 2;\n}\n...\n

    For information about other factors that need to be considered when creating a vehicle_interface package, refer to the creating vehicle_interface page.

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/customizing-for-differential-drive-model/#22-set-an-appropriate-wheel_base","title":"2.2 Set an appropriate wheel_base","text":"

    A differential drive robot does not necessarily have front and rear wheels, which means that the wheelbase (the horizontal distance between the axles of the front and rear wheels) cannot be defined. However, Autoware expects wheel_base to be set in vehicle_info.param.yaml with some value. Thus, you need to set a pseudo value for wheel_base.

    The appropriate pseudo value for wheel_base depends on the size of your vehicle. Setting it to the same value as wheel_tread is one possible choice (see the sketch below).
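    As a sketch (the numbers are placeholders), the relevant part of vehicle_info.param.yaml for a differential drive robot with a 0.5 m wheel tread could then look like this:

    /**:\n  ros__parameters:\n    # other vehicle dimensions omitted\n    wheel_base: 0.5 # pseudo value, set equal to wheel_tread\n    wheel_tread: 0.5\n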

    Warning

    • If the wheel_base value is set too small then the vehicle may behave unexpectedly. For example, the vehicle may drive beyond the bounds of a calculated path.
    • Conversely, if wheel_base is set too large, the vehicle's range of motion will be restricted. This is because Autoware's Planning module will calculate an overly conservative trajectory based on the assumed vehicle length.
    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/customizing-for-differential-drive-model/#3-known-issues","title":"3. Known issues","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/customizing-for-differential-drive-model/#motion-model-incompatibility","title":"Motion model incompatibility","text":"

    Since Autoware assumes that vehicles use a steering system, it is not possible to take advantage of the flexibility of a differential drive system's motion model.

    For example, when planning a parking maneuver with the freespace_planner module, Autoware may drive the differential drive vehicle forward and backward, even if the vehicle can be parked with a simpler trajectory that uses pure rotational movement.

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/vehicle-interface/","title":"Vehicle interface overview","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/vehicle-interface/#vehicle-interface","title":"Vehicle interface","text":""},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/vehicle-interface/#what-is-the-vehicle-interface","title":"What is the vehicle interface?","text":"

    The purpose of the vehicle interface package is to convert the control messages calculated by Autoware into a form that the vehicle can understand and transmit them to the vehicle (CAN, serial messages, etc.), and to decode the data coming from the vehicle and publish it through the ROS 2 topics that Autoware expects. So, we can say that the purposes of the vehicle_interface are:

    1. Converting Autoware control commands to a vehicle-specific format. Example control commands of autoware:

      • lateral controls: steering tire angle, steering tire rotation rate
      • longitudinal controls: speed, acceleration, jerk
    2. Converting vehicle status information in a vehicle-specific format to Autoware messages.
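
    For reference (related to item 1 above), here is an illustrative sketch of what the published control command roughly contains when echoed from the command line; the topic name is the one used later on this page, and the field names follow autoware_auto_control_msgs/msg/AckermannControlCommand, but the exact layout may differ between Autoware versions:

    ros2 topic echo /control/command/control_cmd\n# Illustrative output (timestamps omitted):\n# lateral:\n#   steering_tire_angle: 0.05          # [rad]\n#   steering_tire_rotation_rate: 0.1   # [rad/s]\n# longitudinal:\n#   speed: 2.0                         # [m/s]\n#   acceleration: 0.5                  # [m/s^2]\n#   jerk: 0.0                          # [m/s^3]\n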

    Autoware publishes control commands such as:

    • Velocity control
    • Steering control
    • Car light commands
    • etc.

    Then, the vehicle interface converts these commands into actuation signals such as:

    • Motor and brake activation
    • Steering-wheel operation
    • Lighting control
    • etc.

    So think of the vehicle interface as a module that runs the vehicle's control device to realize the input commands provided by Autoware.

    An example of inputs and outputs for vehicle interface

    There are two types of interfaces for controlling your own vehicle:

    1. Target steering and target velocity/acceleration interface. (This will be referred to as Type A in this document.)
    2. Generalized target command interface (i.e., accel/brake pedal, steering torque). (This will be referred to as Type B in this document.)

    For Type A, the control interface encompasses target steering and target velocity/acceleration.

    • With this type, speed control is performed by a desired velocity or acceleration.
    • Steering control is performed by a desired steering angle and/or steering angle velocity.

    On the other hand, Type B, characterized by a generalized target command interface (e.g., accel/brake pedal, steering torque), introduces a more dynamic and adaptable control scheme. In this configuration, the vehicle is controlled directly by the actuation commands output by the raw_vehicle_cmd_converter, allowing for a more intuitive and human-like driving experience.

    If you control your own vehicle this way (i.e., through the gas and brake pedals), you need to use the raw_vehicle_cmd_converter package to convert the Autoware control command output into brake, gas and steer values via conversion maps. In order to do that, you will need brake, gas and steering calibration maps, which you can obtain with the accel_brake_map_calibrator package. Please follow its steps to calibrate your vehicle actuation.
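
    For example, the calibrator is typically started on the vehicle while it is driven manually. The command below is a sketch; please verify the launch file name against the accel_brake_map_calibrator package in your Autoware version:

    ros2 launch accel_brake_map_calibrator accel_brake_map_calibrator.launch.xml rviz:=true\n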

    The choice between these control interfaces profoundly influences the design and development process. If you are planning to use Type A, you will control velocity or acceleration over a drive-by-wire system. If Type B is more suitable for your implementation, you will need to control your vehicle's brake and gas pedals directly.

    "},{"location":"how-to-guides/integrating-autoware/creating-vehicle-interface-package/vehicle-interface/#communication-between-the-vehicle-interface-and-your-vehicles-control-device","title":"Communication between the vehicle interface and your vehicle's control device","text":"
    • If you are planning to drive your vehicle with Autoware, then your vehicle must satisfy the following requirements:
    Type A:
    • Your vehicle can be controlled in the longitudinal direction by the target velocity or acceleration.
    • Your vehicle can be controlled in the lateral direction by the target steering angle. (Optionally, you can use the steering rate as well.)
    • Your vehicle must provide velocity or acceleration information and steering information as described in the vehicle status topics above.

    Type B:
    • Your vehicle can be controlled in the longitudinal direction by the specific target commands (i.e., brake and gas pedal).
    • Your vehicle can be controlled in the lateral direction by the specific target commands (i.e., steering torque).
    • Your vehicle must provide velocity or acceleration information and steering information as described in the vehicle status topics above.
    • You can also mix Type A and Type B; for example, you may want lateral control with a steering angle and longitudinal control with specific target commands. To do that, you must subscribe to /control/command/control_cmd for steering information and to /control/command/actuation_cmd for gas and brake pedal actuation commands, and then handle these messages in your own vehicle_interface design (see the sketch after this list).
    • You must adapt your vehicle's low-level controller to run with Autoware.
    • Your vehicle can be controlled by the target shift mode and also needs to provide Autoware with shift information.
    • If you are using CAN communication on your vehicle, please refer to ros2_socketcan package by Autoware.
    • If you are using Serial communication on your vehicle, you can look at serial-port.
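
    As referenced in the mixed Type A / Type B item above, the following is a minimal, hedged sketch of a vehicle interface node that subscribes to both topics. The actuation message type tier4_vehicle_msgs::msg::ActuationCommandStamped and its fields are assumptions to be checked against your Autoware version, and send_steering_angle / send_pedal_commands are hypothetical stand-ins for your own vehicle communication layer:

    #include <memory>\n#include \"rclcpp/rclcpp.hpp\"\n#include \"autoware_auto_control_msgs/msg/ackermann_control_command.hpp\"\n// Assumed header path for the actuation command; check the message package of your Autoware version.\n#include \"tier4_vehicle_msgs/msg/actuation_command_stamped.hpp\"\n\n// Hedged sketch of a mixed Type A / Type B vehicle interface node.\nclass MixedVehicleInterface : public rclcpp::Node\n{\npublic:\n  MixedVehicleInterface() : rclcpp::Node(\"mixed_vehicle_interface\")\n  {\n    // Type A part: lateral control from the target steering angle.\n    control_cmd_sub_ = create_subscription<autoware_auto_control_msgs::msg::AckermannControlCommand>(\n      \"/control/command/control_cmd\", 1,\n      [this](const autoware_auto_control_msgs::msg::AckermannControlCommand::SharedPtr msg) {\n        send_steering_angle(msg->lateral.steering_tire_angle);\n      });\n    // Type B part: longitudinal control from the accel/brake pedal actuation commands.\n    actuation_cmd_sub_ = create_subscription<tier4_vehicle_msgs::msg::ActuationCommandStamped>(\n      \"/control/command/actuation_cmd\", 1,\n      [this](const tier4_vehicle_msgs::msg::ActuationCommandStamped::SharedPtr msg) {\n        send_pedal_commands(msg->actuation.accel_cmd, msg->actuation.brake_cmd);\n      });\n  }\n\nprivate:\n  // Hypothetical functions representing your own vehicle communication layer (e.g. CAN or serial).\n  void send_steering_angle(double /*angle*/) {}\n  void send_pedal_commands(double /*accel*/, double /*brake*/) {}\n\n  rclcpp::Subscription<autoware_auto_control_msgs::msg::AckermannControlCommand>::SharedPtr control_cmd_sub_;\n  rclcpp::Subscription<tier4_vehicle_msgs::msg::ActuationCommandStamped>::SharedPtr actuation_cmd_sub_;\n};\n\nint main(int argc, char ** argv)\n{\n  rclcpp::init(argc, argv);\n  rclcpp::spin(std::make_shared<MixedVehicleInterface>());\n  rclcpp::shutdown();\n  return 0;\n}\n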
    "},{"location":"how-to-guides/integrating-autoware/creating-your-autoware-repositories/creating-autoware-repositories/","title":"Creating Autoware repositories","text":""},{"location":"how-to-guides/integrating-autoware/creating-your-autoware-repositories/creating-autoware-repositories/#creating-autoware-repositories","title":"Creating Autoware repositories","text":""},{"location":"how-to-guides/integrating-autoware/creating-your-autoware-repositories/creating-autoware-repositories/#what-is-a-meta-repository","title":"What is a Meta-repository?","text":"

    A meta-repository is a repository that manages multiple repositories, and Autoware is one of them. It serves as a centralized control point for referencing, configuring, and versioning other repositories. To accomplish this, the Autoware meta-repository includes the autoware.repos file for managing multiple repositories. We will use the VCS tool (Version Control System) to handle the .repos file. VCS provides us with the capability to import, export, and pull from multiple repositories. VCS will be used to import all the necessary repositories to build Autoware into our workspace. Please refer to the documentation for VCS and .repos file usage.
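
    For reference, autoware.repos is a YAML file that lists each repository with its path in the workspace, type, URL and version. A minimal, illustrative sketch (the entries below are examples only; treat the file in your fork as the source of truth):

    repositories:\n  core/autoware_msgs:\n    type: git\n    url: https://github.com/autowarefoundation/autoware_msgs.git\n    version: main\n  universe/autoware.universe:\n    type: git\n    url: https://github.com/autowarefoundation/autoware.universe.git\n    version: main\n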

    "},{"location":"how-to-guides/integrating-autoware/creating-your-autoware-repositories/creating-autoware-repositories/#how-to-create-and-customize-your-autoware-meta-repository","title":"How to create and customize your autoware meta-repository","text":""},{"location":"how-to-guides/integrating-autoware/creating-your-autoware-repositories/creating-autoware-repositories/#1-create-autoware-meta-repository","title":"1. Create autoware meta-repository","text":"

    If you want to integrate Autoware into your vehicle, the first step is to create an Autoware meta-repository.

    One easy way is to fork the Autoware repository and clone it. (For instructions on how to fork a repository, refer to GitHub Docs)

    • In this guide, Autoware will be integrated into a tutorial_vehicle (Note: when setting up multiple types of vehicles, adding a suffix like autoware.vehicle_A or autoware.vehicle_B is recommended). For the first step, please visit the autoware repository and click the fork button. The fork process should look like this:

    Sample forking demonstration for tutorial_vehicle

    Then click the \"Create fork\" button to continue. After that, we can clone our forked repository to our local system.

    git clone https://github.com/YOUR_NAME/autoware.<YOUR-VEHICLE>.git\n

    For example, it should be for our documentation:

    git clone https://github.com/leo-drive/autoware.tutorial_vehicle.git\n
    "},{"location":"how-to-guides/integrating-autoware/creating-your-autoware-repositories/creating-autoware-repositories/#11-create-vehicle-individual-repositories","title":"1.1 Create vehicle individual repositories","text":"

    To integrate Autoware into your individual vehicles, you need to fork and modify the following repositories as well:

    • sample_sensor_kit: This repository will be used for the sensing launch files, their pipeline organization, and the sensor descriptions. Please fork and rename it, as you did with the Autoware meta-repository. At this point, our forked repository name will be tutorial_vehicle_sensor_kit_launch.
    • sample_vehicle_launch: This repository will be used for vehicle launch files, vehicle_descriptions and vehicle_model. Please fork and rename this repository as well. At this point, our forked repository name will be tutorial_vehicle_launch.
    • autoware_individual_params: This repository stores parameters that change depending on each vehicle (i.e. sensor calibrations). Please fork and rename this repository as well; our forked repository name will be tutorial_vehicle_individual_params.
    • autoware_launch: This repository contains node configurations and their parameters for Autoware. Please fork and rename it as the previously forked repositories; our forked repository name will be autoware_launch.tutorial_vehicle.
    "},{"location":"how-to-guides/integrating-autoware/creating-your-autoware-repositories/creating-autoware-repositories/#2-customize-autowarerepos-for-your-environment","title":"2. Customize autoware.repos for your environment","text":"

    You need to customize your autoware.repos to import your forked repositories. The autoware.repos file usually includes information for all necessary Autoware repositories (except calibration and simulator repositories). Therefore, your forked repositories should also be added to this file.

    "},{"location":"how-to-guides/integrating-autoware/creating-your-autoware-repositories/creating-autoware-repositories/#21-adding-individual-repos-to-autowarerepos","title":"2.1 Adding individual repos to autoware.repos","text":"

    After forking all repositories, you can start adding them to your Autoware meta-repository by opening the autoware.repos file using any text editor and updating sample_sensor_kit_launch, sample_vehicle_launch, autoware_individual_params and autoware_launch with your own individual repos. For example, in this tutorial, the necessary changes for our forked tutorial_vehicle repositories should be as follows:

    • Sensor Kit:

      - sensor_kit/sample_sensor_kit_launch:\n-   type: git\n-   url: https://github.com/autowarefoundation/sample_sensor_kit_launch.git\n-   version: main\n+ sensor_kit/tutorial_vehicle_sensor_kit_launch:\n+   type: git\n+   url: https://github.com/leo-drive/tutorial_vehicle_sensor_kit_launch.git\n+   version: main\n
    • Vehicle Launch:

      - vehicle/sample_vehicle_launch:\n-   type: git\n-   url: https://github.com/autowarefoundation/sample_vehicle_launch.git\n-   version: main\n+ vehicle/tutorial_vehicle_launch:\n+   type: git\n+   url: https://github.com/leo-drive/tutorial_vehicle_launch.git\n+   version: main\n
    • Individual Params:

      - param/autoware_individual_params:\n-   type: git\n-   url: https://github.com/autowarefoundation/autoware_individual_params.git\n-   version: main\n+ param/tutorial_vehicle_individual_params:\n+   type: git\n+   url: https://github.com/leo-drive/tutorial_vehicle_individual_params.git\n+   version: main\n
    • Autoware Launch:

      - launcher/autoware_launch:\n-   type: git\n-   url: https://github.com/autowarefoundation/autoware_launch.git\n-   version: main\n+ launcher/autoware_launch.tutorial_vehicle:\n+   type: git\n+   url: https://github.com/leo-drive/autoware_launch.tutorial_vehicle.git\n+   version: main\n

    Please make similar changes to your own autoware.repos file. After making these changes, you will be ready to use VCS to import all the necessary repositories into your Autoware workspace.

    First, create a src directory under your own Autoware meta-repository directory:

    cd <YOUR-AUTOWARE-DIR>\nmkdir src\n

    Then, import all necessary repositories with vcs:

    cd <YOUR-AUTOWARE-DIR>\nvcs import src < autoware.repos\n

    After running the vcs import command, all Autoware repositories will be cloned into the src folder under the Autoware directory.

    Now, you can build your own repository with the colcon build command:

    cd <YOUR-AUTOWARE-DIR>\ncolcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release\n

    Please refer to the following documentation links for instructions on how to create and customize each of your vehicle's packages:

    • Creating vehicle and sensor models
      • Creating sensor model
      • Creating individual params
      • Creating vehicle model
    • creating-vehicle-interface-package
    • customizing-for-differential-drive-model

    Please remember to add all your custom packages, such as interfaces and descriptions, to your autoware.repos to ensure that your packages are properly included and managed within the Autoware repository.
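
    For instance, a custom vehicle interface repository could be added to autoware.repos with an entry along these lines (the workspace path, repository name and URL are hypothetical):

    vehicle/tutorial_vehicle_interface:\n  type: git\n  url: https://github.com/YOUR_NAME/tutorial_vehicle_interface.git\n  version: main\n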

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/","title":"Launch Autoware","text":""},{"location":"how-to-guides/integrating-autoware/launch-autoware/#launch-autoware","title":"Launch Autoware","text":"

    This section explains how to run your vehicle with Autoware. We will explain how to run and launch Autoware with the following modules:

    • Vehicle
    • System
    • Map
    • Sensing
    • Localization
    • Perception
    • Planning
    • Control
    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/#pre-requirements-of-launching-autoware-with-real-vehicle","title":"Pre-requirements of launching Autoware with real vehicle","text":"

    Please complete these steps to integrate Autoware with your vehicle:

    • Create your Autoware meta-repository.
    • Create your vehicle and sensor model.
    • Calibrate your sensors.
    • Create your Autoware compatible vehicle interface.
    • Create your environment map.

    After the completion of these steps according to your individual vehicle, you are ready to use Autoware.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/#autoware_launch-package","title":"autoware_launch package","text":"

    The autoware_launch package is the entry point for the Autoware software stack launch files. The autoware.launch.xml launch file controls the invocation of each module's launch via the following launch arguments:

      <arg name=\"launch_vehicle\" default=\"true\" description=\"launch vehicle\"/>\n<arg name=\"launch_system\" default=\"true\" description=\"launch system\"/>\n<arg name=\"launch_map\" default=\"true\" description=\"launch map\"/>\n<arg name=\"launch_sensing\" default=\"true\" description=\"launch sensing\"/>\n<arg name=\"launch_sensing_driver\" default=\"true\" description=\"launch sensing driver\"/>\n<arg name=\"launch_localization\" default=\"true\" description=\"launch localization\"/>\n<arg name=\"launch_perception\" default=\"true\" description=\"launch perception\"/>\n<arg name=\"launch_planning\" default=\"true\" description=\"launch planning\"/>\n<arg name=\"launch_control\" default=\"true\" description=\"launch control\"/>\n

    For example, if you don't need to launch perception, planning, and control for localization debug, you can disable these modules like the following:

    -  <arg name=\"launch_perception\" default=\"true\" description=\"launch perception\"/>\n+  <arg name=\"launch_perception\" default=\"false\" description=\"launch perception\"/>\n-  <arg name=\"launch_planning\" default=\"true\" description=\"launch planning\"/>\n+  <arg name=\"launch_planning\" default=\"false\" description=\"launch planning\"/>\n-  <arg name=\"launch_control\" default=\"true\" description=\"launch control\"/>\n+  <arg name=\"launch_control\" default=\"false\" description=\"launch control\"/>\n

    Also, it is possible to specify which components to launch using command-line arguments.

    ros2 launch autoware_launch autoware.launch.xml vehicle_model:=YOUR_VEHICLE sensor_kit:=YOUR_SENSOR_KIT map_path:=/PATH/TO/YOUR/MAP \\\nlaunch_perception:=false \\\nlaunch_planning:=false \\\nlaunch_control:=false\n

    Also, the autoware_launch package includes the Autoware module parameter files under its config directory.

    <YOUR-OWN-AUTOWARE-DIR>/\n  \u2514\u2500 src/\n       \u2514\u2500 launcher/\n            \u2514\u2500 autoware_launch/\n                 \u251c\u2500 config/\n                 \u251c\u2500     \u251c\u2500 control/\n                 \u251c\u2500     \u251c\u2500 localization/\n                 \u251c\u2500     \u251c\u2500 map/\n                 \u251c\u2500     \u251c\u2500 perception/\n                 \u251c\u2500     \u251c\u2500 planning/\n                 \u251c\u2500     \u251c\u2500 simulator/\n                 \u251c\u2500     \u2514\u2500 system/\n                 \u251c\u2500launch/\n                 \u2514\u2500 rviz/\n

    So, if we change any parameter in the config directory, it will override the original parameter value, since the autoware_launch parameter file paths are used for parameter loading.

    autoware_launch package launching and parameter migrating diagram"},{"location":"how-to-guides/integrating-autoware/launch-autoware/#configure-autowarelaunchxml","title":"Configure autoware.launch.xml","text":"

    As we mentioned above, we can enable or disable Autoware modules to launch by modifying autoware.launch.xml or using command-line arguments. Also, we have some arguments for specifying our Autoware configurations. Here are some basic configurations for the autoware.launch.xml launch file: (Additionally, you can use them as command-line arguments, as we mentioned before)

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/#vehicle","title":"Vehicle","text":"

    In the Vehicle section, you can choose whether the vehicle interface will be launched or not. For example, if you disable it, then vehicle_interface.launch.xml will not be called:

    - <arg name=\"launch_vehicle_interface\" default=\"true\" description=\"launch vehicle interface\"/>\n+ <arg name=\"launch_vehicle_interface\" default=\"false\" description=\"launch vehicle interface\"/>\n

    Please make sure your vehicle interface driver is included in vehicle_interface.launch.xml; for more information, refer to the creating vehicle interface page.
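
    As a rough illustration only (the package and file names below are hypothetical placeholders), a vehicle_interface.launch.xml that starts your own driver might look like this:

    <launch>\n  <arg name=\"vehicle_id\" default=\"default\"/>\n  <!-- hypothetical example: include your own vehicle interface driver launch file -->\n  <include file=\"$(find-pkg-share my_vehicle_interface)/launch/my_vehicle_interface.launch.xml\">\n    <arg name=\"vehicle_id\" value=\"$(var vehicle_id)\"/>\n  </include>\n</launch>\n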

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/#map","title":"Map","text":"

    If your point cloud and lanelet2 map names are different from pointcloud_map.pcd and lanelet2_map.osm, you will need to update these map file name arguments:

    - <arg name=\"lanelet2_map_file\" default=\"lanelet2_map.osm\" description=\"lanelet2 map file name\"/>\n+ <arg name=\"lanelet2_map_file\" default=\"<YOUR-LANELET2-MAP-FILE-NAME>\" description=\"lanelet2 map file name\"/>\n- <arg name=\"pointcloud_map_file\" default=\"pointcloud_map.pcd\" description=\"pointcloud map file name\"/>\n+ <arg name=\"pointcloud_map_file\" default=\"<YOUR-POINTCLOUD-MAP-FILE-NAME>\" description=\"pointcloud map file name\"/>\n
    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/#perception","title":"Perception","text":"

    You can define your autoware_data path here. Autoware loads the model files for yabloc_pose_initializer, image_projection_based_fusion, lidar_apollo_instance_segmentation, etc. from the autoware_data path. If you use Ansible for the Autoware installation, the necessary artifacts will be downloaded to the autoware_data folder in your $HOME directory. If you want to download the artifacts manually, please check the Ansible artifacts page for information.

    - <arg name=\"data_path\" default=\"$(env HOME)/autoware_data\" description=\"packages data and artifacts directory path\"/>\n+ <arg name=\"data_path\" default=\"<YOUR-AUTOWARE-DATA-PATH>\" description=\"packages data and artifacts directory path\"/>\n

    Also, you can change your perception method here. Autoware provides camera-lidar-radar fusion, camera-lidar fusion, lidar-radar fusion, lidar-only and radar-only perception modes. The default perception method is the lidar-only mode, but if you want to use camera-lidar fusion you need to change your perception mode:

    -  <arg name=\"perception_mode\" default=\"lidar\" description=\"select perception mode. camera_lidar_radar_fusion, camera_lidar_fusion, lidar_radar_fusion, lidar, radar\"/>\n+  <arg name=\"perception_mode\" default=\"camera_lidar_fusion\" description=\"select perception mode. camera_lidar_radar_fusion, camera_lidar_fusion, lidar_radar_fusion, lidar, radar\"/>\n

    If you want to use traffic light recognition and visualization, you can set traffic_light_recognition/enable_fine_detection to true (the default). Please check the traffic_light_fine_detector page for more information. If you don't want to use the traffic light classifier, you can disable it:

    - <arg name=\"traffic_light_recognition/enable_fine_detection\" default=\"true\" description=\"enable traffic light fine detection\"/>\n+ <arg name=\"traffic_light_recognition/enable_fine_detection\" default=\"false\" description=\"enable traffic light fine detection\"/>\n

    Please look at Launch perception page for detailed information.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/#launch-autoware_1","title":"Launch Autoware","text":"

    Launch Autoware with the following command:

    ros2 launch autoware_launch autoware_launch.launch.xml map_path:=<YOUR-MAP-PATH> vehicle_model:=<YOUR-VEHICLE-MODEL> sensor_model:=<YOUR-SENSOR-MODEL> vehicle_id:=<YOUR-VEHICLE-ID>\n

    It is possible to specify which components to launch using command-line arguments. For example, if you don't need to launch perception, planning, and control for localization debug, you can launch the following:

    ros2 launch autoware_launch autoware_launch.launch.xml map_path:=<YOUR-MAP-PATH> vehicle_model:=<YOUR-VEHICLE-MODEL> sensor_model:=<YOUR-SENSOR-MODEL> vehicle_id:=<YOUR-VEHICLE-ID> \\\nlaunch_perception:=false \\\nlaunch_planning:=false \\\nlaunch_control:=false\n

    After launching Autoware, we need to initialize the vehicle on the map. If you set gnss_poser for your GNSS/INS sensor in gnss.launch.xml, then gnss_poser will send a pose for initialization. If you don't have a GNSS sensor, then you need to set the initial pose manually.
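
    As an alternative to the RViz procedure described in the next section, the initial pose can also be published from the command line. The sketch below assumes the localization stack listens on the /initialpose topic (the same topic RViz's 2D Pose Estimate tool publishes to) and that the example coordinates are replaced with a pose in your map frame:

    ros2 topic pub --once /initialpose geometry_msgs/msg/PoseWithCovarianceStamped \"{header: {frame_id: map}, pose: {pose: {position: {x: 100.0, y: 200.0, z: 0.0}, orientation: {z: 0.0, w: 1.0}}}}\"\n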

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/#set-initial-pose","title":"Set initial pose","text":"

    If you do not have a GNSS sensor, or if the automatic initialization returns an incorrect position, you need to set the initial pose using the RViz GUI.

    • Click the 2D Pose estimate button in the toolbar, or hit the P key

    2D Pose estimate with RViz
    • In the 3D View panel, click and hold the left mouse button, and then drag to set the direction for the initial pose.

    Setting 2D Pose estimate with RViz
    • After that, the vehicle will be initialized. You will then observe both your vehicle and Autoware outputs.

    Initialization of the vehicle"},{"location":"how-to-guides/integrating-autoware/launch-autoware/#set-goal-pose","title":"Set goal pose","text":"

    Set a goal pose for the ego vehicle.

    • Click the 2D Nav Goal button in the toolbar, or hit the G key

    Initialization of the vehicle
    • In the 3D View pane, click and hold the left mouse button, and then drag to set the direction for the goal pose.

    Initialization of the vehicle
    • If successful, you will see the calculated planning path on RViz.

    Planned path on RViz"},{"location":"how-to-guides/integrating-autoware/launch-autoware/#engage","title":"Engage","text":"

    There are two options for engage:

    • Firstly, you can use the AutowareStatePanel. It is included in the Autoware RViz configuration file, and it can also be added via Panels > Add New Panel > tier4_state_rviz_plugin > AutowareStatePanel.

    Once the route is computed, the \"AUTO\" button becomes active. Pressing the AUTO button engages the autonomous driving mode.

    Autoware state panel (STOP operation mode)
    • Secondly, you can engage with ROS 2 topic. In your terminal, execute the following command.
    source ~/<YOUR-AUTOWARE-DIR>/install/setup.bash\nros2 topic pub /autoware/engage autoware_auto_vehicle_msgs/msg/Engage \"engage: true\" -1\n

    Now the vehicle should drive along the calculated path!

    During the autonomous driving, the StatePanel appears as shown in the image below. Pressing the \"STOP\" button allows you to stop the vehicle.

    Autoware state panel (AUTO operation mode)"},{"location":"how-to-guides/integrating-autoware/launch-autoware/control/","title":"Control Launch Files","text":""},{"location":"how-to-guides/integrating-autoware/launch-autoware/control/#control-launch-files","title":"Control Launch Files","text":""},{"location":"how-to-guides/integrating-autoware/launch-autoware/control/#overview","title":"Overview","text":"

    The Autoware control stack starts launching from autoware_launch.xml, as mentioned on the Launch Autoware page. The autoware_launch package includes tier4_control_component.launch.xml to invoke the control launch files from autoware_launch.xml. The diagram below illustrates the flow of Autoware control launch files within the autoware_launch and autoware.universe packages.

    Autoware control launch flow diagram

    Note

    The Autoware project is a large project. Therefore, as we manage the Autoware project, we utilize specific arguments in the launch files. ROS 2 offers an argument-overriding feature for these launch files. Please refer to the official ROS 2 launch documentation for further information. For instance, if we define an argument at the top-level launch, it will override the value on lower-level launches.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/control/#tier4_control_componentlaunchxml","title":"tier4_control_component.launch.xml","text":"

    The tier4_control_component.launch.xml launch file is the main control component launch in the autoware_launch package. This launch file calls control.launch.xml from the tier4_control_launch package within the autoware.universe repository. We can modify control launch arguments in tier4_control_component.launch.xml. Additionally, we can add any other necessary arguments that need adjustment since tier4_control_component.launch.xml serves as the top-level launch file for other control launch files. Here are some predefined control launch arguments:

    • lateral_controller_mode: This argument determines the lateral controller algorithm. The default value is mpc. To change it to pure pursuit, make the following update in your tier4_control_component.launch.xml file:

      - <arg name=\"lateral_controller_mode\" default=\"mpc\"/>\n+ <arg name=\"lateral_controller_mode\" default=\"pure_pursuit\"/>\n
    • enable_autonomous_emergency_braking: This argument enables autonomous emergency braking under specific conditions. Please refer to the Autonomous emergency braking (AEB) page for more information. To enable it, update the value in the tier4_control_component.launch.xml file:

      - <arg name=\"enable_autonomous_emergency_braking\" default=\"false\"/>\n+ <arg name=\"enable_autonomous_emergency_braking\" default=\"true\"/>\n
    • enable_predicted_path_checker: This argument enables the predicted path checker module. Please refer to the Predicted Path Checker page for more information. To enable it, update the value in the tier4_control_component.launch.xml file:

      - <arg name=\"enable_predicted_path_checker\" default=\"false\"/>\n+ <arg name=\"enable_predicted_path_checker\" default=\"true\"/>\n

    Note

    You can also use these arguments as command-line arguments:

    ros2 launch autoware_launch autoware.launch.xml ... enable_predicted_path_checker:=true lateral_controller_mode:=pure_pursuit ...\n

    The predefined arguments in tier4_control_component.launch.xml have been explained above. However, numerous control arguments are included in the autoware_launch control config parameters.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/","title":"Localization Launch Files","text":""},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/#localization-launch-files","title":"Localization Launch Files","text":""},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/#overview","title":"Overview","text":"

    The Autoware localization stack starts launching from autoware_launch.xml, as mentioned on the Launch Autoware page. The autoware_launch package includes tier4_localization_component.launch.xml to invoke the localization launch files from autoware_launch.xml. The diagram below describes the flow of the Autoware localization launch files within the autoware_launch and autoware.universe packages.

    Autoware localization launch flow diagram

    Note

    The Autoware project is a large project. Therefore, as we manage the Autoware project, we utilize specific arguments in the launch files. ROS 2 offers an argument-overriding feature for these launch files. Please refer to the official ROS 2 launch documentation for further information. For instance, if we define an argument at the top-level launch, it will override the value on lower-level launches.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/#tier4_localization_componentlaunchxml","title":"tier4_localization_component.launch.xml","text":"

    The tier4_localization_component.launch.xml launch file is the main localization component launch in the autoware_launch package. This launch file calls localization.launch.xml from the tier4_localization_launch package in the autoware.universe repository. We can modify localization launch arguments in tier4_localization_component.launch.xml.

    The current localization launcher implemented by TIER IV supports multiple localization methods, both pose estimators and twist estimators. tier4_localization_component.launch.xml has two arguments to select which estimators to launch:

    • pose_source: This argument specifies the pose_estimator, currently supporting ndt (default), yabloc, artag and eagleye for localization. By default, Autoware launches ndt_scan_matcher as the pose estimator. You can use YabLoc as a camera-based localization method; for more details on YabLoc, please refer to the README of YabLoc in autoware.universe. You can also use Eagleye as a GNSS & IMU & wheel odometry-based localization method; for more details, please refer to the Eagleye documentation.

    You can set the pose_source argument in tier4_localization_component.launch.xml. For example, if you want to use eagleye as the pose_source, you need to update tier4_localization_component.launch.xml like this:

      - <arg name=\"pose_source\" default=\"ndt\" description=\"select pose_estimator: ndt, yabloc, eagleye\"/>\n+ <arg name=\"pose_source\" default=\"eagleye\" description=\"select pose_estimator: ndt, yabloc, eagleye\"/>\n

      Also, you can use command-line for overriding launch arguments:

      ros2 launch autoware_launch autoware.launch.xml ... pose_source:=eagleye\n
    • twist_source: This argument specifies the twist_estimator, currently supporting gyro_odom (default) and eagleye. By default, Autoware launches gyro_odometer as the twist estimator. You can also use eagleye as the twist source; please refer to the Eagleye documentation. If you want to change your twist source to eagleye, you can update tier4_localization_component.launch.xml like this:

      - <arg name=\"twist_source\" default=\"gyro_odom\" description=\"select twist_estimator. gyro_odom, eagleye\"/>\n+ <arg name=\"twist_source\" default=\"eagleye\" description=\"select twist_estimator. gyro_odom, eagleye\"/>\n

      Or you can use command-line for overriding launch arguments:

      ros2 launch autoware_launch autoware.launch.xml ... twist_source:=eagleye\n
    • input_pointcloud: This argument specifies the input pointcloud of the localization pointcloud pipeline. The default value is /sensing/lidar/top/outlier_filtered/pointcloud, which is the output of the pointcloud pre-processing pipeline from sensing. You can change this value according to your LiDAR topic name, or you can choose to use the concatenated point cloud:

      - <arg name=\"input_pointcloud\" default=\"/sensing/lidar/top/outlier_filtered/pointcloud\" description=\"The topic will be used in the localization util module\"/>\n+ <arg name=\"input_pointcloud\" default=\"/sensing/lidar/concatenated/pointcloud\"/>\n

    You can add every necessary argument to the tier4_localization_component.launch.xml launch file, as in these examples. For instance, if you want to change your gyro odometer twist input topic, you can add this argument to the tier4_localization_component.launch.xml launch file:

    + <arg name=\"input_vehicle_twist_with_covariance_topic\" value=\"<YOUR-VEHICLE-TWIST-TOPIC-NAME>\"/>\n

    Note: The gyro odometer input topic is provided by the velocity converter package, which is launched within the sensor_kit. For more information, please check the velocity converter package.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/#note-when-using-non-ndt-pose-estimator","title":"Note when using non NDT pose estimator","text":"

    Note

    Since NDT is currently the most commonly used pose_estimator, the NDT diagnostics are registered for monitoring by default. When using a pose_estimator other than NDT, the NDT diagnostics will always be marked as stale, causing the system to enter a safe_fault state. Depending on the emergency parameters, this could escalate to a full emergency, preventing autonomous driving.

    To work around this, please modify the configuration file of the system_error_monitor. In the system_error_monitor.param.yaml file, /autoware/localization/performance_monitoring/matching_score represents the aggregated diagnostics for NDT. To prevent emergencies even when NDT is not launched, remove this entry from the configuration. Note that the module name /performance_monitoring/matching_score is specified in diagnostics_aggregator/localization.param.yaml.
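
    As a hedged sketch (the surrounding keys and the attribute values of this entry depend on your configuration and Autoware version), the workaround amounts to deleting a single line from system_error_monitor.param.yaml:

    -   /autoware/localization/performance_monitoring/matching_score: { sf_at: \"warn\", lf_at: \"none\", spf_at: \"none\" }\n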

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/eagleye/","title":"Using Eagleye with Autoware","text":""},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/eagleye/#using-eagleye-with-autoware","title":"Using Eagleye with Autoware","text":"

    This page will show you how to set up Eagleye in order to use it with Autoware. For the details of the integration proposal, please refer to this discussion.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/eagleye/#what-is-eagleye","title":"What is Eagleye?","text":"

    Eagleye is an open-source GNSS/IMU-based localizer initially developed by MAP IV, Inc. It provides a cost-effective alternative to LiDAR and point cloud-based localization by using low-cost GNSS and IMU sensors to provide vehicle position, orientation, and altitude information.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/eagleye/#dependencies","title":"Dependencies","text":"

    The below packages are automatically installed during the setup of Autoware as they are listed in autoware.repos.

    1. Eagleye (autoware-main branch)
    2. RTKLIB ROS Bridge (ros2-v0.1.0 branch)
    3. LLH Converter (ros2 branch)
    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/eagleye/#architecture","title":"Architecture","text":"

    Eagleye can be utilized in the Autoware localization stack in two ways:

    1. Feed only twist into the EKF localizer.

    2. Feed both twist and pose from Eagleye into the EKF localizer (twist can also be used with regular gyro_odometry).

    Note: RTK positioning is required when using Eagleye as the pose estimator. On the other hand, it is not mandatory when using it as the twist estimator.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/eagleye/#requirements","title":"Requirements","text":"

    Eagleye requires GNSS, IMU and vehicle speed as inputs.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/eagleye/#imu-topic","title":"IMU topic","text":"

    sensor_msgs/msg/Imu is supported for Eagleye IMU input.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/eagleye/#vehicle-speed-topic","title":"Vehicle speed topic","text":"

    geometry_msgs/msg/TwistStamped and geometry_msgs/msg/TwistWithCovarianceStamped are supported for the input vehicle speed.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/eagleye/#gnss-topic","title":"GNSS topic","text":"

    Eagleye requires the latitude/longitude/height and Doppler velocity generated by the GNSS receiver. Your GNSS ROS driver must publish the following messages:

    • sensor_msgs/msg/NavSatFix: This message contains latitude, longitude, and height information.
    • geometry_msgs/msg/TwistWithCovarianceStamped: This message contains gnss doppler velocity information.

      Eagleye has been tested with the following example GNSS ROS drivers: ublox_gps and septentrio_gnss_driver. The settings needed for each of these drivers are as follows:

    • ublox_gps: No additional settings are required. It publishes sensor_msgs/msg/NavSatFix and geometry_msgs/msg/TwistWithCovarianceStamped required by Eagleye with default settings.
    • septentrio_gnss_driver: Set publish.navsatfix and publish.twist in the config file gnss.yaml to true.
    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/eagleye/#parameter-modifications-for-integration-into-your-vehicle","title":"Parameter Modifications for Integration into Your Vehicle","text":""},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/eagleye/#topic-name-topic-type","title":"topic name & topic type","text":"

    Users must correctly specify the input topics for GNSS latitude, longitude, and height, GNSS Doppler speed, IMU, and vehicle speed in eagleye_config.yaml.

    # Topic\ntwist:\ntwist_type: 1 # TwistStamped : 0, TwistWithCovarianceStamped: 1\ntwist_topic: /sensing/vehicle_velocity_converter/twist_with_covariance\nimu_topic: /sensing/imu/tamagawa/imu_raw\ngnss:\nvelocity_source_type: 2 # rtklib_msgs/RtklibNav: 0, nmea_msgs/Sentence: 1, ublox_msgs/NavPVT: 2, geometry_msgs/TwistWithCovarianceStamped: 3\nvelocity_source_topic: /sensing/gnss/ublox/navpvt\nllh_source_type: 2 # rtklib_msgs/RtklibNav: 0, nmea_msgs/Sentence: 1, sensor_msgs/NavSatFix: 2\nllh_source_topic: /sensing/gnss/ublox/nav_sat_fix\n
    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/eagleye/#sensor-frequency","title":"sensor frequency","text":"

    Also, the frequency of the GNSS and IMU must be set in eagleye_config.yaml:

    common:\nimu_rate: 50\ngnss_rate: 5\n
    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/eagleye/#conversion-from-fix-to-pose","title":"Conversion from fix to pose","text":"

    The parameters for converting sensor_msgs/msg/NavSatFix to geometry_msgs/msg/PoseWithCovarianceStamped are listed in geo_pose_converter.launch.xml. If you use a different geoid or projection type, change these parameters.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/eagleye/#other-parameters","title":"Other parameters","text":"

    The other parameters are described here. Basically, these do not need to be changed.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/eagleye/#notes-on-initialization","title":"Notes on initialization","text":"

    Eagleye requires an initialization process for proper operation. Without initialization, the output for twist will be in the raw value, and the pose data will not be available.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/eagleye/#1-static-initialization","title":"1. Static Initialization","text":"

    The first step is static initialization, which involves keeping the vehicle stationary for approximately 5 seconds after startup so that Eagleye can estimate the yaw-rate offset.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/eagleye/#2-dynamic-initialization","title":"2. Dynamic initialization","text":"

    The next step is dynamic initialization, which involves driving the vehicle in a straight line for approximately 30 seconds. This process estimates the wheel speed scale factor and the azimuth angle.

    Once dynamic initialization is complete, the Eagleye will be able to provide corrected twist and pose data.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/eagleye/#how-to-check-the-progress-of-initialization","title":"How to check the progress of initialization","text":"
    • TODO
    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/localization/eagleye/#note-on-georeferenced-maps","title":"Note on georeferenced maps","text":"

    Note that the output position might not appear to be in the point cloud maps if you are using maps that are not properly georeferenced. In the case of a single GNSS antenna, initial position estimation (dynamic initialization) can take several seconds to complete after starting to run in an environment where GNSS positioning is available.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/map/","title":"Map Launch Files","text":""},{"location":"how-to-guides/integrating-autoware/launch-autoware/map/#map-launch-files","title":"Map Launch Files","text":""},{"location":"how-to-guides/integrating-autoware/launch-autoware/map/#overview","title":"Overview","text":"

    The Autoware map stack starts launching from autoware_launch.xml, as mentioned on the Launch Autoware page. The autoware_launch package includes tier4_map_component.launch.xml to invoke the map launch files from autoware_launch.xml. The diagram below describes the flow of the Autoware map launch files within the autoware_launch and autoware.universe packages.

    Autoware map launch flow diagram

    Note

    The Autoware project is a large project. Therefore, as we manage the Autoware project, we utilize specific arguments in the launch files. ROS 2 offers an argument-overriding feature for these launch files. Please refer to the official ROS 2 launch documentation for further information. For instance, if we define an argument at the top-level launch, it will override the value on lower-level launches.

    The map.launch.py launch file from the tier4_map_launch package directly includes the necessary node definitions for mapping. In the current design of Autoware, the lanelet2_map_loader, lanelet2_map_visualization, pointcloud_map_loader, and vector_map_tf_generator composable nodes are included in the map_container.

    We don't have many modification options in the map launching files (as the parameters are included in the config files). However, you can specify the names for your pointcloud and lanelet2 map during the launch (the default values are pointcloud_map.pcd and lanelet2_map.osm). For instance, if you wish to change your map file names, you can run Autoware using the following command line arguments:

    ros2 launch autoware_launch autoware.launch.xml ... pointcloud_map_file:=<YOUR-PCD-FILE-NAME> lanelet2_map_file:=<YOUR-LANELET2-MAP-NAME> ...\n

    Or you can change it on your autoware.launch.xml launch file:

    - <arg name=\"lanelet2_map_file\" default=\"lanelet2_map.osm\" description=\"lanelet2 map file name\"/>\n+ <arg name=\"lanelet2_map_file\" default=\"<YOUR-LANELET2-MAP-NAME>\" description=\"lanelet2 map file name\"/>\n- <arg name=\"pointcloud_map_file\" default=\"pointcloud_map.pcd\" description=\"pointcloud map file name\"/>\n+ <arg name=\"pointcloud_map_file\" default=\"<YOUR-PCD-FILE-NAME>\" description=\"pointcloud map file name\"/>\n
    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/perception/","title":"Perception Launch Files","text":""},{"location":"how-to-guides/integrating-autoware/launch-autoware/perception/#perception-launch-files","title":"Perception Launch Files","text":""},{"location":"how-to-guides/integrating-autoware/launch-autoware/perception/#overview","title":"Overview","text":"

    The Autoware perception stack starts launching from autoware_launch.xml, as mentioned on the Launch Autoware page. The autoware_launch package includes tier4_perception_component.launch.xml to invoke the perception launch files from autoware_launch.xml. The diagram below describes the flow of the Autoware perception launch files within the autoware_launch and autoware.universe packages.

    Autoware perception launch flow diagram

    Note

    The Autoware project is a large project. Therefore, as we manage the Autoware project, we utilize specific arguments in the launch files. ROS 2 offers an argument-overriding feature for these launch files. Please refer to the official ROS 2 launch documentation for further information. For instance, if we define an argument at the top-level launch, it will override the value on lower-level launches.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/perception/#tier4_perception_componentlaunchxml","title":"tier4_perception_component.launch.xml","text":"

    The tier4_perception_component.launch.xml launch file is the main perception component launch in the autoware_launch package. This launch file calls perception.launch.xml from the tier4_perception_launch package in the autoware.universe repository. We can modify perception launch arguments in tier4_perception_component.launch.xml. Also, we can add any other necessary arguments that we want to change, since tier4_perception_component.launch.xml is the top-level launch file of the other perception launch files. Here are some predefined perception launch arguments:

    • occupancy_grid_map_method: This argument determines the occupancy grid map method for perception stack. Please check probabilistic_occupancy_grid_map package for detailed information. The default probabilistic occupancy grid map method is pointcloud_based_occupancy_grid_map. If you want to change it to the laserscan_based_occupancy_grid_map, you can change it here:

      - <arg name=\"occupancy_grid_map_method\" default=\"pointcloud_based_occupancy_grid_map\" description=\"options: pointcloud_based_occupancy_grid_map, laserscan_based_occupancy_grid_map\"/>\n+ <arg name=\"occupancy_grid_map_method\" default=\"laserscan_based_occupancy_grid_map\" description=\"options: pointcloud_based_occupancy_grid_map, laserscan_based_occupancy_grid_map\"/>\n
    • detected_objects_filter_method: This argument determines the filter method for detected objects. Please check detected_object_validation package for detailed information about lanelet and position filter. The default detected object filter method is lanelet_filter. If you want to change it to the position_filter, you can change it here:

      - <arg name=\"detected_objects_filter_method\" default=\"lanelet_filter\" description=\"options: lanelet_filter, position_filter\"/>\n+ <arg name=\"detected_objects_filter_method\" default=\"position_filter\" description=\"options: lanelet_filter, position_filter\"/>\n
    • detected_objects_validation_method: This argument determines the validation method for detected objects. Please check detected_object_validation package for detailed information about validation methods. The default detected object filter method is obstacle_pointcloud. If you want to change it to the occupancy_grid, you can change it here, but remember it requires laserscan_based_occupancy_grid_map method as occupancy_grid_map_method:

      - <arg name=\"occupancy_grid_map_method\" default=\"pointcloud_based_occupancy_grid_map\" description=\"options: pointcloud_based_occupancy_grid_map, laserscan_based_occupancy_grid_map\"/>\n+ <arg name=\"occupancy_grid_map_method\" default=\"laserscan_based_occupancy_grid_map\" description=\"options: pointcloud_based_occupancy_grid_map, laserscan_based_occupancy_grid_map\"/>\n <arg\n  name=\"detected_objects_validation_method\"\n- default=\"obstacle_pointcloud\"\n+ default=\"occupancy_grid\"\n description=\"options: obstacle_pointcloud, occupancy_grid (occupancy_grid_map_method must be laserscan_based_occupancy_grid_map)\"\n  />\n

    Note

    You can also use these arguments as command-line arguments:

    ros2 launch autoware_launch autoware.launch.xml ... detected_objects_filter_method:=lanelet_filter occupancy_grid_map_method:=laserscan_based_occupancy_grid_map ...\n

    The predefined tier4_perception_component.launch.xml arguments are explained above, but many more perception arguments are included in the perception.launch.xml launch file in tier4_perception_launch. Since we do not fork the autoware.universe repository, we can add any necessary launch arguments to the tier4_perception_component.launch.xml file. Please follow the guidelines below for some examples.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/perception/#perceptionlaunchxml","title":"perception.launch.xml","text":"

    The perception.launch.xml launch file is the main perception launch file in autoware.universe. This launch file calls the necessary perception launch files, as shown in the Autoware perception launch flow diagram above. The top-level launch file of perception.launch.xml is tier4_perception_component.launch.xml, so if we want to change anything in perception.launch.xml, we apply these changes to tier4_perception_component.launch.xml instead of perception.launch.xml.

    Here are some example changes for the perception pipeline:

    • remove_unknown: This parameter determines whether unknown objects are removed during camera-lidar fusion. Please check the roi_cluster_fusion node for detailed information. The default value is true. If you want to change it to false, you can add this argument to tier4_perception_component.launch.xml, so it will override the perception.launch.xml argument:

      + <arg name=\"remove_unknown\" default=\"false\"/>\n
    • camera topics: If you are using camera-lidar fusion or camera-lidar-radar fusion as the perception_mode, you can add your camera image and camera info topics in tier4_perception_component.launch.xml as well; they will override the perception.launch.xml launch file arguments:

      + <arg name=\"image_raw0\" default=\"/sensing/camera/camera0/image_rect_color\" description=\"image raw topic name\"/>\n+ <arg name=\"camera_info0\" default=\"/sensing/camera/camera0/camera_info\" description=\"camera info topic name\"/>\n+ <arg name=\"detection_rois0\" default=\"/perception/object_recognition/detection/rois0\" description=\"detection rois output topic name\"/>\n...\n

    You can add every necessary argument to tier4_perception_component.launch.xml launch file like these examples.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/planning/","title":"Planning Launch Files","text":""},{"location":"how-to-guides/integrating-autoware/launch-autoware/planning/#planning-launch-files","title":"Planning Launch Files","text":""},{"location":"how-to-guides/integrating-autoware/launch-autoware/planning/#overview","title":"Overview","text":"

    The Autoware planning stack starts launching from autoware_launch.xml, as mentioned on the Launch Autoware page. The autoware_launch package includes tier4_planning_component.launch.xml to invoke the planning launch files from autoware_launch.xml. The diagram below illustrates the flow of Autoware planning launch files within the autoware_launch and autoware.universe packages.

    Autoware planning launch flow diagram

    Note

    The Autoware project is a large project. Therefore, as we manage the Autoware project, we utilize specific arguments in the launch files. ROS 2 offers an argument-overriding feature for these launch files. Please refer to the official ROS 2 launch documentation for further information. For instance, if we define an argument at the top-level launch, it will override the value on lower-level launches.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/planning/#tier4_planning_componentlaunchxml","title":"tier4_planning_component.launch.xml","text":"

    The tier4_planning_component.launch.xml launch file is the main planning component launch in the autoware_launch package. This launch file calls planning.launch.xml from the tier4_planning_launch package in the autoware.universe repository. We can modify planning launch arguments in tier4_planning_component.launch.xml. Also, we can add any other necessary arguments that we want to change, since tier4_planning_component.launch.xml is the top-level launch file of the other planning launch files. Here are some predefined planning launch arguments:

    • use_experimental_lane_change_function: This argument enables enable_collision_check_at_prepare_phase, use_predicted_path_outside_lanelet, and use_all_predicted_path options for Autoware for experimental lane changing (for more information, please refer to lane_change documentation). The default value is True. To set it to False, make the following change in the tier4_planning_component.launch.xml file:

      - <arg name=\"use_experimental_lane_change_function\" default=\"true\"/>\n+ <arg name=\"use_experimental_lane_change_function\" default=\"false\"/>\n
    • cruise_planner_type: There are two types of cruise planners in Autoware: obstacle_stop_planner and obstacle_cruise_planner. For specifications on these cruise planner types, please refer to the package documentation. The default cruise planner is obstacle_stop_planner. To change it to obstacle_cruise_planner, update the argument value in the tier4_planning_component.launch.xml file:

      - <arg name=\"cruise_planner_type\" default=\"obstacle_stop_planner\" description=\"options: obstacle_stop_planner, obstacle_cruise_planner, none\"/>\n+ <arg name=\"cruise_planner_type\" default=\"obstacle_cruise_planner\" description=\"options: obstacle_stop_planner, obstacle_cruise_planner, none\"/>\n
    • use_surround_obstacle_check: This argument enables the surround_obstacle_checker for Autoware. If you want to disable it, you can do so in the tier4_planning_component.launch.xml file:

      - <arg name=\"use_surround_obstacle_check\" default=\"true\"/>\n+ <arg name=\"use_surround_obstacle_check\" default=\"false\"/>\n
    • velocity_smoother_type: This argument specifies the type of smoother for the motion_velocity_smoother package. Please consult the documentation for detailed information about available smoother types. For instance, if you wish to change your smoother type from JerkFiltered to L2, you can do so in the tier4_planning_component.launch.xml file:

      - <arg name=\"velocity_smoother_type\" default=\"JerkFiltered\" description=\"options: JerkFiltered, L2, Analytical, Linf(Unstable)\"/>\n+ <arg name=\"velocity_smoother_type\" default=\"L2\" description=\"options: JerkFiltered, L2, Analytical, Linf(Unstable)\"/>\n

    Note

    You can also use these arguments as command-line arguments:

    ros2 launch autoware_launch autoware.launch.xml ... use_surround_obstacle_check:=false velocity_smoother_type:=L2 ...\n

    The predefined arguments in tier4_planning_component.launch.xml have been explained above. However, numerous planning arguments are included in the autoware_launch planning config parameters.

    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/sensing/","title":"Sensing Launch Files","text":""},{"location":"how-to-guides/integrating-autoware/launch-autoware/sensing/#sensing-launch-files","title":"Sensing Launch Files","text":""},{"location":"how-to-guides/integrating-autoware/launch-autoware/sensing/#overview","title":"Overview","text":"

    The Autoware sensing stack starts launching from autoware_launch.xml, as mentioned on the Launch Autoware page. The autoware_launch package includes tier4_sensing_component.launch.xml to invoke the sensing launch files from autoware_launch.xml. The diagram below describes the flow of the Autoware sensing launch files within the autoware_launch and autoware.universe packages.

    Autoware sensing launch flow diagram

    Note

    The Autoware project is a large project. Therefore, as we manage the Autoware project, we utilize specific arguments in the launch files. ROS 2 offers an argument-overriding feature for these launch files. Please refer to the official ROS 2 launch documentation for further information. For instance, if we define an argument at the top-level launch, it will override the value on lower-level launches.

    The sensing launch is closely related to your sensor kit, so if you want to modify the sensing launch, we recommend applying those modifications to the sensor kit packages. Please look at the creating sensor and vehicle model pages for more information, but there are some modifications that can be made in the top-level launch files.

    For example, if you do not want to launch the sensor driver with Autoware, you can disable it with a command-line argument:

    ros2 launch autoware_launch autoware.launch.xml ... launch_sensing_driver:=false ...\n

    Or you can change it on your autoware.launch.xml launch file:

    - <arg name=\"launch_sensing_driver\" default=\"true\" description=\"launch sensing driver\"/>\n+ <arg name=\"launch_sensing_driver\" default=\"false\" description=\"launch sensing driver\"/>\n
    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/system/","title":"System Launch Files","text":""},{"location":"how-to-guides/integrating-autoware/launch-autoware/system/#system-launch-files","title":"System Launch Files","text":""},{"location":"how-to-guides/integrating-autoware/launch-autoware/system/#overview","title":"Overview","text":"

    The Autoware system stacks start launching at autoware_launch.xml as we mentioned on the Launch Autoware page. The autoware_launch package includes tier4_system_component.launch.xml, which invokes the system launch files from autoware_launch.xml. This diagram describes part of the Autoware system launch file flow within the autoware_launch and autoware.universe packages.

    Autoware system launch flow diagram

    Note

    The Autoware project is a large project. Therefore, as we manage the Autoware project, we utilize specific arguments in the launch files. ROS 2 offers an argument-overriding feature for these launch files. Please refer to the official ROS 2 launch documentation for further information. For instance, if we define an argument at the top-level launch, it will override the value on lower-level launches.

    As described in the flow diagram of the system launch pipeline, the system.launch.xml from the tier4_system_launch package directly launches the following packages:

    • system_monitor
    • component_interface_tools
    • component_state_monitor
    • system_error_monitor
    • emergency_handler
    • duplicated_node_checker
    • mrm_comfortable_stop_operator
    • mrm_emergency_stop_operator
    • dummy_diag_publisher

    We don't have many modification options in the system launch files (as the parameters are included in config files), but you can modify the file path for the system_error_monitor parameter file. For instance, if you want to change the path for your system_error_monitor.param.yaml file, you can run Autoware with the following command line argument:

    ros2 launch autoware_launch autoware.launch.xml ... system_error_monitor_param_path:=<YOUR-SYSTEM-ERROR-PARAM-PATH> ...\n

    Or you can change it on your tier4_system_component.launch.xml launch file:

    - <arg name=\"system_error_monitor_param_path\" default=\"$(find-pkg-share autoware_launch)/config/...\"/>\n+ <arg name=\"system_error_monitor_param_path\" default=\"<YOUR-SYSTEM-ERROR-PARAM-PATH>\"/>\n
    "},{"location":"how-to-guides/integrating-autoware/launch-autoware/vehicle/","title":"Vehicle Launch Files","text":""},{"location":"how-to-guides/integrating-autoware/launch-autoware/vehicle/#vehicle-launch-files","title":"Vehicle Launch Files","text":""},{"location":"how-to-guides/integrating-autoware/launch-autoware/vehicle/#overview","title":"Overview","text":"

    The Autoware vehicle stacks begin launching with autoware_launch.xml as mentioned on the Launch Autoware page, and the tier4_vehicle.launch.xml is called in this context. The following diagram describes some of the Autoware vehicle launch file flow within the autoware_launch and autoware.universe packages.

    Autoware vehicle launch flow diagram

    Note

    The Autoware project is a large project. Therefore, as we manage the Autoware project, we utilize specific arguments in the launch files. ROS 2 offers an argument-overriding feature for these launch files. Please refer to the official ROS 2 launch documentation for further information. For instance, if we define an argument at the top-level launch, it will override the value on lower-level launches.

    We don't have many modification options in the vehicle launch files (as the parameters are included in the vehicle_launch repository), but you can choose to disable the vehicle_interface launch. For instance, if you want to run robot_state_publisher but not vehicle_interface, you can launch Autoware with the following command line arguments:

    ros2 launch autoware_launch autoware.launch.xml ... launch_vehicle_interface:=false ...\n

    Or you can change it on your autoware.launch.xml launch file:

    - <arg name=\"launch_vehicle_interface\" default=\"true\" description=\"launch vehicle interface\"/>\n+ <arg name=\"launch_vehicle_interface\" default=\"false\" description=\"launch vehicle interface\"/>\n
    "},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-controller-performance/","title":"Evaluating the controller performance","text":""},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-controller-performance/#evaluating-the-controller-performance","title":"Evaluating the controller performance","text":"

    This page shows how to use the control_performance_analysis package to evaluate controllers.

    control_performance_analysis is a package that analyzes the tracking performance of a control module and monitors the driving status of the vehicle.

    If you need more detailed information about the package, refer to the control_performance_analysis documentation.

    "},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-controller-performance/#how-to-use","title":"How to use","text":""},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-controller-performance/#before-driving","title":"Before Driving","text":""},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-controller-performance/#1-firstly-you-need-to-launch-autoware-you-can-also-use-this-tool-with-real-vehicle-driving","title":"1. Firstly you need to launch Autoware. You can also use this tool with real vehicle driving","text":""},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-controller-performance/#2-initialize-the-vehicle-and-send-goal-position-to-create-route","title":"2. Initialize the vehicle and send goal position to create route","text":"
    • If you have any problem with launching Autoware, please see the tutorials page.
    "},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-controller-performance/#3-launch-the-control_performance_analysis-package","title":"3. Launch the control_performance_analysis package","text":"
    ros2 launch control_performance_analysis control_performance_analysis.launch.xml\n
    • After this command, you should be able to see the driving monitor and error variables as topics.
    "},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-controller-performance/#4-run-the-plotjuggler-in-sourced-terminal","title":"4. Run the PlotJuggler in sourced terminal","text":"
    source ~/autoware/install/setup.bash\n
    ros2 run plotjuggler plotjuggler\n
    • If you do not have PlotJuggler installed on your computer, please refer here for installation guidelines.
    "},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-controller-performance/#5-increase-the-buffer-size-maximum-is-100-and-import-the-layout-from-autowareuniversecontrolcontrol_performance_analysisconfigcontroller_monitorxml","title":"5. Increase the buffer size (maximum is 100), and import the layout from /autoware.universe/control/control_performance_analysis/config/controller_monitor.xml","text":"
    • After importing the layout, please specify the topics listed below.
    • /localization/kinematic_state
    • /vehicle/status/steering_status
    • /control_performance/driving_status
    • /control_performance/performance_vars
    • Please check the \"If present, use the timestamp in the field [header.stamp]\" box, then select OK.
    "},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-controller-performance/#6-now-you-can-start-to-driving-you-should-see-all-the-performance-and-driving-variables-in-plotjuggler","title":"6. Now, you can start to driving. You should see all the performance and driving variables in PlotJuggler","text":""},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-controller-performance/#after-driving","title":"After Driving","text":""},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-controller-performance/#1-you-can-export-the-statistical-output-and-all-data-to-compare-and-later-usage","title":"1. You can export the statistical output and all data to compare and later usage","text":"
    • With the statistical data, you can export all statistical values (min, max, average) to compare the controllers.
    • You can also export all data for later use. To investigate it again, launch PlotJuggler and import the .csv file from the data section.
    "},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-controller-performance/#tips","title":"Tips","text":"
    • You can plot the vehicle position. Select the two curves (keeping the CTRL key pressed) and drag & drop them using the RIGHT mouse button. Please visit Help -> Cheatsheet in PlotJuggler for more tips about it.
    • If the curves in your plots are too noisy, you can adjust the odom_interval and low_pass_filter_gain from here to reduce the noise.
    "},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-real-time-performance/","title":"Evaluating real-time performance","text":""},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-real-time-performance/#evaluating-real-time-performance","title":"Evaluating real-time performance","text":""},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-real-time-performance/#introduction","title":"Introduction","text":"

    Autoware should be a real-time system when integrated into a service. Therefore, the response time of each callback should be as small as possible. If Autoware appears to be slow, it is imperative to conduct performance measurements and implement improvements based on the analysis. However, Autoware is a complex software system comprising numerous ROS 2 nodes, potentially complicating the process of identifying bottlenecks. To address this challenge, we will discuss methods for conducting detailed performance measurements for Autoware and provide case studies. It is worth noting that multiple factors can contribute to poor performance, such as scheduling and memory allocation in the OS layer, but our focus in this page will be on user code bottlenecks. The outline of this section is as follows:

    • Performance measurement
      • Single node execution
      • Prepare separated cores
      • Run single node separately
      • Measurement and visualization
    • Case studies
      • Sensing component
      • Planning component
    "},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-real-time-performance/#performance-measurement","title":"Performance measurement","text":"

    Improvement is impossible without precise measurements. To measure the performance of the application code, it is essential to eliminate any external influences. Such influences include interference from the operating system and CPU frequency fluctuations. Scheduling effects also occur when core resources are shared by multiple threads. This section outlines a technique for accurately measuring the performance of the application code for a specific node. Though this section only discusses the case of Linux on Intel CPUs, similar considerations should be made in other environments.

    "},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-real-time-performance/#single-node-execution","title":"Single node execution","text":"

    To eliminate the influence of scheduling, the node being measured should operate independently, using the same logic as when the entire Autoware system is running. To accomplish this, record all input topics of the node to be measured while the whole Autoware system is running. To achieve this objective, a tool called ros2_single_node_replayer has been prepared.

    Details on how to use the tool can be found in the README. This tool records the input topics of a specific node during the entire Autoware operation and replays it in a single node with the same logic. The tool relies on the ros2 bag record command, and the recording of service/action is not supported as of ROS 2 Humble, so nodes that use service/action as their main logic may not work well.

    "},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-real-time-performance/#prepare-separated-cores","title":"Prepare separated cores","text":"

    Isolated cores running the node to be measured must meet the following conditions.

    • Fix CPU frequency and disable turbo boost
    • Minimize timer interruptions
    • Offload RCU (Read Copy Update) callback
    • Isolate the paired core if hyper-threading enabled

    To fulfill these conditions on Linux, a custom kernel build with the following kernel configurations is required. You can find many resources to instruct you on how to build a custom Linux kernel (like this one). Note that even if full tickless is enabled, timer interrupts are generated for scheduling if more than two tasks exist in one core.

    # Enable CONFIG_NO_HZ_FULL\n-> General setup\n-> Timers subsystem\n-> Timer tick handling (Full dynticks system (tickless))\n(X) Full dynticks system (tickless)\n\n# Allows RCU callback processing to be offloaded from selected CPUs\n# (CONFIG_RCU_NOCB_CPU=y)\n-> General setup\n-> RCU Subsystem\n-*- Offload RCU callback processing from boot-selected CPUs\n

    Additionally, the kernel boot parameters need to be set as follows.

    GRUB_CMDLINE_LINUX_DEFAULT=\n  \"... isolcpus=2,8 rcu_nocbs=2,8 rcu_nocb_poll nohz_full=2,8 intel_pstate=disable\"\n

    In the above configuration, for example, the node to be measured is assumed to run on core 2, and core 8, which is its hyper-threading pair, is also isolated. Appropriate decisions on which cores to run the measurement target on and which cores to isolate need to be made based on the cache and core layout of the measurement machine. You can easily check if it is properly configured by running cat /proc/softirqs. Since intel_pstate=disable is specified in the kernel boot parameters, userspace can be specified as the scaling governor.

    cat /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor // ondemand\nsudo sh -c \"echo userspace > /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor\"\n

    This allows you to freely set the desired frequency within a defined range.

    sudo sh -c \"echo <freq(kz)> > /sys/devices/system/cpu/cpu2/cpufreq/scaling_setspeed\"\n

    Turbo Boost needs to be switched off on Intel CPUs, which is often overlooked.

    sudo sh -c \"echo 0 > /sys/devices/system/cpu/cpufreq/boost\"\n
    "},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-real-time-performance/#run-single-node-separately","title":"Run single node separately","text":"

    Following the instructions in the ros2_single_node_replayer README, start the node and play the dedicated rosbag created by the tool. Before playing the rosbag, appropriately set the CPU affinity of the thread on which the node runs, so it is placed on the isolated core prepared.

    taskset --cpu-list -p <target cpu> <pid>\n

    To avoid interference in the last level cache, minimize the number of other applications running during the measurement.

    "},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-real-time-performance/#measurement-and-visualization","title":"Measurement and visualization","text":"

    To visualize the performance of the measurement target, embed code for logging timestamps and performance counter values in the target source code. To achieve this objective, a tool called pmu_analyzer has been prepared.

    Details on how to use the tool can be found in the README. This tool can measure the turnaround time of any section in the source code, as well as various performance counters.
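    If pmu_analyzer is not available in your environment, the same basic idea can be sketched with plain C++ timestamp logging, as shown below. This is only a hedged sketch: it records wall-clock turnaround time (no performance counters), and the file path and function name are arbitrary examples, not part of Autoware or pmu_analyzer.

    #include <chrono>
    #include <fstream>

    void measured_section()
    {
      // Append one turnaround-time sample per invocation; the output path is arbitrary.
      static std::ofstream log_file{"/tmp/turnaround_time.csv", std::ios::app};
      const auto start = std::chrono::steady_clock::now();

      // ... main processing of the target callback goes here ...

      const auto end = std::chrono::steady_clock::now();
      const auto elapsed_us =
        std::chrono::duration_cast<std::chrono::microseconds>(end - start).count();
      log_file << elapsed_us << "\n";
    }

    The resulting CSV can then be plotted with any tool (for example PlotJuggler or a spreadsheet) to obtain time-series graphs like the ones shown in the case studies below.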

    "},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-real-time-performance/#case-studies","title":"Case studies","text":"

    In this section, we will present several case studies that demonstrate the performance improvements. These examples not only showcase our commitment to enhancing the system's efficiency but also serve as a valuable resource for developers who may face similar challenges in their own projects. The performance improvements discussed here span various components of the Autoware system, including sensing modules and planning modules. There are tendencies for each component regarding which points are becoming bottlenecks. By examining the methods, techniques, and tools employed in these case studies, readers can gain a better understanding of the practical aspects of optimizing complex software systems like Autoware.

    "},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-real-time-performance/#sensing-component","title":"Sensing component","text":"

    First, we will explain the procedure for performance improvement, taking the node ring_outlier_filter as an example. Refer to the Pull Request for details.

    The following figure is a time-series plot of the turnaround time of the main processing part of ring_outlier_filter, analyzed as described in the \"Performance Measurement\" section above.

    The horizontal axis indicates the number of callbacks called (i.e., callback index), and the vertical axis indicates the turnaround time.

    When analyzing the performance of the sensing module from the viewpoint of performance counter, pay attention to instructions, LLC-load-misses, LLC-store-misses, cache-misses, and minor-faults.

    Analysis of the performance counter shows that the largest fluctuations come from minor-faults (i.e., soft page faults), the second largest from LLC-store-misses and LLC-load-misses (i.e., cache misses in the last level cache), and the slowest fluctuations come from instructions (i.e., message data size fluctuations). For example, when we plot minor-faults on the horizontal axis and turnaround time on the vertical axis, we can see the following dominant proportional relationship.

    To achieve zero soft page faults, heap allocations must only be made from areas that have been first touched in advance. We have developed a library called heaphook to avoid soft page faults while running Autoware callbacks. If you are interested, refer to the GitHub discussion and the issue.

    To reduce LLC misses, it is necessary to reduce the working set and to use cache-efficient access patterns.

    In the sensing component, which handles large message data such as LiDAR point cloud data, minimizing copying is important. A callback that takes sensor data message types as input and output should be written as an in-place algorithm as much as possible. This means that in the following pseudocode, when generating output_msg from input_msg, it is crucial to avoid using intermediate buffers as much as possible to reduce the number of memory copies.

    void callback(const PointCloudMsg &input_msg) {\nauto output_msg = allocate_msg<PointCloudMsg>(output_size);\nfill(input_msg, output_msg);\npublish(std::move(output_msg));\n}\n

    To improve cache efficiency, implement an in-place style as much as possible, instead of touching memory areas sporadically. In ROS applications using PCL, the code shown below is often seen.

    void callback(const sensor_msgs::PointCloud2ConstPtr &input_msg) {\npcl::PointCloud<PointT>::Ptr input_pcl(new pcl::PointCloud<PointT>);\npcl::fromROSMsg(*input_msg, *input_pcl);\n\n// Algorithm is described for point cloud type of pcl\npcl::PointCloud<PointT>::Ptr output_pcl(new pcl::PointCloud<PointT>);\nfill_pcl(*input_pcl, *output_pcl);\n\nauto output_msg = allocate_msg<sensor_msgs::PointCloud2>(output_size);\npcl::toROSMsg(*output_pcl, *output_msg);\npublish(std::move(output_msg));\n}\n

    To use the PCL library, fromROSMsg() and toROSMsg() are used to perform message type conversion at the beginning and end of the callback. This is a wasteful copying process and should be avoided. We should eliminate unnecessary type conversions by removing dependencies on PCL (e.g., https://github.com/tier4/velodyne_vls/pull/39). For large message types such as map data, there should be only one instance in the entire system in terms of physical memory.
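    As an illustration of this in-place style, below is a hedged sketch (not the actual ring_outlier_filter implementation) that filters a sensor_msgs::msg::PointCloud2 directly through the sensor_msgs iterators, without converting to pcl::PointCloud. The keep_point() predicate and the commented-out publish() call are placeholders.

    #include <cstddef>
    #include <cstdint>
    #include <memory>

    #include <sensor_msgs/msg/point_cloud2.hpp>
    #include <sensor_msgs/point_cloud2_iterator.hpp>

    // Placeholder predicate standing in for the actual filtering condition.
    inline bool keep_point(float x, float y, float z)
    {
      return x * x + y * y + z * z > 1.0f;
    }

    void callback(const sensor_msgs::msg::PointCloud2 & input_msg)
    {
      // Reuse the input layout; no intermediate pcl::PointCloud is created.
      auto output_msg = std::make_unique<sensor_msgs::msg::PointCloud2>();
      output_msg->header = input_msg.header;
      output_msg->fields = input_msg.fields;
      output_msg->is_bigendian = input_msg.is_bigendian;
      output_msg->point_step = input_msg.point_step;
      output_msg->height = 1;
      output_msg->data.reserve(input_msg.data.size());  // single allocation up front

      sensor_msgs::PointCloud2ConstIterator<float> x(input_msg, "x");
      sensor_msgs::PointCloud2ConstIterator<float> y(input_msg, "y");
      sensor_msgs::PointCloud2ConstIterator<float> z(input_msg, "z");
      uint32_t kept = 0;
      const std::size_t num_points = static_cast<std::size_t>(input_msg.width) * input_msg.height;
      for (std::size_t i = 0; i < num_points; ++i, ++x, ++y, ++z) {
        if (!keep_point(*x, *y, *z)) {
          continue;
        }
        // Copy the raw bytes of the surviving point straight into the output buffer.
        const uint8_t * src = &input_msg.data[i * input_msg.point_step];
        output_msg->data.insert(output_msg->data.end(), src, src + input_msg.point_step);
        ++kept;
      }
      output_msg->width = kept;
      output_msg->row_step = kept * output_msg->point_step;
      // publish(std::move(output_msg));  // hand the buffer over without another copy
    }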

    "},{"location":"how-to-guides/integrating-autoware/tuning-parameters-and-performance/evaluating-real-time-performance/#planning-component","title":"Planning component","text":"

    First, we will pick up the detection_area module in the behavior_velocity_planner node, which tends to have a long turnaround time. We have followed the performance analysis steps above to obtain the following graph. The axes are the same as in the graphs from the sensing case study.

    Using the pmu_analyzer tool to further identify the bottleneck, we found that the following nested loops were taking up a lot of processing time:

    for ( area : detection_areas )\nfor ( point : point_clouds )\nif ( boost::geometry::within(point, area) )\n// do something with O(1)\n

    It checks whether each point in the point cloud is contained in each detection area. Let N be the size of point_clouds and M be the size of detection_areas; then the computational complexity of this program is O(N^2 * M), since the complexity of within is O(N). Here, given that most of the point clouds are located far away from a certain detection area, a certain optimization can be achieved. First, calculate the minimum enclosing circle that completely covers the detection area, and then check whether the points are contained in that circle. Most of the point clouds can be quickly ruled out by this method, so we don't have to call the within function in most cases. Below is the pseudocode after optimization.

    for ( area : detection_areas )\ncircle = calc_minimum_enclosing_circle(area)\nfor ( point : point_clouds )\nif ( point is in circle )\nif ( boost::geometry::within(point, area) )\n// do something with O(1)\n

    By using an O(N) algorithm for the minimum enclosing circle, the computational complexity of this program is reduced to almost O(N * (N + M)) (note that the exact computational complexity does not really change). If you are interested, refer to the Pull Request.
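    A hedged C++ sketch of this pre-filtering idea, using Boost.Geometry, is shown below. For brevity it builds the circle from the polygon centroid and its farthest vertex rather than computing the true minimum enclosing circle used in the actual change; the type and function names are illustrative, not Autoware code.

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    #include <boost/geometry.hpp>
    #include <boost/geometry/geometries/point_xy.hpp>
    #include <boost/geometry/geometries/polygon.hpp>

    namespace bg = boost::geometry;
    using Point = bg::model::d2::point_xy<double>;
    using Polygon = bg::model::polygon<Point>;

    struct Circle
    {
      Point center;
      double radius;
    };

    // Enclosing circle from the centroid and the farthest vertex (looser than the minimum one).
    Circle calc_enclosing_circle(const Polygon & area)
    {
      Circle circle{Point{0.0, 0.0}, 0.0};
      bg::centroid(area, circle.center);
      for (const auto & vertex : area.outer()) {
        circle.radius = std::max(circle.radius, bg::distance(circle.center, vertex));
      }
      return circle;
    }

    std::size_t count_points_in_areas(
      const std::vector<Polygon> & detection_areas, const std::vector<Point> & point_cloud)
    {
      std::size_t hits = 0;
      for (const auto & area : detection_areas) {
        const Circle circle = calc_enclosing_circle(area);
        for (const auto & point : point_cloud) {
          // Cheap circle test first: the expensive within() runs only for nearby points.
          if (bg::distance(point, circle.center) > circle.radius) {
            continue;
          }
          if (bg::within(point, area)) {
            ++hits;
          }
        }
      }
      return hits;
    }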

    Similar to this example, in the planning component, we take into consideration thousands to tens of thousands of point clouds, thousands of points in a path representing our own route, and polygons representing obstacles and detection areas in the surroundings, and we repeatedly create paths based on them. Therefore, we access the contents of the point clouds and paths multiple times using for-loops. In most cases, the bottleneck lies in these naive for-loops. Here, understanding Big O notation and reducing the order of computational complexity directly leads to performance improvements.

    "},{"location":"how-to-guides/others/add-a-custom-ros-message/","title":"Add a custom ROS message","text":""},{"location":"how-to-guides/others/add-a-custom-ros-message/#add-a-custom-ros-message","title":"Add a custom ROS message","text":""},{"location":"how-to-guides/others/add-a-custom-ros-message/#overview","title":"Overview","text":"

    During Autoware development, you will probably need to define your own messages. Read the following instructions before adding a custom message.

    1. Messages in autoware_msgs define the interfaces of Autoware Core.

      • If a contributor wishes to make changes or add new messages to autoware_msgs, they should first create a new discussion post under the Design category.
    2. Any other minor or proposal messages used for internal communication within a component (such as planning) should be defined in another repository.

      • tier4_autoware_msgs is an example of that.

    The following is a simple tutorial of adding a message package to autoware_msgs. For the general ROS 2 tutorial, see Create custom msg and srv files.

    "},{"location":"how-to-guides/others/add-a-custom-ros-message/#how-to-create-custom-message","title":"How to create custom message","text":"

    Make sure you are in the Autoware workspace, and then run the following command to create a new package. As an example, let's create a package to define sensor messages.

    1. Create a package

      cd ./src/core/autoware_msgs\nros2 pkg create --build-type ament_cmake autoware_sensing_msgs\n
    2. Create custom messages

      You should create .msg files and place them in the msg directory.

      NOTE: The initial letters of the .msg and .srv files must be capitalized.

      As an example, let's make .msg files GnssInsOrientation.msg and GnssInsOrientationStamped.msg to define GNSS/INS orientation messages:

      mkdir msg\ncd msg\ntouch GnssInsOrientation.msg\ntouch GnssInsOrientationStamped.msg\n

      Edit GnssInsOrientation.msg with your editor to be the following content:

      geometry_msgs/Quaternion orientation\nfloat32 rmse_rotation_x\nfloat32 rmse_rotation_y\nfloat32 rmse_rotation_z\n

      In this case, the custom message uses a message from another message package geometry_msgs/Quaternion.

      Edit GnssInsOrientationStamped.msg with your editor to be the following content:

      std_msgs/Header header\nGnssInsOrientation orientation\n

      In this case, the custom message uses a message from another message package std_msgs/Header.

    3. Edit CMakeLists.txt

      In order to use this custom message in C++ or Python, we need to add the following lines to CMakeLists.txt:

      rosidl_generate_interfaces(${PROJECT_NAME}\n\"msg/GnssInsOrientation.msg\"\n\"msg/GnssInsOrientationStamped.msg\"\nDEPENDENCIES\ngeometry_msgs\nstd_msgs\nADD_LINTER_TESTS\n)\n

      The ament_cmake_auto tool is very useful and is more widely used in Autoware, so we recommend using ament_cmake_auto instead of ament_cmake.

      We need to replace

      find_package(ament_cmake REQUIRED)\n\nament_package()\n

      with

      find_package(ament_cmake_auto REQUIRED)\n\nament_auto_package()\n
    4. Edit package.xml

      We need to declare relevant dependencies in package.xml. For the above example we need to add the following content:

      <buildtool_depend>rosidl_default_generators</buildtool_depend>\n\n<exec_depend>rosidl_default_runtime</exec_depend>\n\n<depend>geometry_msgs</depend>\n<depend>std_msgs</depend>\n\n<member_of_group>rosidl_interface_packages</member_of_group>\n

      We need to replace <buildtool_depend>ament_cmake</buildtool_depend> with <buildtool_depend>ament_cmake_auto</buildtool_depend> in the package.xml file.

    5. Build the custom message package

      You can build the package in the root of your workspace, for example by running the following command:

      colcon build --packages-select autoware_sensing_msgs\n

      Now the GnssInsOrientationStamped message will be discoverable by other packages in Autoware.

    "},{"location":"how-to-guides/others/add-a-custom-ros-message/#how-to-use-custom-messages-in-autoware","title":"How to use custom messages in Autoware","text":"

    You can use the custom messages in Autoware by following these steps:

    • Add dependency in package.xml.
      • For example, <depend>autoware_sensing_msgs</depend>.
    • Include the .hpp file of the relevant message in the code.
      • For example, #include <autoware_sensing_msgs/msg/gnss_ins_orientation_stamped.hpp>.
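    Putting these steps together, below is a hedged sketch of a minimal node that publishes the GnssInsOrientationStamped message created in this tutorial. The topic name, frame id, and values are placeholders, and the node assumes its own package declares autoware_sensing_msgs as a dependency as described above.

    #include <chrono>
    #include <memory>

    #include <rclcpp/rclcpp.hpp>
    #include <autoware_sensing_msgs/msg/gnss_ins_orientation_stamped.hpp>

    class OrientationPublisher : public rclcpp::Node
    {
    public:
      OrientationPublisher() : Node("orientation_publisher")
      {
        pub_ = create_publisher<autoware_sensing_msgs::msg::GnssInsOrientationStamped>(
          "gnss_ins_orientation", rclcpp::QoS(1));
        timer_ = create_wall_timer(std::chrono::milliseconds(100), [this]() {
          autoware_sensing_msgs::msg::GnssInsOrientationStamped msg;
          msg.header.stamp = now();
          msg.header.frame_id = "gnss_ins_link";  // illustrative frame id
          msg.orientation.orientation.w = 1.0;    // identity quaternion as a placeholder
          msg.orientation.rmse_rotation_x = 0.01;
          msg.orientation.rmse_rotation_y = 0.01;
          msg.orientation.rmse_rotation_z = 0.01;
          pub_->publish(msg);
        });
      }

    private:
      rclcpp::Publisher<autoware_sensing_msgs::msg::GnssInsOrientationStamped>::SharedPtr pub_;
      rclcpp::TimerBase::SharedPtr timer_;
    };

    int main(int argc, char ** argv)
    {
      rclcpp::init(argc, argv);
      rclcpp::spin(std::make_shared<OrientationPublisher>());
      rclcpp::shutdown();
      return 0;
    }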
    "},{"location":"how-to-guides/others/advanced-usage-of-colcon/","title":"Advanced usage of colcon","text":""},{"location":"how-to-guides/others/advanced-usage-of-colcon/#advanced-usage-of-colcon","title":"Advanced usage of colcon","text":"

    This page shows some advanced and useful usage of colcon. If you need more detailed information, refer to the colcon documentation.

    "},{"location":"how-to-guides/others/advanced-usage-of-colcon/#common-mistakes","title":"Common mistakes","text":""},{"location":"how-to-guides/others/advanced-usage-of-colcon/#do-not-run-from-other-than-the-workspace-root","title":"Do not run from other than the workspace root","text":"

    It is important that you always run colcon build from the workspace root because colcon builds only under the current directory. If you have mistakenly built in a wrong directory, run rm -rf build/ install/ log/ to clean the generated files.

    "},{"location":"how-to-guides/others/advanced-usage-of-colcon/#do-not-unnecessarily-overlay-workspaces","title":"Do not unnecessarily overlay workspaces","text":"

    colcon overlays workspaces if you have sourced the setup.bash of other workspaces before building a workspace. You should take care of this especially when you have multiple workspaces.

    Run echo $COLCON_PREFIX_PATH to check whether workspaces are overlaid. If you find some workspaces are unnecessarily overlaid, remove all built files, restart the terminal to clean environment variables, and re-build the workspace.

    For more details about workspace overlaying, refer to the ROS 2 documentation.

    "},{"location":"how-to-guides/others/advanced-usage-of-colcon/#cleaning-up-the-build-artifacts","title":"Cleaning up the build artifacts","text":"

    colcon sometimes causes errors because of an old cache. To remove the cache and rebuild the workspace, run the following command:

    rm -rf build/ install/\n

    In case you know what packages to remove:

    rm -rf {build,install}/{package_a,package_b}\n
    "},{"location":"how-to-guides/others/advanced-usage-of-colcon/#selecting-packages-to-build","title":"Selecting packages to build","text":"

    To just build specified packages:

    colcon build --packages-select <package_name1> <package_name2> ...\n

    To build specified packages and their dependencies recursively:

    colcon build --packages-up-to <package_name1> <package_name2> ...\n

    You can also use these options for colcon test.

    "},{"location":"how-to-guides/others/advanced-usage-of-colcon/#changing-the-optimization-level","title":"Changing the optimization level","text":"

    Set DCMAKE_BUILD_TYPE to change the optimization level.

    Warning

    If you specify DCMAKE_BUILD_TYPE=Debug or no DCMAKE_BUILD_TYPE is given for building the entire Autoware, it may be too slow to use.

    colcon build --cmake-args -DCMAKE_BUILD_TYPE=Debug\n
    colcon build --cmake-args -DCMAKE_BUILD_TYPE=RelWithDebInfo\n
    colcon build --cmake-args -DCMAKE_BUILD_TYPE=Release\n
    "},{"location":"how-to-guides/others/advanced-usage-of-colcon/#changing-the-default-configuration-of-colcon","title":"Changing the default configuration of colcon","text":"

    Create $COLCON_HOME/defaults.yaml to change the default configuration.

    mkdir -p ~/.colcon\ncat << EOS > ~/.colcon/defaults.yaml\n{\n\"build\": {\n\"symlink-install\": true\n}\n}\n

    For more details, see here.

    "},{"location":"how-to-guides/others/advanced-usage-of-colcon/#generating-compile_commandsjson","title":"Generating compile_commands.json","text":"

    compile_commands.json is used by IDEs/tools to analyze the build dependencies and symbol relationships.

    You can generate it with the flag DCMAKE_EXPORT_COMPILE_COMMANDS=1:

    colcon build --cmake-args -DCMAKE_EXPORT_COMPILE_COMMANDS=1\n
    "},{"location":"how-to-guides/others/advanced-usage-of-colcon/#seeing-compiler-commands","title":"Seeing compiler commands","text":"

    To see the compiler and linker invocations for a package, use VERBOSE=1 and --event-handlers console_cohesion+:

    VERBOSE=1 colcon build --packages-up-to <package_name> --event-handlers console_cohesion+\n

    For other options, see here.

    "},{"location":"how-to-guides/others/advanced-usage-of-colcon/#using-ccache-to-speed-up-recompilation","title":"Using Ccache to speed up recompilation","text":"

    Ccache is a compiler cache that can significantly speed up recompilation by caching previous compilations and reusing them when the same compilation is being done again. It's highly recommended for developers looking to optimize their build times, unless there's a specific reason to avoid it.

    "},{"location":"how-to-guides/others/advanced-usage-of-colcon/#step-1-install-ccache","title":"Step 1: Install Ccache","text":"
    sudo apt update && sudo apt install ccache\n
    "},{"location":"how-to-guides/others/advanced-usage-of-colcon/#step-2-configure-ccache","title":"Step 2: Configure Ccache","text":"
    1. Create the Ccache configuration folder and file:

      mkdir -p ~/.cache/ccache\ntouch ~/.cache/ccache/ccache.conf\n
    2. Set the maximum cache size. The default size is 5GB, but you can increase it depending on your needs. Here, we're setting it to 60GB:

      echo \"max_size = 60G\" >> ~/.cache/ccache/ccache.conf\n
    "},{"location":"how-to-guides/others/advanced-usage-of-colcon/#step-3-integrate-ccache-with-your-environment","title":"Step 3: Integrate Ccache with Your Environment","text":"

    To ensure Ccache is used for compilation, add the following lines to your .bashrc file. This will redirect GCC and G++ calls through Ccache.

    export CC=\"/usr/lib/ccache/gcc\"\nexport CXX=\"/usr/lib/ccache/g++\"\nexport CCACHE_DIR=\"$HOME/.cache/ccache/\"\n

    After adding these lines, reload your .bashrc or restart your terminal session to apply the changes.

    "},{"location":"how-to-guides/others/advanced-usage-of-colcon/#step-4-verify-ccache-is-working","title":"Step 4: Verify Ccache is Working","text":"

    To confirm Ccache is correctly set up and being used, you can check the statistics of cache usage:

    ccache -s\n

    This command displays the cache hit rate and other relevant statistics, helping you gauge the effectiveness of Ccache in your development workflow.

    "},{"location":"how-to-guides/others/an-example-procedure-for-adding-and-evaluating-a-new-node/","title":"An example procedure for adding and evaluating a new node","text":""},{"location":"how-to-guides/others/an-example-procedure-for-adding-and-evaluating-a-new-node/#an-example-procedure-for-adding-and-evaluating-a-new-node","title":"An example procedure for adding and evaluating a new node","text":""},{"location":"how-to-guides/others/an-example-procedure-for-adding-and-evaluating-a-new-node/#overview","title":"Overview","text":"

    This page provides a guide for evaluating Autoware when a new node is implemented, especially about developing a novel localization node.

    The workflow involves initial testing and rosbag recording using a real vehicle or AWSIM, implementing the new node, subsequent testing using the recorded rosbag, and finally evaluating with a real vehicle or AWSIM.

    It is assumed that the method intended for addition has already been verified well with public datasets and so on.

    "},{"location":"how-to-guides/others/an-example-procedure-for-adding-and-evaluating-a-new-node/#1-running-autoware-in-its-standard-configuration","title":"1. Running Autoware in its standard configuration","text":"

    First of all, it is important to be able to run the standard Autoware to establish a basis for performance and behavior comparison.

    Autoware constantly incorporates new features. It is crucial to initially confirm that it operates as expected with the current version, which helps in problem troubleshooting.

    In this context, the use of AWSIM is presumed, so the AWSIM simulator can be useful. If you are using actual hardware, please refer to the How-to guides.

    "},{"location":"how-to-guides/others/an-example-procedure-for-adding-and-evaluating-a-new-node/#2-recording-a-rosbag-using-autoware","title":"2. Recording a rosbag using Autoware","text":"

    Before developing a new node, it is recommended to record a rosbag for later evaluation. If you need a new sensor, you should add it to your vehicle or AWSIM.

    In this case, it is recommended to save all topics regardless of whether they are necessary or not. For example, in Localization, since the initial position estimation service is triggered by the input to rviz and the GNSS topic, the initial position estimation does not start when playing back data unless those topics are saved.

    Consider the use of the mcap format if data capacity becomes a concern.

    It is worth noting that using ros2 bag record increases computational load and might affect performance. After data recording, verifying the smooth flow of sensor data and unchanged time series is advised. This verification can be accomplished, for example, by inspecting the image data with rqt_image_view during ros2 bag play.

    "},{"location":"how-to-guides/others/an-example-procedure-for-adding-and-evaluating-a-new-node/#3-developing-the-new-node","title":"3. Developing the new node","text":"

    When developing a new node, it could be beneficial to reference a package that is similar to the one you intend to create.

    It is advisable to thoroughly read the Design page, contemplate the addition or replacement of nodes in Autoware, and then implement your solution.

    For example, a node doing NDT, a LiDAR-based localization method, is ndt_scan_matcher. If you want to replace this with a different approach, implement a node which produces the same topics and provides the same services.

    ndt_scan_matcher is launched as pose_estimator, so it is necessary to replace the launch file as well.

    "},{"location":"how-to-guides/others/an-example-procedure-for-adding-and-evaluating-a-new-node/#4-evaluating-by-a-rosbag-based-simulator","title":"4. Evaluating by a rosbag-based simulator","text":"

    Once the new node is implemented, it is time to evaluate it. logging_simulator is a tool for evaluating the new node using the rosbag captured in step 2.

    When you run the logging_simulator, you can set planning:=false or control:=false to disable the launch of specific component nodes.

    ros2 launch autoware_launch logging_simulator.launch.xml ... planning:=false control:=false

    After launching logging_simulator, the rosbag file obtained in step 2 should be replayed using ros2 bag play <rosbag_file>.

    If you remap the topics related to the localization function that you want to verify, Autoware will use the data it calculates this time instead of the recorded data. Also, using the --topics option of ros2 bag play, you can publish only specific topics from the rosbag.

    The ros2bag_extensions package is available to filter the rosbag file and create a new rosbag file that contains only the topics you need.

    "},{"location":"how-to-guides/others/an-example-procedure-for-adding-and-evaluating-a-new-node/#5-evaluating-in-a-realtime-environment","title":"5. Evaluating in a realtime environment","text":"

    Once you have sufficiently verified the behavior in the logging_simulator, let's run Autoware with the new node added in the real-time environment.

    To debug Autoware, the method described at debug-autoware is useful.

    For reproducibility, you may want to fix the GoalPose. In such cases, consider using the tier4_automatic_goal_rviz_plugin.

    "},{"location":"how-to-guides/others/an-example-procedure-for-adding-and-evaluating-a-new-node/#6-sharing-the-results","title":"6. Sharing the results","text":"

    If your implementation works successfully, please consider submitting a pull request to Autoware.

    It is also a good idea to start by presenting your ideas in Discussion at Show and tell.

    For localization, YabLoc's Proposal may provide valuable insights.

    "},{"location":"how-to-guides/others/applying-clang-tidy-to-ros-packages/","title":"Applying Clang-Tidy to ROS packages","text":""},{"location":"how-to-guides/others/applying-clang-tidy-to-ros-packages/#applying-clang-tidy-to-ros-packages","title":"Applying Clang-Tidy to ROS packages","text":"

    Clang-Tidy is a powerful C++ linter.

    "},{"location":"how-to-guides/others/applying-clang-tidy-to-ros-packages/#preparation","title":"Preparation","text":"

    You need to generate build/compile_commands.json before using Clang-Tidy.

    colcon build --cmake-args -DCMAKE_EXPORT_COMPILE_COMMANDS=1\n
    "},{"location":"how-to-guides/others/applying-clang-tidy-to-ros-packages/#usage","title":"Usage","text":"
    clang-tidy -p build/ path/to/file1 path/to/file2 ...\n

    If you want to apply Clang-Tidy to all files in a package, using the fd command is useful. To install fd, see the installation manual.

    clang-tidy -p build/ $(fd -e cpp -e hpp --full-path \"/autoware_utils/\")\n
    "},{"location":"how-to-guides/others/applying-clang-tidy-to-ros-packages/#ide-integration","title":"IDE integration","text":""},{"location":"how-to-guides/others/applying-clang-tidy-to-ros-packages/#clion","title":"CLion","text":"

    Refer to the CLion Documentation.

    "},{"location":"how-to-guides/others/applying-clang-tidy-to-ros-packages/#visual-studio-code","title":"Visual Studio Code","text":"

    Use either one of the following extensions:

    • C/C++
    • clangd
    "},{"location":"how-to-guides/others/applying-clang-tidy-to-ros-packages/#troubleshooting","title":"Troubleshooting","text":"

    If you encounter clang-diagnostic-error, try installing libomp-dev.

    Related: https://github.com/autowarefoundation/autoware-github-actions/pull/172

    "},{"location":"how-to-guides/others/debug-autoware/","title":"Debug Autoware","text":""},{"location":"how-to-guides/others/debug-autoware/#debug-autoware","title":"Debug Autoware","text":"

    This page provides some methods for debugging Autoware.

    "},{"location":"how-to-guides/others/debug-autoware/#print-debug-messages","title":"Print debug messages","text":"

    The essential thing for debugging is to print program information clearly, which helps you quickly judge how the program is operating and locate the problem. Autoware uses the ROS 2 logging tool to print debug messages; for how to design console logging, refer to the Console logging tutorial.
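    As a small hedged example (node name, messages, and values are arbitrary, not Autoware code), the sketch below shows the typical logging macros in a C++ node, including a throttled message for frequently called paths.

    #include <chrono>
    #include <memory>

    #include <rclcpp/rclcpp.hpp>

    class DebugExample : public rclcpp::Node
    {
    public:
      DebugExample() : Node("debug_example")
      {
        RCLCPP_INFO(get_logger(), "node started");
        timer_ = create_wall_timer(std::chrono::seconds(1), [this]() {
          // DEBUG messages are hidden unless the logger level is lowered at run time.
          RCLCPP_DEBUG(get_logger(), "periodic tick %d", count_);
          if (++count_ % 10 == 0) {
            // Throttled output avoids flooding the console from a hot code path.
            RCLCPP_WARN_THROTTLE(get_logger(), *get_clock(), 5000, "slow path taken");
          }
        });
      }

    private:
      int count_{0};
      rclcpp::TimerBase::SharedPtr timer_;
    };

    int main(int argc, char ** argv)
    {
      rclcpp::init(argc, argv);
      rclcpp::spin(std::make_shared<DebugExample>());
      rclcpp::shutdown();
      return 0;
    }

    Debug-level output can then be enabled without recompiling, for example with ros2 run <your_package> <your_executable> --ros-args --log-level debug.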

    "},{"location":"how-to-guides/others/debug-autoware/#using-ros-tools-debug-autoware","title":"Using ROS tools debug Autoware","text":""},{"location":"how-to-guides/others/debug-autoware/#using-command-line-tools","title":"Using command line tools","text":"

    ROS 2 includes a suite of command-line tools for introspecting a ROS 2 system. The main entry point for the tools is the command ros2, which itself has various sub-commands for introspecting and working with nodes, topics, services, and more. For how to use the ROS 2 command-line tools, refer to the CLI tools tutorial.

    "},{"location":"how-to-guides/others/debug-autoware/#using-rviz2","title":"Using rviz2","text":"

    Rviz2 is a port of Rviz to ROS 2. It provides a graphical interface for users to view their robot, sensor data, maps, and more. You can run the Rviz2 tool easily by:

    rviz2\n

    When Autoware launches the simulators, the Rviz2 tool is opened by default to visualize the autonomous driving graphical information.

    "},{"location":"how-to-guides/others/debug-autoware/#using-rqt-tools","title":"Using rqt tools","text":"

    RQt is a graphical user interface framework that implements various tools and interfaces in the form of plugins. You can run any RQt tools/plugins easily by:

    rqt\n

    This GUI allows you to choose any available plugins on your system. You can also run plugins in standalone windows. For example, RQt Console:

    ros2 run rqt_console rqt_console\n
    "},{"location":"how-to-guides/others/debug-autoware/#common-rqt-tools","title":"Common RQt tools","text":"
    1. rqt_graph: view node interaction

      In complex applications, it may be helpful to get a visual representation of the ROS node interactions.

      ros2 run rqt_graph rqt_graph\n
    2. rqt_console: view messages

      rqt_console is a great GUI for viewing and filtering ROS log messages.

      ros2 run rqt_console rqt_console\n
    3. rqt_plot: view data plots

      rqt_plot is an easy way to plot ROS data in real time.

      ros2 run rqt_plot rqt_plot\n
    "},{"location":"how-to-guides/others/debug-autoware/#using-ros2_graph","title":"Using ros2_graph","text":"

    ros2_graph can be used to generate Mermaid descriptions of ROS 2 graphs to add to your markdown files.

    It can also be used as a colorful alternative to rqt_graph even though it would require some tool to render the generated mermaid diagram.

    It can be installed with:

    pip install ros2-graph\n

    Then you can generate a mermaid description of the graph with:

    ros2_graph your_node\n\n# or like with an output file\nros2_graph /turtlesim -o turtle_diagram.md\n\n# or multiple nodes\nros2_graph /turtlesim /teleop_turtle\n

    You can then visualize these graphs with:

    • Mermaid Live Editor
    • Visual Studio Code extension mermaid preview
    • JetBrains IDEs with native support
    "},{"location":"how-to-guides/others/debug-autoware/#using-ros2doctor","title":"Using ros2doctor","text":"

    When your ROS 2 setup is not running as expected, you can check its settings with the ros2doctor tool.

    ros2doctor checks all aspects of ROS 2, including platform, version, network, environment, running systems and more, and warns you about possible errors and reasons for issues.

    It's as simple as just running ros2 doctor in your terminal.

    It has the ability to list \"Subscribers without publishers\" for all topics in the system.

    This information can help you find out whether a necessary node is not running.

    For more details, see the following official documentation for Using ros2doctor to identify issues.

    "},{"location":"how-to-guides/others/debug-autoware/#using-a-debugger-with-breakpoints","title":"Using a debugger with breakpoints","text":"

    Many IDEs (e.g., Visual Studio Code, CLion) support debugging C/C++ executables with GDB on the Linux platform. The following lists some references for using a debugger:

    • https://code.visualstudio.com/docs/cpp/cpp-debug
    • https://www.jetbrains.com/help/clion/debugging-code.html#useful-debugger-shortcuts
    "},{"location":"how-to-guides/others/defining-temporal-performance-metrics/","title":"Defining temporal performance metrics on components","text":""},{"location":"how-to-guides/others/defining-temporal-performance-metrics/#defining-temporal-performance-metrics-on-components","title":"Defining temporal performance metrics on components","text":""},{"location":"how-to-guides/others/defining-temporal-performance-metrics/#motivation-to-defining-temporal-performance-metrics","title":"Motivation to defining temporal performance metrics","text":""},{"location":"how-to-guides/others/defining-temporal-performance-metrics/#objective-of-the-page","title":"Objective of the page","text":"

    This page introduces policies to define metrics for evaluating the temporal performance of Autoware components. The term \"temporal performance\" is used throughout the page in order to distinguish time-related performance from functional performance, which is also referred to as accuracy.

    It is expected that most algorithms employed in Autoware are executed with as high a frequency and as short a response time as possible. In order to achieve safe autonomous driving, one of the desired outcomes is that there is no time gap between the perceived and the actual situation. This time gap is commonly referred to as delay. If the delay is significant, the system may determine its trajectory and maneuver based on an outdated situation. Consequently, if the actual situation differs from the perceived one due to the delay, the system may make unexpected decisions.

    As mentioned above, this page presents the policies to define metrics. In addition, the page contains lists of sample metrics that are crucial for the main functionalities of Autoware: Localization, Perception, Planning, and Control.

    Note

    Other functionalities, such as system components for diagnosis, are currently excluded. However, they will be taken into account in the near future.

    "},{"location":"how-to-guides/others/defining-temporal-performance-metrics/#contribution-of-the-temporal-performance-metrics","title":"Contribution of the temporal performance metrics","text":"

    Temporal performance metrics are important for evaluating Autoware. These metrics are particularly useful for assessing delays caused by new algorithms and logic. They can be employed when comparing the temporal performance of software on a desktop computer with that on a vehicle during the vehicle integration phase.

    In addition, these metrics are useful for designers and evaluators of middleware, operating systems, and computers. They are selected based on user and product requirements. One of these requirements is to provide sufficient temporal performance for executing Autoware. \"Sufficient temporal performance\" is defined as a temporal performance requirement, but it can be challenging to define such a requirement because it varies depending on the product type, Operational Design Domain (ODD), and other factors. Therefore, this page specifically focuses on temporal performance metrics rather than requirements.

    Temporal performance metrics are important for evaluating the reliability of Autoware. However, ensuring the reliability of Autoware requires consideration of not only temporal performance metrics but also other metrics.

    "},{"location":"how-to-guides/others/defining-temporal-performance-metrics/#tools-for-evaluating-the-metrics","title":"Tools for evaluating the metrics","text":"

    There are several tools available for evaluating Autoware according to the metrics listed in the page. For example, both CARET and ros2_tracing are recommended options when evaluating Autoware on Linux and ROS 2. If you want to measure the metrics with either of these tools, refer to the corresponding user guide for instructions. It's important to note that if you import Autoware to a platform other than Linux and ROS 2, you will need to choose a supported tool for evaluation.

    Note

    TIER IV plans to measure Autoware, which is running according to the tutorial, and provide a performance evaluation report periodically. An example of such a report can be found here, although it may not include all of the metrics listed.

    The page does not aim to provide instructions on how to use these tools or measure the metrics. Its primary focus is on the metrics themselves, as they are more important than the specific tools used. These metrics retain their relevance regardless of the employed platform.

    "},{"location":"how-to-guides/others/defining-temporal-performance-metrics/#policies-to-define-temporal-performance-metrics","title":"Policies to define temporal performance metrics","text":"

    As mentioned above, the configuration of Autoware varies with the product type, ODD, and other factors. This variety of configurations makes it difficult to define uniform metrics for evaluating Autoware. However, the policies used to define them can basically be reused even when the configuration changes. Each temporal performance metric is categorized into one of two types: execution frequency and response time. Although there are other types of metrics, such as communication latency, only these two types are considered for simplicity. Execution frequency is observed using the rate of Inter-Process Communication (IPC) messages. You will find an enormous number of messages in Autoware, but you don't have to take care of all of them; some messages are critical to functionality, and those should be chosen for evaluation. Response time is the duration elapsed through a series of processing steps. Such a series of processing is referred to as a path. Response time is calculated from the timestamps of the start and end of a path. Although many paths can be defined in Autoware, you should choose the significant ones.
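    To make the two metric types concrete, the hedged sketch below (not part of Autoware) takes a single response-time sample at a path boundary by comparing the timestamp carried by the incoming message with the time at which the consuming callback runs; the message type and function name are illustrative. Execution frequency, by contrast, can be checked on a running system with ros2 topic hz <topic_name>.

    #include <rclcpp/rclcpp.hpp>
    #include <sensor_msgs/msg/point_cloud2.hpp>

    // Called from a subscription on the path boundary of interest; `node` is the consuming node.
    void on_boundary_message(const sensor_msgs::msg::PointCloud2 & msg, rclcpp::Node & node)
    {
      const rclcpp::Time start(msg.header.stamp);  // timestamp at the start of the path
      const rclcpp::Time end = node.now();         // consumption time at the end of the path
      const double response_time_ms = (end - start).seconds() * 1e3;
      RCLCPP_INFO(node.get_logger(), "response time: %.3f ms", response_time_ms);
    }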

    As a hint, here are some characteristics of messages and paths that help in choosing metrics.

    1. Messages and paths on boundaries where observed values from sensors are consumed
    2. Messages and paths on boundaries of functions, e.g., a boundary of perception and planning
    3. Messages and paths on boundaries where timer-based frequency is switched
    4. Messages and paths on boundaries where two different messages are synchronized and merged
    5. Messages that must be transmitted at expected frequency, e.g., vehicle command messages

    Those hints should be helpful for most configurations, but there may be exceptions. Defining metrics precisely requires an understanding of the configuration.

    In addition, it is recommended that metrics be determined incrementally from the architectural level to the detailed design and implementation level. Mixing metrics at different levels of granularity can be confusing.

    "},{"location":"how-to-guides/others/defining-temporal-performance-metrics/#list-of-sample-metrics","title":"List of sample metrics","text":"

    This section demonstrates how to define metrics according to the policies explained and has lists of the metrics for Autoware launched according to the tutorial. The section is divided into multiple subsections, each containing a model diagram and an accompanying list that explains the important temporal performance metrics. Each model is equipped with checkpoints that serve as indicators for these metrics.

    The first subsection presents the top-level temporal performance metrics, which are depicted in the abstract structure of Autoware as a whole. The detailed metrics are not included in the model as they would add complexity to it. Instead, the subsequent section introduces the detailed metrics. The detailed metrics are subject to more frequent updates compared to the top-level ones, which is another reason for categorizing them separately.

    Each list includes a column for the reference value. The reference value represents the observed value of each metric when Autoware is running according to the tutorial. It is important to note that the reference value is not a required value, meaning that Autoware does not necessarily fail in the tutorial execution if certain metrics do not fulfill the reference value.

    "},{"location":"how-to-guides/others/defining-temporal-performance-metrics/#top-level-temporal-performance-metrics-for-autoware","title":"Top-level temporal performance metrics for Autoware","text":"

    The diagram below introduces the model for top-level temporal performance metrics.

    The following three policies assist in selecting the top-level performance metrics:

    • Splitting Autoware based on components that consume observed values, such as sensor data, and considering the processing frequency and response time around these components
    • Dividing Autoware based on the entry point of Planning and Control and considering the processing frequency and response time around these components
    • Showing the minimum metrics for the Vehicle Interface, as they may vary depending on the target vehicle

    Additionally, it is assumed that algorithms are implemented as multiple nodes and function as a pipeline processing system.

    ID | Representation in the model | Metric meaning | Related functionality | Reference value | Reason to choose it as a metric | Note
    AWOV-001 | Message rate from CPA #9 to CPA #18 | Update rate of result from Prediction to Planning. | Perception | 10 Hz | Planning relies on fresh and up-to-date perceived data from Perception for creating accurate trajectory. |
    AWOV-002 | Response time from CPA #0 to CPA #20 via CPA #18 | Response time in main body of Perception. | Perception | N/A | Planning relies on fresh and up-to-date perceived data from Perception for creating accurate trajectory. | The metric is used if delay compensation is disabled in Tracking.
    AWOV-003 | Response time from CPA #7 to CPA #20 | Response time from Tracking output of Tracking to its data consumption in Planning. | Perception | N/A | Planning relies on fresh and up-to-date perceived data from Perception for creating accurate trajectory. | The metric is used if delay compensation is enabled in Tracking.
    AWOV-004 | Response time from CPA #0 to CPA #6 | Duration to process pointcloud data in Sensing and Detection. | Perception | N/A | Tracking relies on detection to provide real-time and up-to-date sensed data for accurate tracking. | The metric is used if delay compensation is enabled in Tracking.
    AWOV-005 | Message rate from CPA #4 to CPA #5 | Update rate of Detection result received by Tracking. | Perception | 10 Hz | Tracking relies on detection to provide real-time and up-to-date sensed data for accurate tracking. |
    AWOV-006 | Response time from CPA #0 to CPA #14 | Response time from output of observed data from LiDARs to its consumption in EKF Localizer via NDT Scan Matcher. | Localization | N/A | EKF Localizer relies on fresh and up-to-date observed data from sensors for accurate estimation of self pose. |
    AWOV-007 | Message rate from CPA #11 to CPA #13 | Update rate of pose estimated by NDT Scan Matcher. | Localization | 10 Hz | EKF Localizer relies on fresh and up-to-date observed data from sensors for accurate estimation of self pose. |
    AWOV-008 | Message rate from CPA #15 to CPA #12 | Update rate of the fed-back pose estimated by EKF Localizer. | Localization | 50 Hz | NDT Scan Matcher relies on receiving estimated pose from EKF Localizer smoothly for linear interpolation. |
    AWOV-009 | Message rate from CPA #17 to CPA #19 | Update rate of Localization result received by Planning. | Localization | 50 Hz | Planning relies on Localization to update the estimated pose frequently. |
    AWOV-010 | Response time from CPA #20 to CPA #23 | Processing time from beginning of Planning to consumption of Trajectory message in Control. | Planning | N/A | A vehicle relies on Planning to update trajectory within a short time frame to achieve safe driving behavior. |
    AWOV-011 | Message rate from CPA #21 to CPA #22 | Update rate of Trajectory message from Planning. | Planning | 10 Hz | A vehicle relies on Planning to update trajectory frequently to achieve safe driving behavior. |
    AWOV-012 | Message rate from CPA #24 to CPA #25 | Update rate of Control command. | Control | 33 Hz | Control stability and comfort rely on the sampling frequency of Control. |
    AWOV-013 | Message rate between CPA #26 and Vehicle | Communication rate between Autoware and Vehicle. | Vehicle Interface | N/A | A vehicle requires Autoware and the vehicle to communicate with each other at a predetermined frequency. | Temporal performance requirement varies depending on vehicle type.

    Note

There is an assumption that each sensor, such as a LiDAR or camera, outputs a set of pointcloud data with a timestamp. CPA #0 is observed with that timestamp. If the sensors are not configured to output a timestamp, the time when Autoware receives the pointcloud is used instead. That is represented by CPA #1 in the model. The detailed metrics employ this idea as well.

    "},{"location":"how-to-guides/others/defining-temporal-performance-metrics/#detailed-temporal-performance-metrics-for-perception","title":"Detailed temporal performance metrics for Perception","text":"

    The diagram below introduces the model for temporal performance metrics for Perception.

    The following two policies assist in selecting the performance metrics:

• Considering the frequency and response time at which Recognition results from Object Recognition and Traffic Light Recognition are consumed in Planning
• Splitting the Perception component at the points where data from multiple processing paths merge, and considering the frequency and response time around those points

    The following list shows the temporal performance metrics for Perception.

    ID Representation in the model Metric meaning Related functionality Reference value Reason to choose it as a metric Note APER-001 Message rate from CPP #2 to CPP #26 Update rate of Traffic Light Recognition. Traffic Light Recognition 10 Hz Planning relies on fresh and up-to-date perceived data from Traffic Light Recognition for making precise decisions. APER-002 Response time from CPP #0 to CPP #30 Response time from camera input to consumption of the result in Planning. Traffic Light Recognition N/A Planning relies on fresh and up-to-date perceived data from Traffic Light Recognition for making precise decisions. APER-003 Message rate from CPP #25 to CPP #28 Update rate of result from Prediction (Object Recognition) to Planning. Object Recognition 10 Hz Planning relies on fresh and up-to-date perceived data from Perception for creating accurate trajectory. The metric is same as AWOV-001. APER-004 Response time from CPP #6 to CPP #30 Response time from Tracking output of Tracking to its data consumption in Planning. Object Recognition N/A Planning relies on fresh and up-to-date perceived data from Perception for creating accurate trajectory. The metric is same as AWOV-002 and used if delay compensation is disabled in Tracking. APER-005 Response time from CPP #23 to CPP #30 Response time from Tracking output of Tracking to its data consumption in Planning. Object Recognition N/A Planning relies on fresh and up-to-date perceived data from Perception for creating accurate trajectory. The metric is same as AWOV-003 and used if delay compensation is enabled in Tracking. APER-006 Response time from CPP #6 to CPP #21 Duration to process pointcloud data in Sensing and Detection. Object Recognition N/A Tracking relies on Detection to provide real-time and up-to-date perceived data. The metrics is same as AWOV-004 and used if delay compensation is enabled in Tracking. APER-007 Message rate from CPP #20 to CPP #21 Update rate of Detection result received by Tracking. Object Recognition 10 Hz Tracking relies on detection to provide real-time and up-to-date sensed data for accurate tracking. The metric is same as AWOV-005 APER-008 Message rate from CPP #14 to CPP #19 Update rate of data sent from Sensor Fusion. Object Recognition 10 Hz Association Merger relies on the data to be updated at expected frequency for data synchronization. APER-009 Message rate from CPP #16 to CPP #19 Update rate of data sent from Detection by Tracker. Object Recognition 10 Hz Association Merger relies on the data to be updated at expected frequency for data synchronization. APER-010 Message rate from CPP #18 to CPP #19 Update rate of data sent from Validation Object Recognition. 10 Hz Association Merger relies on the data to be updated at expected frequency for data synchronization. APER-011 Response time from CPP #6 to CPP #19 via CPP #14 Response time to consume data sent from Sensor Fusion after LiDARs output pointcloud. Object Recognition N/A Association Merger relies on fresh and up-to-date data for data synchronization. APER-012 Response time from CPP #6 to CPP #19 via CPP #16 Response time to consume data sent from Detection by Tracker after LiDARs output pointcloud. Object Recognition N/A Association Merger relies on fresh and up-to-date data for data synchronization. APER-013 Response time from CPP #6 to CPP #19 via CPP #18 Response time to consume data sent from Validator after LiDARs output pointcloud. Object Recognition N/A Association Merger relies on fresh and up-to-date data for data synchronization. 
APER-014 Message rate from CPP #10 to CPP #13 Update rate of data sent from Clustering. Object Recognition 10 Hz Sensor Fusion relies on the data to be updated at expected frequency for data synchronization. APER-015 Message rate from CPP #5 to CPP #13 Update rate of data sent from Camera-based Object detection. Object Recognition 10 Hz Sensor Fusion relies on the data to be updated at expected frequency for data synchronization. APER-016 Response time from CPP #6 to CPP #13 Response time to consume data sent from Clustering after LiDARs output pointcloud. Object Recognition N/A Sensor Fusion relies on fresh and up-to-date data for data synchronization. APER-017 Response time from CPP #3 to CPP #13 Response time to consume data sent from Camera-based Object detection after Cameras output images. Object Recognition N/A Sensor Fusion relies on fresh and up-to-date data for data synchronization. APER-018 Message rate from CPP #10 to CPP #17 Update rate of data sent from Clustering. Object Recognition 10 Hz Validator relies on the data to be updated at expected frequency for data synchronization. It seems similar to APER-014, but the topic message is different. APER-019 Message rate from CPP #12 to CPP #17 Update rate of data sent from DNN-based Object Recognition. Object Recognition 10 Hz Validator relies on the data to be updated at expected frequency for data synchronization. APER-020 Response time from CPP #6 to CPP #17 via CPP #10 Response time to consume data sent from Clustering after LiDARs output pointcloud. Object Recognition N/A Validator relies on fresh and update-date data for data synchronization. It seems similar to APER-015, but the topic message is different. APER-021 Response time from CPP #6 to CPP #17 via CPP #12 Response time to consume data sent from DNN-based Object Recognition after LiDARs output pointcloud. Object Recognition N/A Validator relies on fresh and update-date data for data synchronization."},{"location":"how-to-guides/others/defining-temporal-performance-metrics/#detailed-temporal-performance-metrics-for-paths-between-obstacle-segmentation-and-planning","title":"Detailed temporal performance metrics for Paths between Obstacle segmentation and Planning","text":"

    Obstacle segmentation, which is a crucial part of Perception, transmits data to Planning. The figure below illustrates the model that takes into account performance metrics related to Obstacle segmentation and Planning.

    Note

Both the Occupancy grid map and Obstacle segmentation transmit data to multiple sub-components of Planning. However, not all of these sub-components are described in the model. This is because our primary focus is on the paths from LiDAR to Planning via Obstacle segmentation.

    The following list shows the temporal performance metrics around Obstacle segmentation and Planning.

    ID Representation in the model Metric meaning Related functionality Reference value Reason to choose it as a metric Note OSEG-001 Message rate from CPS #4 to CPS #7 Update rate of Occupancy grid map received by Planning (behavior_path_planner) Obstacle segmentation 10 Hz Planning relies on Occupancy grid map to be updated frequently and smoothly for creating accurate trajectory. OSEG-002 Response time from CPS #0 to CPS #9 via CPS #7 Response time to consume Occupancy grid map after LiDARs output sensing data. Obstacle segmentation N/A Planning relies on fresh and up-to-date perceived data from Occupancy grid map for creating accurate trajectory.. OSEG-003 Message rate from CPS #6 to CPS #11 Update rate of obstacle segmentation received by Planning (behavior_velocity_planner). Obstacle segmentation 10 Hz Planning relies on Obstacle segmentation to be updated frequently and smoothly for creating accurate trajectory. OSEG-004 Response time from CPS #0 to CPS #13 via CPS #11 Response time to consume Obstacle segmentation after LiDARs output sensing data. Obstacle segmentation N/A Planning relies on fresh and up-to-date perceived data from Obstacle segmentation for creating accurate trajectory.."},{"location":"how-to-guides/others/determining-component-dependencies/","title":"Determining component dependencies","text":""},{"location":"how-to-guides/others/determining-component-dependencies/#determining-component-dependencies","title":"Determining component dependencies","text":"

For developers who wish to try out and deploy Autoware as a microservices architecture, it is necessary to understand the software dependencies, communication, and implemented features of each ROS package/node.

    As an example, the commands necessary to determine the dependencies for the Perception component are shown below.

    "},{"location":"how-to-guides/others/determining-component-dependencies/#perception-component-dependencies","title":"Perception component dependencies","text":"

    To generate a graph of package dependencies, use the following colcon command:

    colcon graph --dot --packages-up-to tier4_perception_launch | dot -Tpng -o graph.png\n

    To generate a list of dependencies, use:

    colcon list --packages-up-to tier4_perception_launch --names-only\n
    colcon list output
    autoware_auto_geometry_msgs\nautoware_auto_mapping_msgs\nautoware_auto_perception_msgs\nautoware_auto_planning_msgs\nautoware_auto_vehicle_msgs\nautoware_cmake\nautoware_lint_common\nautoware_point_types\ncompare_map_segmentation\ndetected_object_feature_remover\ndetected_object_validation\ndetection_by_tracker\neuclidean_cluster\ngrid_map_cmake_helpers\ngrid_map_core\ngrid_map_cv\ngrid_map_msgs\ngrid_map_pcl\ngrid_map_ros\nground_segmentation\nimage_projection_based_fusion\nimage_transport_decompressor\ninterpolation\nkalman_filter\nlanelet2_extension\nlidar_apollo_instance_segmentation\nmap_based_prediction\nmulti_object_tracker\nmussp\nobject_merger\nobject_range_splitter\noccupancy_grid_map_outlier_filter\npointcloud_preprocessor\npointcloud_to_laserscan\nshape_estimation\ntensorrt_yolo\ntier4_autoware_utils\ntier4_debug_msgs\ntier4_pcl_extensions\ntier4_perception_launch\ntier4_perception_msgs\ntraffic_light_classifier\ntraffic_light_map_based_detector\ntraffic_light_ssd_fine_detector\ntraffic_light_visualization\nvehicle_info_util\n

    Tip

    To output a list of modules with their respective paths, run the command above without the --names-only parameter.
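For example, to list the same packages together with their paths:

colcon list --packages-up-to tier4_perception_launch\n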

    To see which ROS topics are being subscribed and published to, use rqt_graph as follows:

    ros2 launch tier4_perception_launch perception.launch.xml mode:=lidar\nros2 run rqt_graph rqt_graph\n
    "},{"location":"how-to-guides/others/fixing-dependent-package-versions/","title":"Fixing dependent package versions","text":""},{"location":"how-to-guides/others/fixing-dependent-package-versions/#fixing-dependent-package-versions","title":"Fixing dependent package versions","text":"

Autoware manages dependent package versions in autoware.repos. For example, let's say you make a branch in autoware.universe and add new features. Suppose you then update the other dependencies with vcs pull after cutting the branch from autoware.universe. The version of autoware.universe you are developing and the other dependencies will become inconsistent, and the entire Autoware build will fail. We recommend saving the dependent package versions by executing the following command when starting development.

    vcs export src --exact > my_autoware.repos\n
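To later restore the exact versions saved in that file (for example, on another machine or after other dependencies have moved on), the exported file can be imported back with vcs. A minimal sketch, assuming the my_autoware.repos file generated above:

vcs import src < my_autoware.repos\n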
    "},{"location":"how-to-guides/others/lane-detection-methods/","title":"Lane Detection Methods","text":""},{"location":"how-to-guides/others/lane-detection-methods/#lane-detection-methods","title":"Lane Detection Methods","text":""},{"location":"how-to-guides/others/lane-detection-methods/#overview","title":"Overview","text":"

    This document describes some of the most common lane detection methods used in the autonomous driving industry. Lane detection is a crucial task in autonomous driving, as it is used to determine the boundaries of the road and the vehicle's position within the lane.

    "},{"location":"how-to-guides/others/lane-detection-methods/#methods","title":"Methods","text":"

    This document covers the methods under two categories: lane detection methods and multitask detection methods.

    Note

The results have been obtained using pre-trained models. Training the models with your own data will likely yield better results.

    "},{"location":"how-to-guides/others/lane-detection-methods/#lane-detection-methods_1","title":"Lane Detection Methods","text":""},{"location":"how-to-guides/others/lane-detection-methods/#clrernet","title":"CLRerNet","text":"

This work introduces LaneIoU, which improves confidence score accuracy by considering local lane angles, and CLRerNet, a novel detector leveraging LaneIoU.

    • Paper: CLRerNet: Improving Confidence of Lane Detection with LaneIoU
    • Code: GitHub
    Method Backbone Dataset Confidence Campus Video Road Video CLRerNet dla34 culane 0.4 CLRerNet dla34 culane 0.1 CLRerNet dla34 culane 0.01"},{"location":"how-to-guides/others/lane-detection-methods/#clrnet","title":"CLRNet","text":"

This work introduces the Cross Layer Refinement Network (CLRNet) to fully utilize both high-level semantic and low-level detailed features in lane detection. CLRNet detects lanes with high-level features and refines them with low-level details. Additionally, the ROIGather technique and Line IoU loss significantly enhance localization accuracy, outperforming state-of-the-art methods.

    • Paper: CLRNet: Cross Layer Refinement Network for Lane Detection
    • Code: GitHub
    Method Backbone Dataset Confidence Campus Video Road Video CLRNet dla34 culane 0.2 CLRNet dla34 culane 0.1 CLRNet dla34 culane 0.01 CLRNet dla34 llamas 0.4 CLRNet dla34 llamas 0.2 CLRNet dla34 llamas 0.1 CLRNet resnet18 llamas 0.4 CLRNet resnet18 llamas 0.2 CLRNet resnet18 llamas 0.1 CLRNet resnet18 tusimple 0.2 CLRNet resnet18 tusimple 0.1 CLRNet resnet34 culane 0.1 CLRNet resnet34 culane 0.05 CLRNet resnet101 culane 0.2 CLRNet resnet101 culane 0.1"},{"location":"how-to-guides/others/lane-detection-methods/#fenet","title":"FENet","text":"

    This research introduces Focusing Sampling, Partial Field of View Evaluation, Enhanced FPN architecture, and Directional IoU Loss, addressing challenges in precise lane detection for autonomous driving. Experiments show that Focusing Sampling, which emphasizes distant details crucial for safety, significantly improves both benchmark and practical curved/distant lane recognition accuracy over uniform approaches.

    • Paper: FENet: Focusing Enhanced Network for Lane Detection
    • Code: GitHub
    Method Backbone Dataset Confidence Campus Video Road Video FENet v1 dla34 culane 0.2 FENet v1 dla34 culane 0.1 FENet v1 dla34 culane 0.05 FENet v2 dla34 culane 0.2 FENet v2 dla34 culane 0.1 FENet v2 dla34 culane 0.05 FENet v2 dla34 llamas 0.4 FENet v2 dla34 llamas 0.2 FENet v2 dla34 llamas 0.1 FENet v2 dla34 llamas 0.05"},{"location":"how-to-guides/others/lane-detection-methods/#multitask-detection-methods","title":"Multitask Detection Methods","text":""},{"location":"how-to-guides/others/lane-detection-methods/#yolopv2","title":"YOLOPv2","text":"

This work proposes an efficient multi-task learning network for autonomous driving, combining traffic object detection, drivable road area segmentation, and lane detection. The YOLOPv2 model achieves new state-of-the-art performance in accuracy and speed on the BDD100K dataset, halving the inference time compared to previous benchmarks.

    • Paper: YOLOPv2: Better, Faster, Stronger for Panoptic Driving Perception
    • Code: GitHub
    Method Campus Video Road Video YOLOPv2"},{"location":"how-to-guides/others/lane-detection-methods/#hybridnets","title":"HybridNets","text":"

    This work introduces HybridNets, an end-to-end perception network for autonomous driving. It optimizes segmentation heads and box/class prediction networks using a weighted bidirectional feature network. HybridNets achieves good performance on BDD100K and Berkeley DeepDrive datasets, outperforming state-of-the-art methods.

    • Paper: HybridNets: End-to-End Perception Network
    • Code: GitHub
    Method Campus Video Road Video HybridNets"},{"location":"how-to-guides/others/lane-detection-methods/#twinlitenet","title":"TwinLiteNet","text":"

    This work introduces TwinLiteNet, a lightweight model designed for driveable area and lane line segmentation in autonomous driving.

    • Paper : TwinLiteNet: An Efficient and Lightweight Model for Driveable Area and Lane Segmentation in Self-Driving Cars
    • Code: GitHub
    Method Campus Video Road Video Twinlitenet"},{"location":"how-to-guides/others/lane-detection-methods/#citation","title":"Citation","text":"
    @article{honda2023clrernet,\ntitle={CLRerNet: Improving Confidence of Lane Detection with LaneIoU},\nauthor={Hiroto Honda and Yusuke Uchida},\njournal={arXiv preprint arXiv:2305.08366},\nyear={2023},\n}\n
    @InProceedings{Zheng_2022_CVPR,\nauthor    = {Zheng, Tu and Huang, Yifei and Liu, Yang and Tang, Wenjian and Yang, Zheng and Cai, Deng and He, Xiaofei},\ntitle     = {CLRNet: Cross Layer Refinement Network for Lane Detection},\nbooktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},\nmonth     = {June},\nyear      = {2022},\npages     = {898-907}\n}\n
    @article{wang&zhong_2024fenet,\ntitle={FENet: Focusing Enhanced Network for Lane Detection},\nauthor={Liman Wang and Hanyang Zhong},\nyear={2024},\neprint={2312.17163},\narchivePrefix={arXiv},\nprimaryClass={cs.CV}\n}\n
    @misc{vu2022hybridnets,\ntitle={HybridNets: End-to-End Perception Network},\nauthor={Dat Vu and Bao Ngo and Hung Phan},\nyear={2022},\neprint={2203.09035},\narchivePrefix={arXiv},\nprimaryClass={cs.CV}\n}\n
    @INPROCEEDINGS{10288646,\nauthor={Che, Quang-Huy and Nguyen, Dinh-Phuc and Pham, Minh-Quan and Lam, Duc-Khai},\nbooktitle={2023 International Conference on Multimedia Analysis and Pattern Recognition (MAPR)},\ntitle={TwinLiteNet: An Efficient and Lightweight Model for Driveable Area and Lane Segmentation in Self-Driving Cars},\nyear={2023},\nvolume={},\nnumber={},\npages={1-6},\ndoi={10.1109/MAPR59823.2023.10288646}\n}\n
    "},{"location":"how-to-guides/others/planning-evaluation-using-scenarios/","title":"Planning Evaluation Using Scenarios","text":""},{"location":"how-to-guides/others/planning-evaluation-using-scenarios/#planning-evaluation-using-scenarios","title":"Planning Evaluation Using Scenarios","text":""},{"location":"how-to-guides/others/planning-evaluation-using-scenarios/#introduction","title":"Introduction","text":"

    The performance of Autoware's planning stack is evaluated through a series of defined scenarios that outline the behaviors of other road users and specify the success and failure conditions for the autonomous vehicle, referred to as \"Ego.\" These scenarios are specified in a machine-readable format and executed using the Scenario Simulator tool. Below, brief definitions of the three primary tools used to create, execute, and analyze these scenarios are provided.

    • Scenario Editor: A web-based GUI tool used to create and edit scenario files easily.
    • Scenario Simulator: A tool designed to simulate these machine-readable scenarios. It configures the environment, dictates the behavior of other road users, and sets both success and failure conditions.
    • Autoware Evaluator: A web-based CI/CD platform that facilitates the uploading, categorization, and execution of scenarios along with their corresponding map files. It allows for parallel scenario executions and provides detailed results.
    "},{"location":"how-to-guides/others/planning-evaluation-using-scenarios/#creating-scenarios","title":"Creating Scenarios","text":"

    Scenarios are created using the Scenario Editor tool, which provides a user-friendly interface to define road users, their behaviors, and the success and failure conditions for the ego vehicle.

    To demonstrate the scenario creation process, we will create a simple scenario.

    "},{"location":"how-to-guides/others/planning-evaluation-using-scenarios/#scenario-definition","title":"Scenario Definition","text":"
    • Initial condition: The ego vehicle is driving in a straight lane, and a bicycle is stopped in the same lane.
    • Action: When the longitudinal distance between the ego vehicle and the bicycle is less than 10 meters, the bicycle starts moving at a speed of 2 m/s.
    • Success condition: The ego vehicle reaches the goal point.
    • Failure condition: The ego vehicle collides with the bicycle.
    "},{"location":"how-to-guides/others/planning-evaluation-using-scenarios/#scenario-creation","title":"Scenario Creation","text":"
    1. Open the Scenario Editor tool.
2. Load the map from the MAP tab. For this tutorial, we will use the LEO-VM-00001 map, which can be downloaded from the Autoware Evaluator's Maps section.
    3. In the Edit tab, add both the ego vehicle and the bicycle from the Add Entity section. After adding them, select the ego vehicle and set its destination from the Edit tab.

    4. After setting the positions of the bicycle, ego vehicle, and ego vehicle's destination, set the initial velocity of the bicycle to 0 m/s as shown below. Then, click the Scenario tab and click on the Add new act button. Using this, define an action that starts the bicycle moving when the distance condition is satisfied.

    5. As shown below, define a start condition for an action named act_start_bicycle. This condition checks the distance between the ego vehicle and the bicycle. If the distance is less than or equal to 10 meters, the action is triggered.

    6. After the start condition, define an event for this action. Since there are no other conditions for this event, set a dummy condition where SimulationTime is greater than 0, as shown below.

    7. Define an action for this event. This action will set the velocity of the actor, Bicycle0, to 2 m/s.

    8. The scenario is now ready to be tested. You can export the scenario from the Scenario tab by clicking the Export button.

    "},{"location":"how-to-guides/others/planning-evaluation-using-scenarios/#important-notes-for-scenario-creation","title":"Important Notes for Scenario Creation","text":"
• The tutorial above is a simple scenario implementation; for more complex definitions and other road user actions, please refer to the Scenario Simulator documentation.
    • Due to the complexity of the scenario format, support for editing in the GUI is not yet complete for all scenario features. Therefore, if there are any items that cannot be supported or are difficult to edit using the GUI, you can edit and adjust the scenario directly using a text editor. In that case, the schema of the scenario format is checked before saving, so you can edit with confidence.
• The best way to understand other complex scenario implementations is to look at existing scenarios. You can check the scenarios in the Autoware Evaluator to see other implementations.
    "},{"location":"how-to-guides/others/planning-evaluation-using-scenarios/#running-scenarios-using-scenario-simulator","title":"Running Scenarios using Scenario Simulator","text":""},{"location":"how-to-guides/others/planning-evaluation-using-scenarios/#creating-the-scenario-simulator-environment","title":"Creating the Scenario Simulator Environment","text":"
    • Clone the Autoware main repository.
    git clone https://github.com/autowarefoundation/autoware.git\n
    • Create source directory.
    cd autoware\nmkdir src\n
    • Import the necessary repositories both for the Autoware and Scenario Simulator.
    vcs import src < autoware.repos\nvcs import src < simulator.repos\n
    • If you are installing Autoware for the first time, you can automatically install the dependencies by using the provided Ansible script. Please refer to the Autoware source installation page for more information.
    ./setup-dev-env.sh\n
    • Install dependent ROS packages.

    Autoware and Scenario Simulator require some ROS 2 packages in addition to the core components. The tool rosdep allows an automatic search and installation of such dependencies. You might need to run rosdep update before rosdep install.

    source /opt/ros/humble/setup.bash\nrosdep install -y --from-paths src --ignore-src --rosdistro $ROS_DISTRO\n
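If the rosdep install step fails because the local rosdep database is missing or out of date, updating it first (as mentioned above) usually helps:

rosdep update\n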
    • Build the workspace.
    colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release\n
    • If you have any issues with the installation, or if you need more detailed instructions, please refer to the Autoware source installation page.
    "},{"location":"how-to-guides/others/planning-evaluation-using-scenarios/#execute-the-scenario-by-using-scenario-simulator","title":"Execute the scenario by using Scenario Simulator","text":"

    We created an example scenario above and exported it into our local machine. Now, we will execute this scenario using Scenario Simulator.

• Firstly, we should define the lanelet2_map.osm and pointcloud_map.pcd files in the scenario file. By default, the configuration is as follows:
    RoadNetwork:\nLogicFile:\nfilepath: lanelet2_map.osm\nSceneGraphFile:\nfilepath: lanelet2_map.pcd\n

You should change the filepath fields according to the map files downloaded before. You can find both the lanelet2_map.osm and pointcloud_map.pcd files in the Maps section of the Autoware Evaluator.

    Note: Although the pointcloud_map.pcd file is mandatory, it is not used in the simulation. Therefore, you can define a dummy file for this field.

    • After defining the map files, we can execute the scenario by using the Scenario Simulator. The command is below:
    ros2 launch scenario_test_runner scenario_test_runner.launch.py \\\nrecord:=false \\\nscenario:='/path/to/scenario/sample.yaml' \\\nsensor_model:=sample_sensor_kit \\\nvehicle_model:=sample_vehicle\n
    • Now, the scenario will be executed in the Scenario Simulator. You can see the simulation in the RViz window.
• To see the condition details of the scenario while playing it in RViz, you should enable the Condition Group marker; it is not enabled by default.

• By using the Condition Group marker, you can see which conditions in the scenario are satisfied and which are not. This is helpful when investigating the scenario.

    "},{"location":"how-to-guides/others/planning-evaluation-using-scenarios/#understanding-the-autoware-evaluator","title":"Understanding the Autoware Evaluator","text":"

    Autoware Evaluator is a web-based CI/CD platform that facilitates the uploading, categorization, and execution of scenarios along with their corresponding map files. It allows for parallel scenario executions and provides detailed results.

Note: The Autoware Evaluator is a private tool. Currently, you must be invited to the AWF project to access it. Making the AWF project public is still under development.

    • Firstly, you should create a free TIER IV account to access the Autoware Evaluator. You can create an account by using the TIER IV account creation page.
• After you have created an account, please reach out to Hiroshi IGATA (hiroshi.igata@tier4.jp), the AWF ODD WG leader, for an invitation to the AWF project of the Autoware Evaluator. It is advised to join the ODD WG meeting once to briefly introduce yourself and your interest. The ODD WG information is here.

    Let's explore the Autoware Evaluator interface page by page.

    "},{"location":"how-to-guides/others/planning-evaluation-using-scenarios/#scenarios","title":"Scenarios","text":"

In the Scenarios tab, users can view, search, and filter all uploaded scenarios. If you are looking for a specific scenario, you can use this page's search bar to find it. If a scenario is labeled with a tick, it has been reviewed and approved by a manager; otherwise, it is still awaiting review.

    "},{"location":"how-to-guides/others/planning-evaluation-using-scenarios/#catalogs","title":"Catalogs","text":"

The Catalogs tab displays scenario catalogs, i.e. groups of suites created for a specific use case. In the Evaluator, users can execute a test run over all scenarios in a catalog.

    "},{"location":"how-to-guides/others/planning-evaluation-using-scenarios/#suites","title":"Suites","text":"

A suite is a group of scenarios created for a specific testing purpose; it is the smallest unit of scenario grouping. A suite can be assigned to any catalog to be tested for a use case.

    "},{"location":"how-to-guides/others/planning-evaluation-using-scenarios/#maps","title":"Maps","text":"

    The Maps tab displays a list of maps that are used in the scenarios. Developers can find the map that they need to execute the scenario.

    "},{"location":"how-to-guides/others/planning-evaluation-using-scenarios/#reports","title":"Reports","text":"

    The Reports tab displays a list of results from scenario tests. Here, users can view the most recent test executions.

    The screenshot above illustrates the latest test execution for the public road bus catalog. In the source column, the default Autoware repository with the latest commit is displayed. For a deeper analysis, users can click on the title to view comprehensive results.

Clicking on the title of a test execution leads to a page displaying build logs and individual test results for each suite. By selecting links in the ID column, users can access detailed results for each suite.

    After clicking the ID link, users can view detailed test results for the scenarios within the suite. For instance, in our case, 183 out of 198 scenarios succeeded. This page is a useful resource for developers to further investigate scenarios that failed.

    • ID links to a page which shows detailed test results for the scenario.
    • Scenario Name links to a page displaying the scenario's definition.
    • Labels indicate the conditions under which the scenario failed; if successful, this field remains empty.
    • Status shows the count of failed and succeeded cases within a scenario.

To investigate this specific scenario further, we should click the ID link.

For this scenario, three of the four cases failed. Each case represents a different value of the ego_speed parameter. To dive deeper into a failed case, click on its ID link.

This page shows us detailed information about the failed case. We can understand why the case failed by looking at the Message. For our case, the message is Simulation failure: CustomCommandAction typed \"exitFailure\" was triggered by the anonymous Condition (OpenSCENARIO.Storyboard.Story[0].Act[\"_EndCondition\"].ManeuverGroup[0].Maneuver[0].Event[1].StartTrigger.ConditionGroup[1].Condition[0]): The state of StoryboardElement \"act_ego_nostop_check\" (= completeState) is given state completeState?

From this message, we can understand that the action used to check that the vehicle does not stop has reached the complete state. The scenario author set this condition as a failure condition; therefore, the scenario failed.

• To test the scenario locally, you can download the scenario and its map using the links marked in the image above. Running a scenario on a local machine was explained in the previous section.
• To compare test executions and analyze when the scenarios were running successfully, you can use the Compare Reports button on the page that shows the results of previous test executions.

    • To see how the scenario failed, you can replay the scenario by clicking the Play button. It shows the executed test.

• a) The bar that allows rewinding or fast-forwarding through the replay
• b) Shows whether the success and failure conditions are satisfied
    • c) Shows the state of the actions and events and their start conditions
    "},{"location":"how-to-guides/others/planning-evaluation-using-scenarios/#for-further-support","title":"For Further Support","text":"

Creating scenarios and analyzing them using the Autoware Evaluator can be complicated for new users. If you need further help or support, you can reach out to the Autoware community through the Autoware Discord server and ask questions about the scenario creation process or the Autoware Evaluator in the odd channel. You can also attend the weekly ODD Working Group meeting to get support from the community, and join the ODD WG invitation group to receive the meeting invitations.

• ODD WG meetings are held weekly in a single time slot: 2:00 pm, Monday (UTC).
    "},{"location":"how-to-guides/others/planning-evaluation-using-scenarios/#other-useful-pages","title":"Other Useful Pages","text":"
    • https://docs.web.auto/en/user-manuals/
    • https://tier4.github.io/scenario_simulator_v2-docs/
    "},{"location":"how-to-guides/others/reducing-start-delays/","title":"Reducing start delays on real vehicles","text":""},{"location":"how-to-guides/others/reducing-start-delays/#reducing-start-delays-on-real-vehicles","title":"Reducing start delays on real vehicles","text":"

    In simulation, the ego vehicle reacts nearly instantly to the control commands generated by Autoware. However, with a real vehicle, some delays occur that may make ego feel less responsive.

    This page presents start delays experienced when using Autoware on a real vehicle. We define the start delay as the time between (a) when Autoware decides to make the ego vehicle start and (b) when the vehicle actually starts moving. More precisely:

    • (a) is the time when the speed or acceleration command output by Autoware switches to a non-zero value.
    • (b) is the time when the measured velocity of the ego vehicle switches to a positive value.
    "},{"location":"how-to-guides/others/reducing-start-delays/#start-delay-with-manual-driving","title":"Start delay with manual driving","text":"

    First, let us look at the start delay when a human is driving.

    The following figure shows the start delay when a human driver switches the gear from parked to drive and instantly releases the brake to push the throttle pedal and make the velocity of the vehicle increase.

    There are multiple things to note from this figure.

    • Brake (red): despite the driver instantly releasing the brake pedal, we see that the measured brake takes around 150ms to go from 100% to 0%.
    • Gear (orange): the driver switches gear before releasing the brake pedal, but the gear is measured to switch after the brake is released.
    • Throttle (green) and velocity (blue): the driver pushes the throttle pedal and the vehicle is measured to start moving around 500ms later.
    "},{"location":"how-to-guides/others/reducing-start-delays/#filter-delay","title":"Filter delay","text":"

    To guarantee passenger comfort, some Autoware modules implement filters on the jerk of the vehicle, preventing sudden changes in acceleration.

    For example, the vehicle_cmd_gate filters the acceleration command generated by the controller and was previously introducing significant delays when transitioning between a stop command where the acceleration is negative, and a move command where the acceleration is positive. Because of the jerk filter, the transition between negative and positive was not instantaneous and would take several hundreds of milliseconds.

    "},{"location":"how-to-guides/others/reducing-start-delays/#gear-delay","title":"Gear delay","text":"

    In many vehicles, it is necessary to change gear before first starting to move the vehicle. When performed autonomously, this gear change can take some significant time. Moreover, as seen from the data recorded with manual driving, the measured gear value may be delayed.

    In Autoware, the controller sends a stopping control command until the gear is changed to the drive state. This means that delays in the gear change and its reported value can greatly impact the start delay. Note that this is only an issue when the vehicle is initially in the parked gear.

    The only way to reduce this delay is by tuning the vehicle to increase the gear change speed or to reduce the delay in the gear change report.

    "},{"location":"how-to-guides/others/reducing-start-delays/#brake-delay","title":"Brake delay","text":"

    In vehicles with a brake pedal, the braking system will often be made of several moving parts which cannot move instantly. Thus, when Autoware sends brake commands to a vehicle, some delays should be expected in the actual brake applied to the wheels.

    This lingering brake may prevent or delay the initial motion of the ego vehicle.

    This delay can be reduced by tuning the vehicle.

    "},{"location":"how-to-guides/others/reducing-start-delays/#throttle-response","title":"Throttle response","text":"

For vehicles with throttle control, one of the main causes of start delays is the throttle response of the vehicle. When pushing the throttle pedal, the wheels of the vehicle do not instantly start rotating. This is partly due to the inertia of the vehicle, but also to the motor, which may take a significant time to start applying torque to the wheels.

    It may be possible to tune some vehicle side parameters to reduce this delay, but it is often done at the cost of reduced energy efficiency.

On the Autoware side, the only way to decrease this delay is to increase the initial throttle, but this can cause uncomfortably high initial accelerations.

    "},{"location":"how-to-guides/others/reducing-start-delays/#initial-acceleration-and-throttle","title":"Initial acceleration and throttle","text":"

    As we just discussed, for vehicles with throttle control, an increased initial throttle value can reduce the start delay.

    Since Autoware outputs an acceleration value, the conversion module raw_vehicle_cmd_converter is used to map the acceleration value from Autoware to a throttle value to be sent to the vehicle. Such mapping is usually calibrated automatically using the accel_brake_map_calibrator module, but it may produce a low initial throttle which leads to high start delays.

    In order to increase the initial throttle, there are two options: increase the initial acceleration output by Autoware, or modify the acceleration to throttle mapping.

    The initial acceleration output by Autoware can be tuned in the motion_velocity_smoother with parameters engage_velocity and engage_acceleration. However, the vehicle_cmd_gate applies a filter on the control command to prevent too sudden changes in jerk and acceleration, limiting the maximum allowed acceleration while the ego vehicle is stopped.

Alternatively, the mapping of acceleration to throttle can be tuned to increase the throttle corresponding to the initial acceleration. If we look at an example acceleration map, it does the following conversion: when the ego velocity is 0 (first column), acceleration values between 0.631 (first row) and 0.836 (second row) are converted to a throttle between 0% and 10%. This means that any initial acceleration below 0.631 m/s² will not produce any throttle. Keep in mind that after tuning the acceleration map, it may be necessary to also update the brake map.
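As a rough illustration, assuming linear interpolation between the entries of the map below: at standstill (ego velocity 0), a commanded acceleration of 0.734 m/s² would map to a throttle of roughly 0.1 × (0.734 − 0.631) / (0.836 − 0.631) ≈ 0.05, i.e. about 5%.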

    default 0 1.39 2.78 4.17 5.56 6.94 8.33 9.72 11.11 12.5 13.89 0 0.631 0.11 -0.04 -0.04 -0.041 -0.096 -0.137 -0.178 -0.234 -0.322 -0.456 0.1 0.836 0.57 0.379 0.17 0.08 0.07 0.068 0.027 -0.03 -0.117 -0.251 0.2 1.129 0.863 0.672 0.542 0.4 0.38 0.361 0.32 0.263 0.176 0.042 0.3 1.559 1.293 1.102 0.972 0.887 0.832 0.791 0.75 0.694 0.606 0.472 0.4 2.176 1.909 1.718 1.588 1.503 1.448 1.408 1.367 1.31 1.222 1.089 0.5 3.027 2.76 2.57 2.439 2.354 2.299 2.259 2.218 2.161 2.074 1.94"},{"location":"how-to-guides/others/running-autoware-without-cuda/","title":"Running Autoware without CUDA","text":""},{"location":"how-to-guides/others/running-autoware-without-cuda/#running-autoware-without-cuda","title":"Running Autoware without CUDA","text":"

    Although CUDA installation is recommended to achieve better performance for object detection and traffic light recognition in Autoware Universe, it is possible to run these algorithms without CUDA. The following subsections briefly explain how to run each algorithm in such an environment.

    "},{"location":"how-to-guides/others/running-autoware-without-cuda/#running-2d3d-object-detection-without-cuda","title":"Running 2D/3D object detection without CUDA","text":"

    Autoware Universe's object detection can be run using one of five possible configurations:

    • lidar_centerpoint
    • lidar_apollo_instance_segmentation
    • lidar-apollo + tensorrt_yolo
    • lidar-centerpoint + tensorrt_yolo
    • euclidean_cluster

    Of these five configurations, only the last one (euclidean_cluster) can be run without CUDA. For more details, refer to the euclidean_cluster module's README file.

    "},{"location":"how-to-guides/others/running-autoware-without-cuda/#running-traffic-light-detection-without-cuda","title":"Running traffic light detection without CUDA","text":"

    For traffic light recognition (both detection and classification), there are two modules that require CUDA:

    • traffic_light_ssd_fine_detector
    • traffic_light_classifier

    To run traffic light detection without CUDA, set enable_fine_detection to false in the traffic light launch file. Doing so disables the traffic_light_ssd_fine_detector such that traffic light detection is handled by the map_based_traffic_light_detector module instead.

    To run traffic light classification without CUDA, set use_gpu to false in the traffic light classifier launch file. Doing so will force the traffic_light_classifier to use a different classification algorithm that does not require CUDA or a GPU.
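As a rough sketch only — assuming your version of the perception launch files exposes enable_fine_detection as a launch argument (the launch file name and argument name may differ, so check the actual files in your installation) — the override could look like this:

ros2 launch tier4_perception_launch traffic_light.launch.xml enable_fine_detection:=false\n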

    "},{"location":"how-to-guides/others/using-divided-map/","title":"Using divided pointcloud map","text":""},{"location":"how-to-guides/others/using-divided-map/#using-divided-pointcloud-map","title":"Using divided pointcloud map","text":"

A divided pointcloud map is necessary when handling a large pointcloud map, in which case Autoware may not be capable of sending the whole map via a ROS 2 topic or loading the whole map into memory. By using the pre-divided map, Autoware will dynamically load the pointcloud map according to the vehicle's position.

    "},{"location":"how-to-guides/others/using-divided-map/#tutorial","title":"Tutorial","text":"

    Download the sample-map-rosbag_split and locate the map under $HOME/autoware_map/.

    gdown -O ~/autoware_map/ 'https://docs.google.com/uc?export=download&id=11tLC9T4MS8fnZ9Wo0D8-Ext7hEDl2YJ4'\nunzip -d ~/autoware_map/ ~/autoware_map/sample-rosbag_split.zip\n

    Then, you may launch logging_simulator with the following command to load the divided map. Note that you need to specify the map_path and pointcloud_map_file arguments.

    source ~/autoware/install/setup.bash\nros2 launch autoware_launch logging_simulator.launch.xml \\\nmap_path:=$HOME/autoware_map/sample-map-rosbag pointcloud_map_file:=pointcloud_map \\\nvehicle_model:=sample_vehicle_split sensor_model:=sample_sensor_kit\n

    For playing rosbag to simulate Autoware, please refer to the instruction in the tutorial for rosbag replay simulation.

    "},{"location":"how-to-guides/others/using-divided-map/#related-links","title":"Related links","text":"
• For the specific format definition of the divided map, please refer to the Map component design page
• The README of map_loader may provide useful, specific instructions for dividing maps
• When dividing your own pointcloud map, you may use pointcloud_divider, which can divide the map as well as generate the compatible metadata
    "},{"location":"how-to-guides/training-machine-learning-models/training-models/","title":"Training and Deploying Models","text":""},{"location":"how-to-guides/training-machine-learning-models/training-models/#training-and-deploying-models","title":"Training and Deploying Models","text":""},{"location":"how-to-guides/training-machine-learning-models/training-models/#overview","title":"Overview","text":"

Autoware offers a comprehensive array of machine learning models, tailored for a wide range of tasks including 2D and 3D object detection, traffic light recognition, and more. These models have been meticulously trained utilizing open-mmlab's extensive repositories. By leveraging the provided scripts and following the training steps, you have the capability to train these models using your own dataset, tailoring them to your specific needs.

    Furthermore, you will find the essential conversion scripts to deploy your trained models into Autoware using the mmdeploy repository.

    "},{"location":"how-to-guides/training-machine-learning-models/training-models/#training-traffic-light-classifier-model","title":"Training traffic light classifier model","text":"

The traffic light classifier model within Autoware has been trained using the mmlab/pretrained repository. Autoware offers pretrained models based on the EfficientNet-b1 and MobileNet-v2 architectures. To fine-tune these models, a total of 83,400 images were employed, comprising 58,600 for training, 14,800 for evaluation, and 10,000 for testing. These images represent Japanese traffic lights and come from TIER IV's internal dataset.

Name Input Size Test Accuracy
EfficientNet-b1 128 x 128 99.76%
MobileNet-v2 224 x 224 99.81%

    Comprehensive training instructions for the traffic light classifier model are detailed within the readme file accompanying \"traffic_light_classifier\" package. These instructions will guide you through the process of training the model using your own dataset. To facilitate your training, we have also provided an example dataset containing three distinct classes (green, yellow, red), which you can leverage during the training process.

    Detailed instructions for training the traffic light classifier model can be found here.

    "},{"location":"how-to-guides/training-machine-learning-models/training-models/#training-centerpoint-3d-object-detection-model","title":"Training CenterPoint 3D object detection model","text":"

The CenterPoint 3D object detection model within Autoware has been trained using the autowarefoundation/mmdetection3d repository.

    To train custom CenterPoint models and convert them into ONNX format for deployment in Autoware, please refer to the instructions provided in the README file included with Autoware's lidar_centerpoint package. These instructions will provide a step-by-step guide for training the CenterPoint model.

    In order to assist you with your training process, we have also included an example dataset in the TIER IV dataset format.

    This dataset contains 600 lidar frames and covers 5 classes, including 6905 cars, 3951 pedestrians, 75 cyclists, 162 buses, and 326 trucks.

    You can utilize this example dataset to facilitate your training efforts.

    "},{"location":"installation/","title":"Installation","text":""},{"location":"installation/#installation","title":"Installation","text":""},{"location":"installation/#target-platforms","title":"Target platforms","text":"

Autoware targets the platforms listed below. These may change in future versions of Autoware.

The Autoware Foundation provides no support for platforms other than those listed below.

    "},{"location":"installation/#architecture","title":"Architecture","text":"
    • amd64
    • arm64
    "},{"location":"installation/#minimum-hardware-requirements","title":"Minimum hardware requirements","text":"

    Info

    Autoware is scalable and can be customized to work with distributed or less powerful hardware. The minimum hardware requirements given below are just a general recommendation. However, performance will be improved with more cores, RAM and a higher-spec graphics card or GPU core.

Although a GPU is not required to run basic functionality, it is mandatory to enable the following neural-network-related functions:

• LiDAR based object detection
• Camera based object detection
• Traffic light detection and classification

    • CPU with 8 cores
    • 16GB RAM
    • [Optional] NVIDIA GPU (4GB RAM)

For details of how to enable object detection and traffic light detection/classification without a GPU, refer to Running Autoware without CUDA.

    "},{"location":"installation/#installing-autoware","title":"Installing Autoware","text":"

    There are two ways to set up Autoware. Choose one according to your preference.

    If any issues occur during installation, refer to the Support page.

    "},{"location":"installation/#1-docker-installation","title":"1. Docker installation","text":"

Autoware's Open AD Kit containers enable you to run Autoware easily on your host machine, ensuring the same environment for all deployments without installing any dependencies. Full Guide on Docker Installation Setup.

Open AD Kit is also the first SOAFEE Blueprint for autonomous driving, offering extensible modular containers that make it easier to run Autoware's AD stack on distributed systems. Full Guide on Open AD Kit Setup.

Developer containers are also available, making it easier to build Autoware from source and ensuring the same environment for all developers.

    "},{"location":"installation/#2-source-installation","title":"2. Source installation","text":"

    Source installation is for the cases where more granular control of the installation environment is needed. It is recommended for experienced users or people who want to customize their environment. Note that some problems may occur depending on your local environment.

    For more information, refer to the source installation guide.

    "},{"location":"installation/#installing-related-tools","title":"Installing related tools","text":"

    Some other tools are required depending on the evaluation you want to do. For example, to run an end-to-end simulation you need to install an appropriate simulator.

    For more information, see here.

    "},{"location":"installation/#additional-settings-for-developers","title":"Additional settings for developers","text":"

    There are also tools and settings for developers, such as Shells or IDEs.

    For more information, see here.

    "},{"location":"installation/additional-settings-for-developers/","title":"Additional settings for developers","text":""},{"location":"installation/additional-settings-for-developers/#additional-settings-for-developers","title":"Additional settings for developers","text":"
    • Console settings for ROS 2
    • Network configuration for ROS 2
    "},{"location":"installation/additional-settings-for-developers/console-settings/","title":"Console settings for ROS 2","text":""},{"location":"installation/additional-settings-for-developers/console-settings/#console-settings-for-ros-2","title":"Console settings for ROS 2","text":""},{"location":"installation/additional-settings-for-developers/console-settings/#colorizing-logger-output","title":"Colorizing logger output","text":"

    By default, ROS 2 logger doesn't colorize the output. To colorize it, add the following to your ~/.bashrc:

    export RCUTILS_COLORIZED_OUTPUT=1\n
    "},{"location":"installation/additional-settings-for-developers/console-settings/#customizing-the-format-of-logger-output","title":"Customizing the format of logger output","text":"

    By default, ROS 2 logger doesn't output detailed information such as file name, function name, or line number. To customize it, add the following to your ~/.bashrc:

    export RCUTILS_CONSOLE_OUTPUT_FORMAT=\"[{severity} {time}] [{name}]: {message} ({function_name}() at {file_name}:{line_number})\"\n

    For more options, see here.

    "},{"location":"installation/additional-settings-for-developers/console-settings/#colorized-googletest-output","title":"Colorized GoogleTest output","text":"

    Add export GTEST_COLOR=1 to your ~/.bashrc.
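For example, the variable can be appended to ~/.bashrc in the same way as the other settings above:

echo 'export GTEST_COLOR=1' >> ~/.bashrc\n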

    For more details, refer to Advanced GoogleTest Topics: Colored Terminal Output.

    This is useful when running tests with colcon test.

    "},{"location":"installation/additional-settings-for-developers/network-configuration/","title":"Network settings for ROS 2 and Autoware","text":""},{"location":"installation/additional-settings-for-developers/network-configuration/#network-settings-for-ros-2-and-autoware","title":"Network settings for ROS 2 and Autoware","text":""},{"location":"installation/additional-settings-for-developers/network-configuration/#single-computer-setup","title":"Single computer setup","text":"
    • DDS settings for ROS 2 and Autoware
    "},{"location":"installation/additional-settings-for-developers/network-configuration/#multi-computer-setup","title":"Multi computer setup","text":"
    • Communicating across multiple computers with CycloneDDS
    "},{"location":"installation/additional-settings-for-developers/network-configuration/dds-settings/","title":"DDS settings for ROS 2 and Autoware","text":""},{"location":"installation/additional-settings-for-developers/network-configuration/dds-settings/#dds-settings-for-ros-2-and-autoware","title":"DDS settings for ROS 2 and Autoware","text":""},{"location":"installation/additional-settings-for-developers/network-configuration/dds-settings/#enable-localhost-only-communication","title":"Enable localhost-only communication","text":"
    1. Enable multicast for lo
2. Make sure export ROS_LOCALHOST_ONLY=1 is NOT present in .bashrc (a quick check is shown below).
      • See About ROS_LOCALHOST_ONLY environment variable for more information.
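A quick way to check whether the variable is still set in your shell configuration (a simple sketch using grep):

grep -n \"ROS_LOCALHOST_ONLY\" ~/.bashrc || echo \"not set in ~/.bashrc\"\n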
    "},{"location":"installation/additional-settings-for-developers/network-configuration/dds-settings/#tune-dds-settings","title":"Tune DDS settings","text":"

Autoware uses DDS for internode communication. The ROS 2 documentation recommends that users tune DDS settings to utilize its capabilities.

    Note

    CycloneDDS is the recommended and most tested DDS implementation for Autoware.

    Warning

    If you don't tune these settings, Autoware will fail to receive large data like point clouds or images.

    "},{"location":"installation/additional-settings-for-developers/network-configuration/dds-settings/#tune-system-wide-network-settings","title":"Tune system-wide network settings","text":"

    Set the config file path and enlarge the Linux kernel maximum buffer size before launching Autoware.

    # Increase the maximum receive buffer size for network packets\nsudo sysctl -w net.core.rmem_max=2147483647  # 2 GiB, default is 208 KiB\n\n# IP fragmentation settings\nsudo sysctl -w net.ipv4.ipfrag_time=3  # in seconds, default is 30 s\nsudo sysctl -w net.ipv4.ipfrag_high_thresh=134217728  # 128 MiB, default is 256 KiB\n

    To make it permanent,

    sudo nano /etc/sysctl.d/10-cyclone-max.conf\n

    Paste the following into the file:

    # Increase the maximum receive buffer size for network packets\nnet.core.rmem_max=2147483647  # 2 GiB, default is 208 KiB\n\n# IP fragmentation settings\nnet.ipv4.ipfrag_time=3  # in seconds, default is 30 s\nnet.ipv4.ipfrag_high_thresh=134217728  # 128 MiB, default is 256 KiB\n
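To apply the new file without rebooting, the sysctl configuration can be reloaded (a sketch, assuming a systemd-based Ubuntu setup):

sudo sysctl --system\n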

Details of each parameter here are explained in the ROS 2 documentation.

    "},{"location":"installation/additional-settings-for-developers/network-configuration/dds-settings/#validate-the-sysctl-settings","title":"Validate the sysctl settings","text":"
    user@pc$ sysctl net.core.rmem_max net.ipv4.ipfrag_time net.ipv4.ipfrag_high_thresh\nnet.core.rmem_max = 2147483647\nnet.ipv4.ipfrag_time = 3\nnet.ipv4.ipfrag_high_thresh = 134217728\n
    "},{"location":"installation/additional-settings-for-developers/network-configuration/dds-settings/#cyclonedds-configuration","title":"CycloneDDS Configuration","text":"

    Save the following file as ~/cyclonedds.xml.

    <?xml version=\"1.0\" encoding=\"UTF-8\" ?>\n<CycloneDDS xmlns=\"https://cdds.io/config\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"https://cdds.io/config https://raw.githubusercontent.com/eclipse-cyclonedds/cyclonedds/master/etc/cyclonedds.xsd\">\n<Domain Id=\"any\">\n<General>\n<Interfaces>\n<NetworkInterface autodetermine=\"false\" name=\"lo\" priority=\"default\" multicast=\"default\" />\n</Interfaces>\n<AllowMulticast>default</AllowMulticast>\n<MaxMessageSize>65500B</MaxMessageSize>\n</General>\n<Internal>\n<SocketReceiveBufferSize min=\"10MB\"/>\n<Watermarks>\n<WhcHigh>500kB</WhcHigh>\n</Watermarks>\n</Internal>\n</Domain>\n</CycloneDDS>\n

    Then add the following lines to your ~/.bashrc file.

    export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp\n\nexport CYCLONEDDS_URI=file:///absolute/path/to/cyclonedds.xml\n# Replace `/absolute/path/to/cyclonedds.xml` with the actual path to the file.\n# Example: export CYCLONEDDS_URI=file:///home/user/cyclonedds.xml\n

    You can refer to Eclipse Cyclone DDS: Run-time configuration documentation for more details.

    Warning

The RMW_IMPLEMENTATION variable might already be set via Ansible/RMW Implementation.

    Check and remove the duplicate line if necessary.
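
To confirm that a new terminal picks up these variables, a quick check can look like this (the path in the expected output is only an example):

source ~/.bashrc\nprintenv RMW_IMPLEMENTATION CYCLONEDDS_URI\n# Expected output (the path will differ on your machine):\n# rmw_cyclonedds_cpp\n# file:///home/user/cyclonedds.xml\n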

    "},{"location":"installation/additional-settings-for-developers/network-configuration/dds-settings/#additional-information","title":"Additional information","text":""},{"location":"installation/additional-settings-for-developers/network-configuration/dds-settings/#about-ros_localhost_only-environment-variable","title":"About ROS_LOCALHOST_ONLY environment variable","text":"

Previously, export ROS_LOCALHOST_ONLY=1 was used to enable localhost-only communication. However, due to an ongoing issue, this method no longer works.

    Warning

    Do not set export ROS_LOCALHOST_ONLY=1 in ~/.bashrc.

    If you do so, it will cause an error with RMW.

    Remove it from ~/.bashrc if you have set it.

    "},{"location":"installation/additional-settings-for-developers/network-configuration/dds-settings/#about-ros_domain_id-environment-variable","title":"About ROS_DOMAIN_ID environment variable","text":"

You can also set export ROS_DOMAIN_ID=3 (or any number from 1 to 255; the default is 0) to avoid interference with other ROS 2 nodes on the same network.

However, since only 255 domain IDs are available, you might still interfere with other computers on the same network unless you make sure everyone uses a unique domain ID.

Another problem is that if someone runs a test that uses the ROS 2 launch_testing framework, it will by default use a random domain ID to isolate tests from each other, even on the same machine. See this PR for more details.
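
For illustration only (the domain ID 42 below is an arbitrary example, not a recommendation), the variable can be set per shell and verified as follows:

export ROS_DOMAIN_ID=42\nprintenv ROS_DOMAIN_ID\n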

    "},{"location":"installation/additional-settings-for-developers/network-configuration/enable-multicast-for-lo/","title":"Enable `multicast` on `lo`","text":""},{"location":"installation/additional-settings-for-developers/network-configuration/enable-multicast-for-lo/#enable-multicast-on-lo","title":"Enable multicast on lo","text":""},{"location":"installation/additional-settings-for-developers/network-configuration/enable-multicast-for-lo/#manually-temporary-solution","title":"Manually (temporary solution)","text":"

You can run the following command to enable multicast on the loopback interface.

    sudo ip link set lo multicast on\n

    Warning

    This will be reverted once the computer restarts. To make it permanent, follow the steps below.

    Note

    Here, lo is the loopback interface.

    You can check the interfaces with ip link show.

You may replace lo with the interface you want to enable multicast on.

    "},{"location":"installation/additional-settings-for-developers/network-configuration/enable-multicast-for-lo/#on-startup-with-a-service-permanent-solution","title":"On startup with a service (permanent solution)","text":"
    sudo nano /etc/systemd/system/multicast-lo.service\n

    Paste the following into the file:

    [Unit]\nDescription=Enable Multicast on Loopback\n\n[Service]\nType=oneshot\nExecStart=/usr/sbin/ip link set lo multicast on\n\n[Install]\nWantedBy=multi-user.target\n

Press the following keys in this order to save the file and exit nano:

    1. Ctrl+X
    2. Y
    3. Enter
    # Make it recognized\nsudo systemctl daemon-reload\n\n# Make it run on startup\nsudo systemctl enable multicast-lo.service\n\n# Start it now\nsudo systemctl start multicast-lo.service\n
    "},{"location":"installation/additional-settings-for-developers/network-configuration/enable-multicast-for-lo/#validate","title":"Validate","text":"
    you@pc:~$ sudo systemctl status multicast-lo.service\n\u25cb multicast-lo.service - Enable Multicast on Loopback\n     Loaded: loaded (/etc/systemd/system/multicast-lo.service; enabled; vendor preset: enabled)\n     Active: inactive (dead) since Mon 2024-07-08 12:54:17 +03; 4s ago\n    Process: 22588 ExecStart=/usr/bin/ip link set lo multicast on (code=exited, status=0/SUCCESS)\n   Main PID: 22588 (code=exited, status=0/SUCCESS)\n        CPU: 1ms\n\nTem 08 12:54:17 mfc-leo systemd[1]: Starting Enable Multicast on Loopback...\nTem 08 12:54:17 mfc-leo systemd[1]: multicast-lo.service: Deactivated successfully.\nTem 08 12:54:17 mfc-leo systemd[1]: Finished Enable Multicast on Loopback.\n
    you@pc:~$ ip link show lo\n1: lo: <LOOPBACK,MULTICAST,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\n    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00\n
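
If you prefer a one-line check, the following sketch prints a confirmation only when the MULTICAST flag is present on lo:

ip link show lo | grep -q MULTICAST && echo "multicast is enabled on lo"\n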
    "},{"location":"installation/additional-settings-for-developers/network-configuration/enable-multicast-for-lo/#uninstalling-the-service","title":"Uninstalling the service","text":"

    If for some reason you want to uninstall the service, you can do so by following these steps:

    # Stop the service\nsudo systemctl stop multicast-lo.service\n\n# Disable the service from running on startup\nsudo systemctl disable multicast-lo.service\n\n# Remove the service file\nsudo rm /etc/systemd/system/multicast-lo.service\n\n# Reload systemd to apply the changes\nsudo systemctl daemon-reload\n
    "},{"location":"installation/additional-settings-for-developers/network-configuration/multiple-computers/","title":"Communicating across multiple computers with CycloneDDS","text":""},{"location":"installation/additional-settings-for-developers/network-configuration/multiple-computers/#communicating-across-multiple-computers-with-cyclonedds","title":"Communicating across multiple computers with CycloneDDS","text":""},{"location":"installation/additional-settings-for-developers/network-configuration/multiple-computers/#configuring-cyclonedds","title":"Configuring CycloneDDS","text":"

Within the ~/cyclonedds.xml file, the Interfaces section can be configured in various ways to enable communication across multiple computers within a network.

    "},{"location":"installation/additional-settings-for-developers/network-configuration/multiple-computers/#automatically-determine-the-network-interface-convenient","title":"Automatically determine the network interface (convenient)","text":"

    With this setting, CycloneDDS will automatically determine the most suitable network interface to use.

    <Interfaces>\n<NetworkInterface autodetermine=\"true\" priority=\"default\" multicast=\"default\" />\n</Interfaces>\n
    "},{"location":"installation/additional-settings-for-developers/network-configuration/multiple-computers/#manually-set-the-network-interface-recommended","title":"Manually set the network interface (recommended)","text":"

    With this setting, you can manually set the network interface to use.

    <Interfaces>\n<NetworkInterface autodetermine=\"false\" name=\"enp38s0\" priority=\"default\" multicast=\"default\" />\n</Interfaces>\n

    Warning

    You should replace enp38s0 with the actual network interface name.

    Note

The ifconfig command can be used to find the network interface name.
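
For example, a brief listing of interface names and states can also be obtained as follows (a minimal sketch; the interface names and addresses shown are hypothetical):

ip -br link show\n# Example output:\n# lo         UNKNOWN  00:00:00:00:00:00 <LOOPBACK,MULTICAST,UP,LOWER_UP>\n# enp38s0    UP       aa:bb:cc:dd:ee:ff <BROADCAST,MULTICAST,UP,LOWER_UP>\n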

    "},{"location":"installation/additional-settings-for-developers/network-configuration/multiple-computers/#time-synchronization","title":"Time synchronization","text":"

To ensure that nodes on different computers stay synchronized, you should synchronize the system time between the computers.

You can use chrony to do this.

    Please refer to this post for more information: Multi PC AWSIM + Autoware Tests #3813
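
As a rough sketch (assuming a Debian/Ubuntu system, with chrony configured separately as a server on one computer and as clients on the others), the synchronization status can be checked with the standard chrony client commands:

# Install chrony\nsudo apt install -y chrony\n\n# Check whether the local clock is synchronized and which source it follows\nchronyc tracking\nchronyc sources -v\n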

    Warning

    Under Construction

    "},{"location":"installation/autoware/docker-installation/","title":"Open AD Kit: containerized workloads for Autoware","text":""},{"location":"installation/autoware/docker-installation/#open-ad-kit-containerized-workloads-for-autoware","title":"Open AD Kit: containerized workloads for Autoware","text":"

Open AD Kit offers two types of Docker images to let you get started with Autoware quickly: devel and runtime.

    1. The devel image enables you to develop Autoware without setting up the local development environment.
    2. The runtime image contains only runtime executables and enables you to try out Autoware quickly.

    Info

    Before proceeding, confirm and agree with the NVIDIA Deep Learning Container license. By pulling and using the Autoware Open AD Kit images, you accept the terms and conditions of the license.

    "},{"location":"installation/autoware/docker-installation/#prerequisites","title":"Prerequisites","text":"
    • Docker
    • NVIDIA Container Toolkit (preferred)
    • NVIDIA CUDA 12 compatible GPU Driver (preferred)
    1. Clone autowarefoundation/autoware and move to the directory.

      git clone https://github.com/autowarefoundation/autoware.git\ncd autoware\n
    2. The setup script will install all required dependencies with:

      ./setup-dev-env.sh -y docker\n

      To install without NVIDIA GPU support:

      ./setup-dev-env.sh -y --no-nvidia docker\n

    Info

GPU acceleration is required for some features such as object detection and traffic light detection/classification. For details on how to enable these features without a GPU, refer to Running Autoware without CUDA.

    "},{"location":"installation/autoware/docker-installation/#usage","title":"Usage","text":""},{"location":"installation/autoware/docker-installation/#runtime","title":"Runtime","text":"

    You can use run.sh to run the Autoware runtime container with the map data:

    ./docker/run.sh --map-path path_to_map_data\n

For more launch options, you can append a custom launch command instead of using the default one, which is ros2 launch autoware_launch autoware.launch.xml.

    Here is an example of running the runtime container with a custom launch command:

    ./docker/run.sh --map-path ~/autoware_map/sample-map-rosbag ros2 launch autoware_launch planning_simulator.launch.xml map_path:=/autoware_map vehicle_model:=sample_vehicle sensor_model:=sample_sensor_kit\n

    Info

You can use --no-nvidia to run without NVIDIA GPU support, and --headless to run without a display (i.e., without RViz visualization).

    "},{"location":"installation/autoware/docker-installation/#run-the-autoware-tutorials","title":"Run the Autoware tutorials","text":"

    Inside the container, you can run the Autoware tutorials by following these links:

    Planning Simulation

    Rosbag Replay Simulation.

    "},{"location":"installation/autoware/docker-installation/#development-environment","title":"Development environment","text":"
    ./docker/run.sh --devel\n

    Info

By default, the workspace mounted in the container is the current directory (pwd). You can change the workspace path with --workspace path_to_workspace. For development environments without NVIDIA GPU support, use --no-nvidia.
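
For example, the flags above can be combined as follows (a sketch only; the workspace path is an arbitrary example):

./docker/run.sh --devel --workspace ~/autoware --no-nvidia\n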

    "},{"location":"installation/autoware/docker-installation/#how-to-set-up-a-workspace","title":"How to set up a workspace","text":"
    1. Create the src directory and clone repositories into it.

      mkdir src\nvcs import src < autoware.repos\n

      If you are an active developer, you may also want to pull the nightly repositories, which contain the latest updates:

      vcs import src < autoware-nightly.repos\n

      \u26a0\ufe0f Note: The nightly repositories are unstable and may contain bugs. Use them with caution.

    2. Update dependent ROS packages.

      The dependencies of Autoware may have changed after the Docker image was created. In that case, you need to run the following commands to update the dependencies.

      # Make sure all ros-$ROS_DISTRO-* packages are upgraded to their latest version\nsudo apt update && sudo apt upgrade\nrosdep update\nrosdep install -y --from-paths src --ignore-src --rosdistro $ROS_DISTRO\n
    3. Build the workspace.

      colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release\n

      If there is any build issue, refer to Troubleshooting.

    To Update the Workspace

    cd autoware\ngit pull\nvcs import src < autoware.repos\n\n# If you are using nightly repositories, also run the following command:\nvcs import src < autoware-nightly.repos\n\nvcs pull src\n# Make sure all ros-$ROS_DISTRO-* packages are upgraded to their latest version\nsudo apt update && sudo apt upgrade\nrosdep update\nrosdep install -y --from-paths src --ignore-src --rosdistro $ROS_DISTRO\n

    It might be the case that dependencies imported via vcs import have been moved/removed. VCStool does not currently handle those cases, so if builds fail after vcs import, cleaning and re-importing all dependencies may be necessary:

    rm -rf src/*\nvcs import src < autoware.repos\n# If you are using nightly repositories, import them as well.\nvcs import src < autoware-nightly.repos\n
    "},{"location":"installation/autoware/docker-installation/#using-vs-code-remote-containers-for-development","title":"Using VS Code remote containers for development","text":"

Using Visual Studio Code with the Remote - Containers extension, you can easily develop Autoware in a containerized environment.

Install Visual Studio Code's Remote - Containers extension, then reopen the workspace in the container by selecting Remote-Containers: Reopen in Container from the Command Palette (F1).

You can choose either the Autoware or the Autoware-cuda image, depending on whether you want to develop with CUDA support.

    "},{"location":"installation/autoware/docker-installation/#building-docker-images-from-scratch","title":"Building Docker images from scratch","text":"

    If you want to build these images locally for development purposes, run the following command:

    cd autoware/\n./docker/build.sh\n

    To build without CUDA, use the --no-cuda option:

    ./docker/build.sh --no-cuda\n

To build only the development image, use the --devel-only option:

    ./docker/build.sh --devel-only\n

    To specify the platform, use the --platform option:

    ./docker/build.sh --platform linux/amd64\n./docker/build.sh --platform linux/arm64\n
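
Assuming these options can be combined, a development-only image for arm64 without CUDA might be built like this (a sketch, not a verified invocation):

./docker/build.sh --devel-only --no-cuda --platform linux/arm64\n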
    "},{"location":"installation/autoware/docker-installation/#using-docker-images-other-than-latest","title":"Using Docker images other than latest","text":"

    There are also images versioned based on the date or release tag. Use them when you need a fixed version of the image.

    The list of versions can be found here.

    "},{"location":"installation/autoware/source-installation/","title":"Source installation","text":""},{"location":"installation/autoware/source-installation/#source-installation","title":"Source installation","text":""},{"location":"installation/autoware/source-installation/#prerequisites","title":"Prerequisites","text":"
    • OS

      • Ubuntu 22.04
    • ROS

      • ROS 2 Humble

      For ROS 2 system dependencies, refer to REP-2000.

    • Git
      • Registering SSH keys to GitHub is preferable.
    sudo apt-get -y update\nsudo apt-get -y install git\n

Note: If you wish to use ROS 2 Galactic on Ubuntu 20.04, refer to the installation instructions on the galactic branch, but be aware that the Galactic version of Autoware might not have the latest features.

    "},{"location":"installation/autoware/source-installation/#how-to-set-up-a-development-environment","title":"How to set up a development environment","text":"
    1. Clone autowarefoundation/autoware and move to the directory.

      git clone https://github.com/autowarefoundation/autoware.git\ncd autoware\n
    2. If you are installing Autoware for the first time, you can automatically install the dependencies by using the provided Ansible script.

      ./setup-dev-env.sh\n

      If you encounter any build issues, please consult the Troubleshooting section for assistance.

    Info

    Before installing NVIDIA libraries, please ensure that you have reviewed and agreed to the licenses.

    • CUDA
    • cuDNN
    • TensorRT

    Note

The following items will be automatically installed. If the Ansible script doesn't work, or if you already have different versions of the dependent libraries installed, please install the following items manually.

    • Install Ansible
    • Install Build Tools
    • Install Dev Tools
    • Install gdown
    • Install geographiclib
    • Install pacmod
    • Install the RMW Implementation
    • Install ROS 2
    • Install ROS 2 Dev Tools
    • Install Nvidia CUDA
    • Install Nvidia cuDNN and TensorRT
    • Install the Autoware RViz Theme (only affects Autoware RViz)
    • Download the Artifacts (for perception inference)
    "},{"location":"installation/autoware/source-installation/#how-to-set-up-a-workspace","title":"How to set up a workspace","text":"

    Using Autoware Build GUI

    If you prefer a graphical user interface (GUI) over the command line for launching and managing your simulations, refer to the Using Autoware Build GUI section at the end of this document for a step-by-step guide.

    1. Create the src directory and clone repositories into it.

      Autoware uses vcstool to construct workspaces.

      cd autoware\nmkdir src\nvcs import src < autoware.repos\n

      If you are an active developer, you may also want to pull the nightly repositories, which contain the latest updates:

      vcs import src < autoware-nightly.repos\n

      \u26a0\ufe0f Note: The nightly repositories are unstable and may contain bugs. Use them with caution.

    2. Install dependent ROS packages.

      Autoware requires some ROS 2 packages in addition to the core components. The tool rosdep allows an automatic search and installation of such dependencies. You might need to run rosdep update before rosdep install.

      source /opt/ros/humble/setup.bash\n# Make sure all previously installed ros-$ROS_DISTRO-* packages are upgraded to their latest version\nsudo apt update && sudo apt upgrade\nrosdep update\nrosdep install -y --from-paths src --ignore-src --rosdistro $ROS_DISTRO\n
3. Install and set up ccache to speed up consecutive builds. (optional but highly recommended; see the sketch after this list)

    4. Build the workspace.

      Autoware uses colcon to build workspaces. For more advanced options, refer to the documentation.

      colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release\n

      If there is any build issue, refer to Troubleshooting.

    5. Follow the steps in Network Configuration before running Autoware.

    6. Apply the settings recommended in Console settings for ROS 2 for a better development experience. (optional)
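
As referenced in step 3, here is a minimal ccache sketch (assuming a GCC toolchain on Ubuntu; the cache size is an arbitrary example, and your preferred setup may differ):

# Install ccache\nsudo apt update && sudo apt install -y ccache\n\n# Let CMake invoke the compilers through ccache (add to ~/.bashrc to make it persistent)\nexport CC="ccache gcc"\nexport CXX="ccache g++"\n\n# Optionally enlarge the cache (the default is 5 GB)\nccache -M 60G\n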

    "},{"location":"installation/autoware/source-installation/#how-to-update-a-workspace","title":"How to update a workspace","text":"
    1. Update the .repos file.

      cd autoware\ngit pull <remote> <your branch>\n

      <remote> is usually git@github.com:autowarefoundation/autoware.git

    2. Update the repositories.

      vcs import src < autoware.repos\n

      \u26a0\ufe0f If you are using nightly repositories, you can also update them.

      vcs import src < autoware-nightly.repos\n
      vcs pull src\n

      For Git users:

      • vcs import is similar to git checkout.
        • Note that it doesn't pull from the remote.
      • vcs pull is similar to git pull.
        • Note that it doesn't switch branches.

      For more information, refer to the official documentation.

      It might be the case that dependencies imported via vcs import have been moved/removed. VCStool does not currently handle those cases, so if builds fail after vcs import, cleaning and re-importing all dependencies may be necessary:

      rm -rf src/*\nvcs import src < autoware.repos\n

      \u26a0\ufe0f If you are using nightly repositories, import them as well.

      vcs import src < autoware-nightly.repos\n
    3. Install dependent ROS packages.

      source /opt/ros/humble/setup.bash\n# Make sure all previously installed ros-$ROS_DISTRO-* packages are upgraded to their latest version\nsudo apt update && sudo apt upgrade\nrosdep update\nrosdep install -y --from-paths src --ignore-src --rosdistro $ROS_DISTRO\n
    4. Build the workspace.

      colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release\n
    "},{"location":"installation/autoware/source-installation/#using-autoware-build-gui","title":"Using Autoware Build GUI","text":"

    In addition to the traditional command-line methods of building Autoware packages, developers and users can leverage the Autoware Build GUI for a more streamlined and user-friendly experience. This GUI application simplifies the process of building and managing Autoware packages.

    "},{"location":"installation/autoware/source-installation/#integration-with-autoware-source-installation","title":"Integration with Autoware source installation","text":"

    When using the Autoware Build GUI in conjunction with the traditional source installation process:

    • Initial Setup: Follow the standard Autoware source installation guide to set up your environment and workspace.
    • Using the GUI: Once the initial setup is complete, you can use the Autoware Build GUI to manage subsequent builds and package updates.

    This integration offers a more accessible approach to building and managing Autoware packages, catering to both new users and experienced developers.

    "},{"location":"installation/autoware/source-installation/#getting-started-with-autoware-build-gui","title":"Getting started with Autoware Build GUI","text":"
    1. Installation: Ensure you have installed the Autoware Build GUI. Installation instructions.
    2. Launching the App: Once installed, launch the Autoware Build GUI.
    3. Setting Up: Set the path to your Autoware folder within the GUI.
    4. Building Packages: Select the Autoware packages you wish to build and manage the build process through the GUI.

      4.1. Build Configuration: Choose from a list of default build configurations, or select the packages you wish to build manually.

4.2. Build Options: Choose which build type you wish to use, with the ability to specify additional build options.

    5. Save and Load: Save your build configurations for future use, or load a previously saved configuration if you don't wish to build all packages or use one of the default configurations provided.

6. Updating Workspace: Update your Autoware workspace's packages to the latest version using the GUI, or add Calibration tools to the workspace.
    "},{"location":"installation/related-tools/","title":"Installation of related tools","text":""},{"location":"installation/related-tools/#installation-of-related-tools","title":"Installation of related tools","text":"

    Warning

    Under Construction

    "},{"location":"models/","title":"Machine learning models","text":""},{"location":"models/#machine-learning-models","title":"Machine learning models","text":"

    The Autoware perception stack uses models for inference. These models are automatically downloaded as part of the setup-dev-env.sh script.

    The models are hosted by Web.Auto.

The default models directory (data_dir) is ~/autoware_data.

    "},{"location":"models/#download-instructions","title":"Download instructions","text":"

Please follow the autoware download instructions to download the updated models.

The models can also be downloaded manually using tools such as wget or curl. The latest URLs of the weight and parameter files for each model can be found in the autoware main.yaml file.

An example of downloading the lidar_centerpoint model:

    # lidar_centerpoint\n\n$ mkdir -p ~/autoware_data/lidar_centerpoint/\n$ wget -P ~/autoware_data/lidar_centerpoint/ \\\nhttps://awf.ml.dev.web.auto/perception/models/centerpoint/v2/pts_voxel_encoder_centerpoint.onnx \\\nhttps://awf.ml.dev.web.auto/perception/models/centerpoint/v2/pts_backbone_neck_head_centerpoint.onnx \\\nhttps://awf.ml.dev.web.auto/perception/models/centerpoint/v2/pts_voxel_encoder_centerpoint_tiny.onnx \\\nhttps://awf.ml.dev.web.auto/perception/models/centerpoint/v2/pts_backbone_neck_head_centerpoint_tiny.onnx \\\nhttps://awf.ml.dev.web.auto/perception/models/centerpoint/v2/centerpoint_ml_package.param.yaml \\\nhttps://awf.ml.dev.web.auto/perception/models/centerpoint/v2/centerpoint_tiny_ml_package.param.yaml \\\nhttps://awf.ml.dev.web.auto/perception/models/centerpoint/v2/centerpoint_sigma_ml_package.param.yaml \\\nhttps://awf.ml.dev.web.auto/perception/models/centerpoint/v2/detection_class_remapper.param.yaml \\\nhttps://awf.ml.dev.web.auto/perception/models/centerpoint/v2/deploy_metadata.yaml\n
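
After the download finishes, you can quickly confirm that the files are in place (a minimal sketch):

ls -lh ~/autoware_data/lidar_centerpoint/\n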
    "},{"location":"reference-hw/","title":"Reference HW Design","text":""},{"location":"reference-hw/#reference-hw-design","title":"Reference HW Design","text":"

This document describes and provides additional information about the sensors and systems supported by the Autoware.Universe software.

All equipment listed in this document has available ROS 2 drivers and has been field-tested by one or more community members in autonomous vehicle and robotics applications.

The listed sensors and systems are not sold, developed, or given direct technical support by the Autoware community. That said, any ROS 2 or Autoware-related issue regarding hardware usage can be raised following the community guidelines, which can be found here.

The document consists of the sections listed below:

    • AD COMPUTERs

      • ADLINK In-Vehicle Computers
      • NXP In-Vehicle Computers
      • Neousys In-Vehicle Computers
      • Crystal Rugged In-Vehicle Computers
      • Vecow In-Vehicle Computers
    • LiDARs

      • Velodyne 3D LiDAR Sensors
      • Robosense 3D LiDAR Sensors
      • HESAI 3D LiDAR Sensors
      • Leishen 3D LiDAR Sensors
      • Livox 3D LiDAR Sensors
      • Ouster 3D LiDAR Sensors
    • RADARs

      • Smartmicro Automotive Radars
      • Aptiv Automotive Radars
      • Continental Engineering Radars
    • CAMERAs

      • FLIR Machine Vision Cameras
      • Lucid Vision Cameras
      • Allied Vision Cameras
      • Tier IV Cameras
      • Neousys Technology Cameras
    • Thermal CAMERAs

      • FLIR Thermal Automotive Dev. Kit
    • IMU, AHRS & GNSS/INS

      • NovAtel GNSS/INS Sensors
      • XSens GNSS/INS & IMU Sensors
      • SBG GNSS/INS & IMU Sensors
      • Applanix GNSS/INS Sensors
      • PolyExplore GNSS/INS Sensors
      • Fix Position GNSS/INS Sensors
    • Vehicle Drive By Wire Suppliers

      • Dataspeed DBW Solutions
      • AStuff Pacmod DBW Solutions
      • Schaeffler-Paravan Space Drive DBW Solutions
    • Vehicle Platform Suppliers

      • PIX MOVING Autonomous Vehicle Solutions
      • Autonomoustuff AV Solutions
      • NAVYA AV Solutions
    • Remote Drive

      • FORT ROBOTICS
      • LOGITECH
    • Full Drivers List
    • AD Sensor Kit Suppliers

      • LEO Drive AD Sensor Kit
      • TIER IV AD Kit
      • RoboSense AD Sensor Kit
    "},{"location":"reference-hw/ad-computers/","title":"AD Computers","text":""},{"location":"reference-hw/ad-computers/#ad-computers","title":"AD Computers","text":""},{"location":"reference-hw/ad-computers/#adlink-in-vehicle-computers","title":"ADLINK In-Vehicle Computers","text":"

ADLINK solutions which are used for autonomous driving and have been tested by one or more community members are listed below:

    Supported Products List CPU GPU RAM, Interfaces Environmental Autoware Tested (Y/N) AVA-3510 Intel\u00ae Xeon\u00ae E-2278GE Dual MXM RTX 5000 64GB RAM,CAN, USB, 10G Ethernet, DIO, Hot-Swap SSD, USIM 9~36 VDC, MIL-STD-810H,ISO 7637-2 Y SOAFEE\u2019s AVA Developer Platform Ampere Altra ARMv8 optional USB, Ethernet, DIO, M.2 NVMe SSDs 110/220 AC Y RQX-58G 8-core Arm Nvidia Jetson AGX Xavier USB, Ethernet, M.2 NVME SSD, CAN, USIM, GMSL2 Camera support 9~36VDC, IEC 60068-2-64: Operating 3Grms, 5-500 Hz, 3 axes Y RQX-59G 8-core Arm Nvidia Jetson AGX Orin USB, Ethernet, M.2 NVME SSD, CAN, USIM, GMSL2 Camera support 9~36VDC, IEC 60068-2-64: Operating 3Grms, 5-500 Hz, 3 axes -

    Link to company website is here.

    "},{"location":"reference-hw/ad-computers/#nxp-in-vehicle-computers","title":"NXP In-Vehicle Computers","text":"

NXP solutions which are used for autonomous driving and have been tested by one or more community members are listed below:

    Supported Products List CPU GPU RAM, Interfaces Environmental Autoware Tested (Y/N) BLUEBOX 3.0 16 x Arm\u00ae Cortex\u00ae-A72 Dual RTX 8000 or RTX A6000 16 GB RAM CAN, FlexRay, USB, Ethernet, DIO, SSD ASIL-D -

    Link to company website is here.

    "},{"location":"reference-hw/ad-computers/#neousys-in-vehicle-computers","title":"Neousys In-Vehicle Computers","text":"

Neousys solutions which are used for autonomous driving and have been tested by one or more community members are listed below:

    Supported Products List CPU GPU RAM, Interfaces Environmental Autoware Tested (Y/N) 8805-GC AMD\u00ae EPYC\u2122 7003 NVIDIA\u00ae RTX A6000/ A4500 512GB CAN, USB, Ethernet, Serial, Easy-Swap SSD 8-48 Volt, Vibration:MIL-STD810G, Method 514.6, Category 4 Y 10208-GC Intel\u00ae 13th/12th-Gen Core\u2122 Dual 350W NVIDIA\u00ae RTX GPU 64GB CAN, USB, Ethernet, Serial, M2 NVMe SSD 8~48 Volt, Vibration: MIL-STD-810H, Method 514.8, Category 4 Y 9160-GC Intel\u00ae 13th/12th-Gen Core\u2122 NVIDIA\u00ae RTX series up to 130W TDP 64GB CAN, USB, Ethernet, PoE, Serial, two 2.5\" SATA HDD/SSD with RAID, M2 NVMe SSD 8~48, Vibration: Volt,MIL-STD-810G, Method 514.6, Category 4 -

    Link to company website is here.

    "},{"location":"reference-hw/ad-computers/#crystal-rugged-in-vehicle-computers","title":"Crystal Rugged In-Vehicle Computers","text":"

Crystal Rugged solutions which are used for autonomous driving and have been tested by one or more community members are listed below:

    Supported Products List CPU GPU RAM, Interfaces Environmental Autoware Tested (Y/N) AVC 0161-AC Intel\u00ae Xeon\u00ae Scalable Dual GPU RTX Series 2TB RAM,CAN, USB, Ethernet, Serial, Hot-Swap SSD 10-32 VoltVibration:2 G RMS 10-1000 Hz, 3 axes - AVC0403 Intel\u00ae Xeon\u00ae Scalable or AMD EPYC\u2122 Optional (5 GPU) 2TB RAM, CAN, USB, Ethernet, Serial, Hot-Swap SSD 10-32 Volt, Vibration: 2 G RMS 10-1000 Hz, 3 axes - AVC1322 Intel\u00ae Xeon\u00ae D-1718T or Gen 12/13 Core\u2122 i3/i5/i7 NVIDIA\u00ae Jetson AGX Orin 128 GB DDR4 RAM, USB, Ethernet, Serial, SATA 2.5\u201d SSD 10-36 Volt, Vibration: 5.5g, 5-2,000Hz, 60 min/axis, 3 axis - AVC1753 10th Generation Intel\u00ae Core\u2122 and Xeon\u00ae Optional (1 GPU) 128 GB DDR4 RAM, USB, Ethernet, NVMe U.2 SSD/ 3 SATA SSD 8-36 VDC/ 120-240VAC 50/60Hz, Vibration: 5.5g, 5-2,000Hz, 60 min/axis, 3 axis -

    Link to company website is here.

    "},{"location":"reference-hw/ad-computers/#vecow-in-vehicle-computers","title":"Vecow In-Vehicle Computers","text":"

Vecow solutions which are used for autonomous driving and have been tested by one or more community members are listed below:

    Supported Products List CPU GPU RAM, Interfaces Environmental Autoware Tested (Y/N) ECX-3800 PEG Intel\u00ae 13th/12th-Gen Core\u2122 200W power of NVIDIA\u00ae or AMD graphics 64GB RAM, CAN, USB, Ethernet, PoE, Serial, M.2/SATA SSD, SIM Card 12-50 Volt, Vibration:MIL-STD810G, Procedure I, 20\u00b0C to 45\u00b0C - IVX-1000 Intel\u00ae 13th/12th-Gen Core\u2122 NVIDIA Quadro\u00ae MXM Graphics 64GB RAM, Ethernet, PoE, Serial, M.2/SATA/mSATA SSD, SIM Card 16-160 Volt, Vibration: IEC 61373 : 2010, 40\u00b0C to 85\u00b0C -

    Link to company website is here.

    "},{"location":"reference-hw/ad_sensor_kit_suppliers/","title":"AD Sensor Kit Suppliers","text":""},{"location":"reference-hw/ad_sensor_kit_suppliers/#ad-sensor-kit-suppliers","title":"AD Sensor Kit Suppliers","text":""},{"location":"reference-hw/ad_sensor_kit_suppliers/#leo-drive-ad-sensor-kit","title":"LEO Drive AD Sensor Kit","text":"

    LEO Drive Autonomy Essentials Kit contents are listed below:

    Supported Products List Camera Lidar GNSS/INS ROS 2 Support Autoware Tested (Y/N) Autonomy Essentials Kit 8x Lucid Vision TRI054S 4x Velodyne Puck1x Velodyne Alpha Prime1x RoboSense Bpearl 1x SBG Ellipse-D Y Y

    Link to company website: https://leodrive.ai/

    "},{"location":"reference-hw/ad_sensor_kit_suppliers/#tier-iv-ad-kit","title":"TIER IV AD Kit","text":"

    TIER IV sensor fusion system contents are listed below:

    Supported Products List Camera Lidar ECU ROS 2 Support Autoware Tested (Y/N) TIER IV ADK TIER IV C1, C2 HESAI (AT-128,XT-32)Velodyne ADLINK (RQX-58G, AVA-3510) Y Y

    Link to company website: https://sensor.tier4.jp/sensor-fusion-system

    "},{"location":"reference-hw/ad_sensor_kit_suppliers/#robosense-ad-sensor-kit","title":"RoboSense AD Sensor Kit","text":"

    RoboSense L4 sensor fusion solution system contents are listed below:

    Supported Products List Camera Lidar ECU ROS 2 Support Autoware Tested (Y/N) P6 - 4x Automotive Grade Solid-state Lidar Optional - -

    Link to company website: https://www.robosense.ai/en/rslidar/RS-Fusion-P6

    "},{"location":"reference-hw/cameras/","title":"CAMERAs","text":""},{"location":"reference-hw/cameras/#cameras","title":"CAMERAs","text":""},{"location":"reference-hw/cameras/#tier-iv-automotive-hdr-cameras","title":"TIER IV Automotive HDR Cameras","text":"

TIER IV's Automotive HDR cameras, which have a ROS 2 driver and have been tested by TIER IV, are listed below:

    Supported Products List MP FPS Interface HDR LFM Trigger /Synchronization Ingress Protection ROS 2 Driver Autoware Tested (Y/N) C1 2.5 30 GMSL2 / USB3 Y (120dB) Y Y IP69K Y Y C2 5.4 30 GMSL2 / USB3 Y (120dB) Y Y IP69K Y Y C3 (to be released in 2024) 8.3 30 GMSL2 / TBD Y (120dB) Y Y IP69K Y Y

    Link to ROS 2 driver: https://github.com/tier4/ros2_v4l2_camera

    Link to product support site: TIER IV Edge.Auto documentation

    Link to product web site: TIER IV Automotive Camera Solution

    "},{"location":"reference-hw/cameras/#flir-machine-vision-cameras","title":"FLIR Machine Vision Cameras","text":"

FLIR Machine Vision cameras which have a ROS 2 driver and have been tested by one or more community members are listed below:

    Supported Products List MP FPS Interface HDR LFM Trigger /Synchronization Ingress Protection ROS 2 Driver Autoware Tested (Y/N) Blackfly S 2.0 5.0 22 95 USB-GigE N/A N/A Y N/A Y - Grasshopper3 2.3 5.0 26 90 USB-GigE N/A N/A Y N/A Y -

    Link to ROS 2 driver: https://github.com/berndpfrommer/flir_spinnaker_ros2

    Link to company website: https://www.flir.eu/iis/machine-vision/

    "},{"location":"reference-hw/cameras/#lucid-vision-cameras","title":"Lucid Vision Cameras","text":"

Lucid Vision cameras which have a ROS 2 driver and have been tested by one or more community members are listed below:

    Supported Products List MP FPS Interface HDR LFM Trigger /Synchronization Ingress Protection ROS 2 Driver Autoware Tested (Y/N) TRITON 054S 5.4 22 GigE Y Y Y up to IP67 Y Y TRITON 032S 3.2 35.4 GigE N/A N/A Y up to IP67 Y Y

Link to ROS 2 driver: https://gitlab.com/leo-drive/Drivers/arena_camera

Link to company website: https://thinklucid.com/triton-gige-machine-vision/

    "},{"location":"reference-hw/cameras/#allied-vision-cameras","title":"Allied Vision Cameras","text":"

Allied Vision cameras which have a ROS 2 driver and have been tested by one or more community members are listed below:

    Supported Products List MP FPS Interface HDR LFM Trigger /Synchronization Ingress Protection ROS 2 Driver Autoware Tested (Y/N) Mako G319 3.2 37.6 GigE N/A N/A Y N/A Y -

    Link to ROS 2 driver: https://github.com/neil-rti/avt_vimba_camera

    Link to company website: https://www.alliedvision.com/en/products/camera-series/mako-g

    "},{"location":"reference-hw/cameras/#neousys-technology-camera","title":"Neousys Technology Camera","text":"

Neousys Technology cameras which have a ROS 2 driver and have been tested by one or more community members are listed below:

    Supported Products List MP FPS Interface Sensor Format Lens ROS 2 Driver Autoware Tested (Y/N) AC-IMX390 2.0 30 GMSL2 (over PCIe-GL26 Grabber Card) 1/2.7\u201d 5-axis active adjustment with adhesive dispense Y Y

    Link to ROS 2 driver: https://github.com/ros-drivers/gscam

    Link to company website: https://www.neousys-tech.com/en/

    "},{"location":"reference-hw/full_drivers_list/","title":"Drivers List","text":""},{"location":"reference-hw/full_drivers_list/#drivers-list","title":"Drivers List","text":"

The drivers listed above are gathered in the following table for easy access, together with additional information:

    Type Maker Driver links License Maintainer Lidar VelodyneHesai Link Apache 2 david.wong@tier4.jpabraham.monrroy@map4.jp Lidar Velodyne Link BSD jwhitley@autonomoustuff.com Lidar Robosense Link BSD zdxiao@robosense.cn Lidar Hesai Link Apache 2 wuxiaozhou@hesaitech.com Lidar Leishen Link - - Lidar Livox Link MIT dev@livoxtech.com Lidar Ouster Link Apache 2 stevenmacenski@gmail.comtom@boxrobotics.ai Radar smartmicro Link Apache 2 opensource@smartmicro.de Radar Continental Engineering Link Apache 2 abraham.monrroy@tier4.jpsatoshi.tanaka@tier4.jp Camera Flir Link Apache 2 bernd.pfrommer@gmail.com Camera Lucid Vision Link - kcolak@leodrive.ai Camera Allied Vision Link Apache 2 at@email.com Camera Tier IV Link GPL - Camera Neousys Technology Link BSD jbo@jhu.edu GNSS NovAtel Link BSD preed@swri.org GNSS SBG Systems Link MIT support@sbg-systems.com GNSS PolyExplore Link - support@polyexplore.com"},{"location":"reference-hw/imu_ahrs_gnss_ins/","title":"IMU, AHRS & GNSS/INS","text":""},{"location":"reference-hw/imu_ahrs_gnss_ins/#imu-ahrs-gnssins","title":"IMU, AHRS & GNSS/INS","text":""},{"location":"reference-hw/imu_ahrs_gnss_ins/#novatel-gnssins-sensors","title":"NovAtel GNSS/INS Sensors","text":"

NovAtel GNSS/INS sensors which have a ROS 2 driver and have been tested by one or more community members are listed below:

    Supported Products List INS Rate Roll, Pitch, Yaw Acc. GNSS ROS 2 Driver\u00a0 Autoware Tested (Y/N) PwrPak7D-E2 200 Hz R (0.013\u00b0)P (0.013\u00b0)Y (0.070\u00b0) 20 HzL1 / L2 / L5 555 Channels Y - Span CPT7 200 Hz R (0.01\u00b0)\u00a0P (0.01\u00b0)\u00a0Y (0.03\u00b0) 20 Hz L1 / L2 / L5 555 Channels Y -

    Link to ROS 2 driver: https://github.com/swri-robotics/novatel_gps_driver/tree/dashing-devel

    Link to company website: https://hexagonpositioning.com/

    "},{"location":"reference-hw/imu_ahrs_gnss_ins/#xsens-gnssins-imu-sensors","title":"XSens GNSS/INS & IMU Sensors","text":"

XSens GNSS/INS sensors which have a ROS 2 driver and have been tested by one or more community members are listed below:

    Supported Products List INS/IMU Rate Roll, Pitch, Yaw Acc. GNSS ROS 2 Driver\u00a0 Autoware Tested (Y/N) MTi-680G 2 kHz R (0.2\u00b0)P (0.2\u00b0)Y (0.5\u00b0) 5 HzL1 / L2\u00a0184 Channels Y - MTi-300 AHRS 2 kHz R (0.2\u00b0)P (0.2\u00b0)Y (1\u00b0) Not Applicable Y -

    Link to ROS 2 driver: http://wiki.ros.org/xsens_mti_driver

    Link to company website: https://www.xsens.com/

    "},{"location":"reference-hw/imu_ahrs_gnss_ins/#sbg-gnssins-imu-sensors","title":"SBG GNSS/INS & IMU Sensors","text":"

SBG GNSS/INS sensors which have a ROS 2 driver and have been tested by one or more community members are listed below:

    Supported Products List INS/IMU Rate Roll, Pitch, Yaw Acc. GNSS ROS 2 Driver\u00a0 Autoware Tested (Y/N) Ellipse-D 200 Hz, 1 kHz (IMU) R (0.1\u00b0)P (0.1\u00b0)Y (0.05\u00b0) 5 HzL1 / L2184 Channels Y Y Ellipse-A (AHRS) 200 Hz, 1 kHz (IMU) R (0.1\u00b0)P (0.1\u00b0)Y (0.8\u00b0) Not Applicable Y -

    Link to ROS 2 driver: https://github.com/SBG-Systems/sbg_ros2

    Link to company website: https://www.sbg-systems.com/products/ellipse-series/

    "},{"location":"reference-hw/imu_ahrs_gnss_ins/#applanix-gnssins-sensors","title":"Applanix GNSS/INS Sensors","text":"

Applanix GNSS/INS sensors which have a ROS 2 driver and have been tested by one or more community members are listed below:

    Supported Products List INS/IMU Rate Roll, Pitch, Yaw Acc. GNSS ROS 2 Driver\u00a0 Autoware Tested (Y/N) POSLVX 200 Hz R (0.03\u00b0)P (0.03\u00b0)Y (0.09\u00b0) L1 / L2 / L5336 Channels Y Y POSLV220 200 Hz R (0.02\u00b0)P (0.02\u00b0)Y (0.05\u00b0) L1 / L2 / L5336 Channels Y Y

    Link to ROS 2 driver: http://wiki.ros.org/applanix_driver

    Link to company website: https://www.applanix.com/products/poslv.htm

    "},{"location":"reference-hw/imu_ahrs_gnss_ins/#polyexplore-gnssins-sensors","title":"PolyExplore GNSS/INS Sensors","text":"

PolyExplore GNSS/INS sensors which have a ROS 2 driver and have been tested by one or more community members are listed below:

    Supported Products List INS/IMU Rate Roll, Pitch, Yaw Acc. GNSS ROS 2 Driver\u00a0 Autoware Tested (Y/N) POLYNAV 2000P 100 Hz R (0.01\u00b0)P (0.01\u00b0)Y (0.1\u00b0) L1 / L2240 Channels Y - POLYNAV 2000S 100 Hz R (0.015\u00b0)P (0.015\u00b0)Y (0.08\u00b0) L1 / L240 Channels Y -

    Link to ROS 2 driver: https://github.com/polyexplore/ROS2_Driver

    Link to company website: https://www.polyexplore.com/

    "},{"location":"reference-hw/imu_ahrs_gnss_ins/#fixposition-visual-gnssins-sensors","title":"Fixposition Visual GNSS/INS Sensors","text":"Supported Products List INS/IMU Rate Roll, Pitch, Yaw Acc. GNSS ROS 2 Driver\u00a0 Autoware Tested (Y/N) Vision-RTK 2 200Hz - 5 HzL1 / L2 Y -

    Link to ROS 2 driver: https://github.com/fixposition/fixposition_driver

    Link to company website: https://www.fixposition.com/

    Additional utilities:

    • Fixposition GNSS transformation lib: https://github.com/fixposition/fixposition_gnss_tf
    • Miscellaneous utilities (logging, software update, ...): https://github.com/fixposition/fixposition_utility
    "},{"location":"reference-hw/lidars/","title":"LIDARs","text":""},{"location":"reference-hw/lidars/#lidars","title":"LIDARs","text":""},{"location":"reference-hw/lidars/#velodyne-3d-lidar-sensors","title":"Velodyne 3D LIDAR Sensors","text":"

Velodyne Lidars which have a ROS 2 driver and have been tested by one or more community members are listed below:

    Supported Products List Range FOV (V), (H) ROS 2 Driver Autoware Tested (Y/N) Alpha Prime 245m (+15\u00b0)/(-25\u00b0), (360\u00b0) Y Y Ultra Puck 200m (+15\u00b0)/(-25\u00b0), (360\u00b0) Y Y Puck 100m (+15\u00b0)/(-15\u00b0), (360\u00b0) Y Y Puck Hi-res 100m (+10\u00b0)/(-10\u00b0), (360\u00b0) Y Y

    Link to ROS 2 drivers: https://github.com/tier4/nebula https://github.com/ros-drivers/velodyne/tree/ros2/velodyne_pointcloud https://gitlab.com/autowarefoundation/autoware.auto/AutowareAuto/-/tree/master/src/drivers/velodyne_nodes https://github.com/autowarefoundation/awf_velodyne/tree/tier4/universe

    Link to company website: https://velodynelidar.com/

    "},{"location":"reference-hw/lidars/#robosense-3d-lidar-sensors","title":"RoboSense 3D LIDAR Sensors","text":"

RoboSense Lidars which have a ROS 2 driver and have been tested by one or more community members are listed below:

    Supported Products List Range FOV (V), (H) ROS 2 Driver Autoware Tested (Y/N) M1 200m 25\u00b0/120\u00b0 - - E1 30m 90\u00b0/120\u00b0 - - Bpearl 100m 90\u00b0/360\u00b0 Y Y Ruby Plus 250m 40\u00b0/360\u00b0 Y ? Helios 32 150m 70\u00b0/360\u00b0 31\u00b0/360\u00b0 26\u00b0/360\u00b0 Y Y Helios 16 150m 30\u00b0/360\u00b0 Y ?

    Link to ROS 2 driver: https://github.com/RoboSense-LiDAR/rslidar_sdk

    Link to company website: https://www.robosense.ai/

    "},{"location":"reference-hw/lidars/#hesai-3d-lidar-sensors","title":"HESAI 3D LIDAR Sensors","text":"

Hesai Lidars which have a ROS 2 driver and have been tested by one or more community members are listed below:

    Supported Products List Range FOV (V), (H) ROS 2 Driver Autoware Tested (Y/N) Pandar 128 200m (+15\u00b0)/(-25\u00b0), (360\u00b0) Y - Pandar 64 200m (+15\u00b0)/(-25\u00b0), (360\u00b0) Y Y Pandar 40P 200m (+15\u00b0)/(-25\u00b0), (360\u00b0) Y Y QT 128 50m (-52.6\u00b0/+52.6\u00b0), (360\u00b0) Y Y QT 64 20m (-52.1\u00b0/+52.1\u00b0), (360\u00b0) Y Y AT128 200m (25.4\u00b0), (120\u00b0) Y Y XT32 120m (-16\u00b0/+15\u00b0), (360\u00b0) Y Y XT16 120m (-15\u00b0/+15\u00b0), (360\u00b0) Y - FT120 100m (75\u00b0), (100\u00b0) - - ET25 250m (25\u00b0), (120\u00b0) - -

    Link to ROS 2 drivers: https://github.com/tier4/nebula https://github.com/HesaiTechnology/HesaiLidar_General_ROS

    Link to company website: https://www.hesaitech.com/en/

    "},{"location":"reference-hw/lidars/#leishen-3d-lidar-sensors","title":"Leishen 3D LIDAR Sensors","text":"

Leishen Lidars which have a ROS 2 driver and have been tested by one or more community members are listed below:

    Supported Products List Range FOV (V), (H) ROS 2 Driver Autoware Tested (Y/N) LS C16 150m (+15\u00b0/-15\u00b0), (360\u00b0) Y - LS C32\u00a0 150m (+15\u00b0/-15\u00b0), (360\u00b0) Y - CH 32 120m (+3.7\u00b0/-6.7\u00b0),(120\u00b0) Y - CH 128 20m (+14\u00b0/-17\u00b0)/(150\u00b0) Y - C32W 160m (+15\u00b0/-55\u00b0), (360\u00b0) Y -

    Link to ROS 2 driver: https://github.com/leishen-lidar

    Link to company website: http://www.lslidar.com/

    "},{"location":"reference-hw/lidars/#livox-3d-lidar-sensors","title":"Livox 3D LIDAR Sensors","text":"

Livox Lidars which have a ROS 2 driver and have been tested by one or more community members are listed below:

    Supported Products List Range FOV (V), (H) ROS 2 Driver Autoware Tested (Y/N) Horizon 260m (81.7\u00b0), (25.1\u00b0) Y Y Mid-40 260m (38.4\u00b0), Circular Y - Mid-70 90m (70.4\u00b0), (77.2\u00b0) Y - Mid-100 260m (38.4\u00b0), (98.4\u00b0) Y - Mid-360 70m (+52\u00b0/-7\u00b0), (360\u00b0) Y - Avia 190m (70.4\u00b0), Circular Y - HAP 150m (25\u00b0), (120\u00b0) - - Tele-15 320m (16.2\u00b0), (14.5\u00b0) - -

    Link to ROS 2 driver: https://github.com/Livox-SDK/livox_ros2_driver

    Link to company website: https://www.livoxtech.com/

    "},{"location":"reference-hw/lidars/#ouster-3d-lidar-sensors","title":"Ouster 3D LIDAR Sensors","text":"

Ouster Lidars which have a ROS 2 driver and have been tested by one or more community members are listed below:

    Supported Products List Range FOV (V), (H) ROS 2 Driver Autoware Tested (Y/N) OSDome 45m (180\u00b0), (360\u00b0) Y - OS0 100m (90\u00b0), (360\u00b0) Y - OS1 200m (45\u00b0), (360\u00b0) Y - OS2 400m (22,5\u00b0), (360\u00b0) Y Y

    Link to ROS 2 driver: https://github.com/ros-drivers/ros2_ouster_drivers

    Link to company website: https://ouster.com/

    "},{"location":"reference-hw/radars/","title":"RADARs","text":""},{"location":"reference-hw/radars/#radars","title":"RADARs","text":""},{"location":"reference-hw/radars/#smartmicro-automotive-radars","title":"Smartmicro Automotive Radars","text":"

Smartmicro Radars which have a ROS 2 driver and have been tested by one or more community members are listed below:

    Supported Products List Range FOV (Azimuth), (Elevation) ROS 2 Driver Autoware Tested (Y/N) DRVEGRD 152 (Dual Mode Medium, Long) M: 0.33...66 m L: 0.9\u2026180 m (100\u00b0), (20\u00b0) Y - DRVEGRD 169 (Ultra-Short, Short, Medium, Long) US: 0.1\u20269.5 m S: 0.2\u202619 m M: 0.6...56 m L: 1.3...130 m US: (140\u00b0), (28\u00b0) S/M/L: (130\u00b0), (15\u00b0) Y - DRVEGRD 171 (Triple Mode Short, Medium Long) S: 0.2...40 m M: 0.5...100 m L: 1.2...240 m (100\u00b0), (20\u00b0) Y -

    Link to ROS 2 driver: https://github.com/smartmicro/smartmicro_ros2_radars

    Link to company website: https://www.smartmicro.com/automotive-radar

    "},{"location":"reference-hw/radars/#aptiv-automotive-radars","title":"Aptiv Automotive Radars","text":"

Aptiv Radars which have a ROS 2 driver and have been tested by one or more community members are listed below:

    Supported Products List Range FOV (Azimuth), (Elevation) ROS 2 Driver Autoware Tested (Y/N) Aptiv MMR (Dual Mode Short, Long) S: 1...40 m L: 3...160 m Short.: (90), (90\u00b0) Long: (90\u00b0), (90\u00b0) Y - Aptiv ESR 2.5 (Dual Mode (Medium, Long)) M: 1...60 m L: 1...175 m Med.: (90\u00b0), (4.4\u00b0) Long: (20\u00b0), (4.4\u00b0) Y -

    Link to company website: https://autonomoustuff.com/products

    "},{"location":"reference-hw/radars/#continental-engineering-radars","title":"Continental Engineering Radars","text":"

Continental Engineering Radars which have a ROS 2 driver and have been tested by one or more community members are listed below:

    Supported Products List Range FOV (Azimuth), (Elevation) ROS 2 Driver Autoware Tested (Y/N) ARS404 Near: 70m Far: 170m Near: (90\u00b0), (18\u00b0) Far: (18\u00b0), (18\u00b0) - - ARS408 Near: 20m Far: 250m Near: (120\u00b0), (20\u00b0) Far: (18\u00b0), (14\u00b0) - -

    Link to ROS 2 driver: https://github.com/tier4/ars408_driver

    Link to company website: https://conti-engineering.com/components/ars430/

    "},{"location":"reference-hw/remote_drive/","title":"Remote Drive","text":""},{"location":"reference-hw/remote_drive/#remote-drive","title":"Remote Drive","text":""},{"location":"reference-hw/remote_drive/#fort-robotics","title":"FORT ROBOTICS","text":"

Fort Robotics remote control & E-stop devices which are used for autonomous driving and have been tested by one or more community members are listed below:

    Supported Products Op.Frequency Controller ROS 2 Support Autoware Tested (Y/N) Vehicle Safety Controller with E-stop 900 Mhz radio: up to 2km LOS2.4Ghz radio: up to 500m LOS IP 66 EnclosureBuilt-in emergency stop safety control(2) 2-axis joysticks(2) 1-axis finger sticks(8) buttons - -

    Link to company website: https://fortrobotics.com/vehicle-safety-controller/

    "},{"location":"reference-hw/remote_drive/#logitech","title":"LOGITECH","text":"

Logitech joysticks which are used for autonomous driving and have been tested by one or more community members are listed below:

    Supported Products Op.Frequency Controller ROS 2 Support Autoware Tested (Y/N) Logitech F-710 2.4 GHz Wireless, 10m range (2) 2-axis joysticks (18) buttons Y Y

    Link to ROS driver: http://wiki.ros.org/joy

    Link to company website: https://www.logitechg.com/en-us/products/gamepads/f710-wireless-gamepad.html

    "},{"location":"reference-hw/thermal_cameras/","title":"Thermal CAMERAs","text":""},{"location":"reference-hw/thermal_cameras/#thermal-cameras","title":"Thermal CAMERAs","text":""},{"location":"reference-hw/thermal_cameras/#flir-thermal-automotive-dev-kit","title":"FLIR Thermal Automotive Dev. Kit","text":"

FLIR ADK Thermal Vision cameras which have a ROS 2 driver and have been tested by one or more community members are listed below:

    Supported Products List MP FPS Interface Spectral Band FOV ROS 2 Driver Autoware Tested (Y/N) FLIR ADK 640x512 30 USB-GMSL,Ethernet 8-14 um (LWIR) 75\u02da, 50\u02da, 32\u02da, and 24\u02da - -"},{"location":"reference-hw/vehicle_drive_by_wire_suppliers/","title":"Vehicle Drive By Wire Suppliers","text":""},{"location":"reference-hw/vehicle_drive_by_wire_suppliers/#vehicle-drive-by-wire-suppliers","title":"Vehicle Drive By Wire Suppliers","text":""},{"location":"reference-hw/vehicle_drive_by_wire_suppliers/#dataspeed-dbw-solutions","title":"Dataspeed DBW Solutions","text":"

Dataspeed DBW Controllers which are used for autonomous driving and have been tested by one or more community members are listed below:

    Supported Vehicles Power Remote Control ROS 2 Support Autoware Tested (Y/N) Lincoln MKZ, NautilusFord Fusion, F150, Transit Connect, RangerChrysler PacificaJeep CherokeePolaris GEM, RZR, Lincoln Aviator, Jeep Grand Cherokee 12 Channel PDS,15 A Each at 12 V Optional, Available Y -

    Link to company website: https://www.dataspeedinc.com/

    "},{"location":"reference-hw/vehicle_drive_by_wire_suppliers/#astuff-pacmod-dbw-solutions","title":"AStuff Pacmod DBW Solutions","text":"

Autonomous Stuff Pacmod DBW Controllers which are used for autonomous driving and have been tested by one or more community members are listed below:

    Supported Vehicles Power Remote Control ROS 2 Support Autoware Tested (Y/N) Polaris GEM SeriesPolaris eLXD MY 2016+Polaris Ranger X900International ProStarLexus RX-450h MYFord RangerToyota MinivanFord TransitHonda CR-V Power distribution panel Optional, Available Y Y

    Link to company website: https://autonomoustuff.com/platform/pacmod

    "},{"location":"reference-hw/vehicle_drive_by_wire_suppliers/#schaeffler-paravan-space-drive-dbw-solutions","title":"Schaeffler-Paravan Space Drive DBW Solutions","text":"

Schaeffler-Paravan Space Drive DBW Controllers which are used for autonomous driving and have been tested by one or more community members are listed below:

    Supported Vehicles Power Remote Control ROS 2 Support Autoware Tested (Y/N) Custom Integration with Actuators - Optional, Available Y Y

    Link to company website: https://www.schaeffler-paravan.de/en/products/space-drive-system/

    "},{"location":"reference-hw/vehicle_platform_suppliers/","title":"Vehicle Platform Suppliers","text":""},{"location":"reference-hw/vehicle_platform_suppliers/#vehicle-platform-suppliers","title":"Vehicle Platform Suppliers","text":""},{"location":"reference-hw/vehicle_platform_suppliers/#pix-moving-autonomous-vehicle-solutions","title":"PIX MOVING Autonomous Vehicle Solutions","text":"

PIX Moving AV solutions which are used for autonomous development and have been tested by one or more community members are listed below:

    Vehicle Types Sensors Integrated Autoware Installed ROS 2 Support Autoware Tested (Y/N) Electric DBW Chassis and Platforms Y Y Y -

    Link to company website: https://www.pixmoving.com/pixkit

    Different sizes of platforms

    "},{"location":"reference-hw/vehicle_platform_suppliers/#autonomoustuff-av-solutions","title":"Autonomoustuff AV Solutions","text":"

Autonomoustuff platform solutions which are used for autonomous development and have been tested by one or more community members are listed below:

    Vehicle Types Sensors Integrated Autoware Installed ROS 2 Support Autoware Tested (Y/N) Road Vehicles, Golf Carts & Trucks Y Y Y -

    Link to company website: https://autonomoustuff.com/platform

    "},{"location":"support/","title":"Support","text":""},{"location":"support/#support","title":"Support","text":"

    This page explains several support resources.

    • Support guidelines pages explain the support mechanisms and guidelines.
    • Troubleshooting pages explain solutions for common issues.
    • Docs guide pages explain related documentation sites.
    "},{"location":"support/docs-guide/","title":"Docs guide","text":""},{"location":"support/docs-guide/#docs-guide","title":"Docs guide","text":"

    This page explains several documentation sites that are useful for Autoware and ROS development.

    • The Autoware Foundation is the official site of the Autoware Foundation. You can learn about the Autoware community here.
    • Autoware Documentation (this site) is the central documentation site for Autoware maintained by the Autoware community. General software-related information of Autoware is aggregated here.
    • Autoware Universe Documentation has READMEs and design documents of software components.
    • ROS Docs Guide explains the ROS 1 and ROS 2 documentation infrastructure.
    "},{"location":"support/support-guidelines/","title":"Support guidelines","text":""},{"location":"support/support-guidelines/#support-guidelines","title":"Support guidelines","text":"

    This page explains the support mechanisms we provide.

    Warning

    Before asking for help, search and read this documentation site carefully. Also, follow the discussion guidelines for discussions.

    Choose appropriate resources depending on what kind of help you need and read the detailed description in the sections below.

    • Documentation sites
      • Gathering information
    • GitHub Discussions
      • Questions or unconfirmed bugs -> Q&A
      • Feature requests
      • Design discussions
    • GitHub Issues
      • Confirmed bugs
      • Confirmed tasks
    • Discord
      • Instant messaging between contributors
    • ROS Discourse
      • General topics that should be widely announced
    "},{"location":"support/support-guidelines/#guidelines-for-autoware-community-support","title":"Guidelines for Autoware community support","text":"

    If you encounter a problem with Autoware, please follow these steps to seek help:

    "},{"location":"support/support-guidelines/#1-search-for-existing-issues-and-questions","title":"1. Search for existing Issues and Questions","text":"

    Before creating a new issue or question, check if someone else has already reported or asked about the problem. Use the following resources:

    • Issues

      Note that Autoware has multiple repositories listed in autoware.repos. It is recommended to search across all repositories.

    • Questions
    "},{"location":"support/support-guidelines/#2-create-a-new-question-thread","title":"2. Create a new question thread","text":"

    If you don't find an existing issue or question that addresses your problem, create a new question thread:

    • Ask a Question

      If your question is not answered within a week, mention @autoware-maintainers in a post to remind them.

    "},{"location":"support/support-guidelines/#3-participate-in-other-discussions","title":"3. Participate in other discussions","text":"

    You are also welcome to open or join discussions in other categories:

    • Feature requests
    • Design discussions
    "},{"location":"support/support-guidelines/#additional-resources","title":"Additional resources","text":"

    If you are unsure how to create a discussion, refer to the GitHub Docs on creating a new discussion.

    "},{"location":"support/support-guidelines/#documentation-sites","title":"Documentation sites","text":"

    Docs guide shows the list of useful documentation sites. Visit them and see if there is any information related to your problem.

    Note that the documentation sites aren't always up-to-date and perfect. If you find out that some information is wrong, unclear, or missing in Autoware docs, feel free to submit a pull request following the contribution guidelines.

    "},{"location":"support/support-guidelines/#github-discussions","title":"GitHub Discussions","text":"

    GitHub discussions page is the primary place for asking questions and discussing topics related to Autoware.

    | Category | Description |
    | --- | --- |
    | Announcements | Official updates and news from the Autoware maintainers |
    | Design | Discussions on Autoware system and software design |
    | Feature requests | Suggestions for new features and improvements |
    | General | General discussions about Autoware |
    | Ideas | Brainstorming and sharing innovative ideas |
    | Polls | Community polls and surveys |
    | Q&A | Questions and answers from the community and developers |
    | Show and tell | Showcase of projects and achievements |
    | TSC meetings | Minutes and discussions from TSC (Technical Steering Committee) meetings |
    | Working group activities | Updates on working group activities |
    | Working group meetings | Minutes and discussions from working group meetings |

    Warning

    GitHub Discussions is not the right place to track tasks or bugs. Use GitHub Issues for that purpose.

    "},{"location":"support/support-guidelines/#github-issues","title":"GitHub Issues","text":"

    GitHub Issues is the designated platform for tracking confirmed bugs, tasks, and enhancements within Autoware's various repositories.

    Follow these guidelines to ensure efficient issue tracking and resolution:

    "},{"location":"support/support-guidelines/#reporting-bugs","title":"Reporting bugs","text":"

    If you encounter a confirmed bug, please report it by creating an issue in the appropriate Autoware repository. Include detailed information such as steps to reproduce, expected outcomes, and actual results to assist maintainers in addressing the issue promptly.

    "},{"location":"support/support-guidelines/#tracking-tasks","title":"Tracking tasks","text":"

    GitHub Issues is also the place for managing tasks including:

    • Refactoring: Propose refactoring existing code to improve efficiency, readability, or maintainability. Clearly describe what and why you propose to refactor.
    • New Features: If you have confirmed the need for a new feature through discussions, use Issues to track its development. Outline the feature's purpose, potential designs, and its intended impact.
    • Documentation: Propose changes to documentation to fix inaccuracies, update outdated content, or add new sections. Specify what changes are needed and why they are important.
    "},{"location":"support/support-guidelines/#creating-an-issue","title":"Creating an issue","text":"

    When creating a new issue, use the following guidelines:

    1. Choose the Correct Repository: If unsure which repository is appropriate, start a discussion in the Q&A category to seek guidance from maintainers.
    2. Use Clear, Concise Titles: Clearly summarize the issue or task in the title for quick identification.
    3. Provide Detailed Descriptions: Include all necessary details to understand the context and scope of the issue. Attach screenshots, error logs, and code snippets where applicable.
    4. Tag Relevant Contributors: Mention contributors or teams that might be impacted by or interested in the issue.
    "},{"location":"support/support-guidelines/#linking-issues-and-pull-requests","title":"Linking issues and pull requests","text":"

    When you start working on an issue, link the related pull request to the issue by mentioning the issue number. This helps maintain a clear and traceable development history.

    For more details, see the Pull Request Guidelines page.

    Warning

    GitHub Issues is not for questions or unconfirmed bugs. If an issue is created for such purposes, it will likely be transferred to GitHub Discussions for further clarification.

    "},{"location":"support/support-guidelines/#discord","title":"Discord","text":"

    Autoware has a Discord server for casual communication between contributors.

    The Autoware Discord server is a good place for the following activities:

    • Introduce yourself to the community.
    • Chat with contributors.
    • Take a quick straw poll.

    Note that it is not the right place to get help for your issues.

    "},{"location":"support/support-guidelines/#ros-discourse","title":"ROS Discourse","text":"

    If you want to widely discuss a topic with the general Autoware and ROS community or ask a question not related to Autoware's bugs, post to the Autoware category on ROS Discourse.

    Warning

    Do not post questions about bugs to ROS Discourse!

    "},{"location":"support/troubleshooting/","title":"Troubleshooting","text":""},{"location":"support/troubleshooting/#troubleshooting","title":"Troubleshooting","text":""},{"location":"support/troubleshooting/#setup-issues","title":"Setup issues","text":""},{"location":"support/troubleshooting/#cuda-related-errors","title":"CUDA-related errors","text":"

    When installing CUDA, errors may occur because of version conflicts. To resolve these types of errors, try one of the following methods:

    • Unhold all CUDA-related libraries and rerun the setup script.

      sudo apt-mark unhold  \\\n\"cuda*\"             \\\n\"libcudnn*\"         \\\n\"libnvinfer*\"       \\\n\"libnvonnxparsers*\" \\\n\"libnvparsers*\"     \\\n\"tensorrt*\"         \\\n\"nvidia*\"\n\n./setup-dev-env.sh\n
    • Uninstall all CUDA-related libraries and rerun the setup script.

      sudo apt purge        \\\n\"cuda*\"             \\\n\"libcudnn*\"         \\\n\"libnvinfer*\"       \\\n\"libnvonnxparsers*\" \\\n\"libnvparsers*\"     \\\n\"tensorrt*\"         \\\n\"nvidia*\"\n\nsudo apt autoremove\n\n./setup-dev-env.sh\n

    Warning

    Note that this may break your system, so proceed carefully.

    • Run the setup script without installing CUDA-related libraries.

      ./setup-dev-env.sh --no-nvidia\n

    Warning

    Note that some components in Autoware Universe require CUDA, and only the CUDA version in the env file is supported at this time. Autoware may work with other CUDA versions, but those versions are not supported and functionality is not guaranteed.
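
    To compare the CUDA version your setup expects with what is actually installed, a minimal sketch is shown below. The file name amd64.env and the idea that it pins the supported versions are assumptions based on the default repository layout; adjust the path to your checkout.

    ```bash
    # Show the CUDA-related versions pinned by the setup scripts
    # (file name and location are assumptions; adjust to your checkout).
    grep -i cuda ~/autoware/amd64.env

    # Show what is actually installed on the system.
    nvcc --version 2>/dev/null || echo "nvcc not found"
    nvidia-smi | head -n 3
    ```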

    "},{"location":"support/troubleshooting/#build-issues","title":"Build issues","text":""},{"location":"support/troubleshooting/#insufficient-memory","title":"Insufficient memory","text":"

    Building Autoware requires a lot of memory, and your machine can freeze or crash if memory runs out during a build. To avoid this problem, 16-32GB of swap should be configured.

    # Optional: Check the current swapfile\nfree -h\n\n# Remove the current swapfile\nsudo swapoff /swapfile\nsudo rm /swapfile\n\n# Create a new swapfile\nsudo fallocate -l 32G /swapfile\nsudo chmod 600 /swapfile\nsudo mkswap /swapfile\nsudo swapon /swapfile\n\n# Optional: Check if the change is reflected\nfree -h\n

    For more detailed configuration steps, along with an explanation of swap, refer to Digital Ocean's "How To Add Swap Space on Ubuntu 20.04" tutorial.

    If your machine has many CPU cores (more than 64), the build might require more memory. A workaround is to limit the number of parallel jobs while building.

    MAKEFLAGS=\"-j4\" colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release\n

    You can adjust -j4 to any number based on your system. For more details, see the manual page of GNU make.

    By reducing the number of packages built in parallel, you can also reduce the amount of memory used. In the following example, the number of packages built in parallel is set to 1, and the number of jobs used by make is limited to 1.

    MAKEFLAGS=\"-j1\" colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release --parallel-workers 1\n

    Note

    By lowering both the number of packages built in parallel and the number of jobs used by make, you can reduce the memory usage. However, this also means that the build process takes longer.

    "},{"location":"support/troubleshooting/#errors-when-using-the-latest-version-of-autoware","title":"Errors when using the latest version of Autoware","text":"

    If you are working with the latest version of Autoware, issues can occur due to out-of-date software or old build files.

    To resolve these types of problems, first try cleaning your build artifacts and rebuilding:

    rm -rf build/ install/ log/\ncolcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release\n

    If the error is not resolved, remove src/ and update your workspace according to installation type (Docker / source).

    Warning

    Before removing src/, confirm that there are no modifications in your local environment that you want to keep!

    If errors still persist after trying the steps above, delete the entire workspace, clone the repository once again and restart the installation process.

    rm -rf autoware/\ngit clone https://github.com/autowarefoundation/autoware.git\n
    "},{"location":"support/troubleshooting/#errors-when-using-a-fixed-version-of-autoware","title":"Errors when using a fixed version of Autoware","text":"

    In principle, errors should not occur when using a fixed version. That said, possible causes include:

    • ROS 2 has been updated with breaking changes.
      • For confirmation, check the Packaging and Release Management tag on ROS Discourse.
    • Your local environment is broken.
      • Confirm your .bashrc file, environment variables, and library versions.

    In addition to the causes listed above, there are two common misunderstandings around the use of fixed versions.

    1. You used a fixed version for autowarefoundation/autoware only. All of the repository versions in the .repos file must be specified in order to use a completely fixed version.

    2. You didn't update the workspace after changing the branch of autowarefoundation/autoware. Changing the branch of autowarefoundation/autoware does not affect the files under src/. You have to run the vcs import command to update them.
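
    For reference, a minimal sketch of updating the workspace after switching branches is shown below; it assumes the standard layout with autoware.repos at the workspace root, and the branch name is a placeholder.

    ```bash
    cd ~/autoware
    git checkout <your-release-branch-or-tag>   # placeholder branch name

    # Re-import the pinned repository versions into src/ and pull the changes.
    vcs import src < autoware.repos
    vcs pull src
    ```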

    "},{"location":"support/troubleshooting/#error-when-building-python-package","title":"Error when building python package","text":"

    During the build, the following issue can occur:

    pkg_resources.extern.packaging.version.InvalidVersion: Invalid version: '0.23ubuntu1'\n

    The error occurs because setuptools versions between 66.0.0 and 67.5.0 enforce PEP 440-conformant versions for Python packages. Since version 67.5.1, setuptools has a fallback that makes it possible to work with old packages again.

    The solution is to update setuptools to the newest version with the following command:

    pip install --upgrade setuptools\n
    "},{"location":"support/troubleshooting/#dockerrocker-issues","title":"Docker/rocker issues","text":"

    If any errors occur when running Autoware with Docker or rocker, first confirm that your Docker installation is working correctly by running the following commands:

    docker run --rm -it hello-world\ndocker run --rm -it ubuntu:latest\n

    Next, confirm that you are able to access the base Autoware image that is stored on the GitHub Packages website:

    docker run --rm -it ghcr.io/autowarefoundation/autoware-universe:latest\n
    "},{"location":"support/troubleshooting/#runtime-issues","title":"Runtime issues","text":""},{"location":"support/troubleshooting/#performance-related-issues","title":"Performance related issues","text":"

    Symptoms:

    • Autoware is running slower than expected
    • Messages show up late in RViz2
    • Point clouds are lagging
    • Camera images are lagging behind
    • Point clouds or markers flicker on RViz2
    • When multiple subscribers use the same publishers, the message rate drops

    If you have any of these symptoms, please see the Performance Troubleshooting page.

    "},{"location":"support/troubleshooting/#map-does-not-display-when-running-the-planning-simulator","title":"Map does not display when running the Planning Simulator","text":"

    When running the Planning Simulator, the most common reason for the map not being displayed in RViz is that the map path has not been specified correctly in the launch command. You can confirm if this is the case by searching for Could not find lanelet map under {path-to-map-dir}/lanelet2_map.osm errors in the log.
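
    A minimal sketch of how to search for that error, assuming the default ROS 2 log directory ~/.ros/log (adjust if you set ROS_LOG_DIR):

    ```bash
    # Look for the map-loading error in the most recent launch logs.
    grep -r "Could not find lanelet map" ~/.ros/log/ | tail -n 5
    ```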

    Another possible reason is that map loading is taking a long time due to poor DDS performance. For this, please visit the Performance Troubleshooting page.

    "},{"location":"support/troubleshooting/#died-process-issues","title":"Died process issues","text":"

    Some modules may not be launched properly at runtime, and you may see \"process has died\" in your terminal. You can use the gdb tool to locate where the problem is occurring.

    Build the modules you wish to analyze in debug mode under your Autoware workspace:

    colcon build --cmake-args -DCMAKE_BUILD_TYPE=Debug --packages-up-to <the modules you wish to analyze> --catkin-skip-building-tests --symlink-install\n

    In this state, a core file will be created whenever a process dies after you run Autoware again. Remember to remove the size limit of the core file first.

    ulimit -c unlimited\n

    Configure the kernel to name core files core.<PID>:

    echo core | sudo tee /proc/sys/kernel/core_pattern\necho -n 1 | sudo tee /proc/sys/kernel/core_uses_pid\n

    Launch Autoware again. When a process dies, a core file will be created. Running ll -ht helps you check whether it was created.

    Invoke the gdb tool:

    gdb <executable file> <core file>\n#You can find the `<executable file>` in the error message.\n

    bt prints a backtrace of the stack at the point where the process died. f <frame number> shows the details of a frame, and l shows the corresponding source code.
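
    For example, a typical debugging session might look like the following sketch; the executable path and core file name are placeholders taken from the error message and the ls output.

    ```bash
    gdb /path/to/<executable> core.<PID>   # placeholders

    # Inside gdb:
    #   (gdb) bt            # backtrace of the crashed thread
    #   (gdb) f 3           # select and inspect frame number 3
    #   (gdb) l             # list the source code around that frame
    #   (gdb) info locals   # print local variables of the selected frame
    ```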

    "},{"location":"support/troubleshooting/performance-troubleshooting/","title":"Performance Troubleshooting","text":""},{"location":"support/troubleshooting/performance-troubleshooting/#performance-troubleshooting","title":"Performance Troubleshooting","text":"

    Overall symptoms:

    • Autoware is running slower than expected
    • Messages show up late in RViz2
    • Point clouds are lagging
    • Camera images are lagging behind
    • Point clouds or markers flicker on RViz2
    • When multiple subscribers use the same publishers, the message rate drops
    "},{"location":"support/troubleshooting/performance-troubleshooting/#diagnostic-steps","title":"Diagnostic Steps","text":""},{"location":"support/troubleshooting/performance-troubleshooting/#check-if-multicast-is-enabled","title":"Check if multicast is enabled","text":""},{"location":"support/troubleshooting/performance-troubleshooting/#target-symptoms","title":"Target symptoms","text":"
    • When multiple subscribers use the same publishers, the message rate drops
    "},{"location":"support/troubleshooting/performance-troubleshooting/#diagnosis","title":"Diagnosis","text":"

    Make sure that the multicast is enabled for your interface.

    For example, when you run the following:

    source /opt/ros/humble/setup.bash\nros2 run demo_nodes_cpp talker\n

    If you get the error message selected interface \"{your-interface-name}\" is not multicast-capable: disabling multicast, this should be fixed.

    "},{"location":"support/troubleshooting/performance-troubleshooting/#solution","title":"Solution","text":"

    Follow DDS settings for ROS 2 and Autoware

    Especially the Enable multicast on lo section.
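
    In short, that section boils down to enabling multicast on the loopback interface. A minimal sketch is shown below; note that the setting does not persist across reboots, so see the linked page for how to make it permanent.

    ```bash
    # Enable multicast on the loopback interface for the current session.
    sudo ip link set lo multicast on

    # Verify the flag is set (the output should include MULTICAST).
    ip link show lo
    ```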

    "},{"location":"support/troubleshooting/performance-troubleshooting/#check-the-compilation-flags","title":"Check the compilation flags","text":""},{"location":"support/troubleshooting/performance-troubleshooting/#target-symptoms_1","title":"Target symptoms","text":"
    • Autoware is running slower than expected
    • Point clouds are lagging
    • When multiple subscribers use the same publishers, the message rate drops even further
    "},{"location":"support/troubleshooting/performance-troubleshooting/#diagnosis_1","title":"Diagnosis","text":"

    Check the ~/.bash_history file to see if there are any colcon build commands without the -DCMAKE_BUILD_TYPE=Release or -DCMAKE_BUILD_TYPE=RelWithDebInfo flag.
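
    For example, a quick way to spot such builds (a minimal sketch; it only covers commands that were recorded in ~/.bash_history):

    ```bash
    # List past colcon build invocations that did not specify a CMake build type.
    grep "colcon build" ~/.bash_history | grep -v "CMAKE_BUILD_TYPE"
    ```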

    Even if a build initially used these flags, compiling the same workspace again without them will still result in a slow build.

    In addition, the nodes will generally run slowly, especially the pointcloud_preprocessor nodes.

    Example issue: issue2597

    "},{"location":"support/troubleshooting/performance-troubleshooting/#solution_1","title":"Solution","text":"
    • Remove the build, install and optionally log folders in the main autoware folder.
    • Compile Autoware with either the Release or RelWithDebInfo flag:

      colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release\n# Or build with debug flags too (comparable performance but you can debug too)\ncolcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=RelWithDebInfo\n
    "},{"location":"support/troubleshooting/performance-troubleshooting/#check-the-dds-settings","title":"Check the DDS settings","text":""},{"location":"support/troubleshooting/performance-troubleshooting/#target-symptoms_2","title":"Target symptoms","text":"
    • Autoware is running slower than expected
    • Messages show up late in RViz2
    • Point clouds are lagging
    • Camera images are lagging behind
    • When multiple subscribers use the same publishers, the message rate drops
    "},{"location":"support/troubleshooting/performance-troubleshooting/#check-the-rmw-ros-middleware-implementation","title":"Check the RMW (ROS Middleware) implementation","text":""},{"location":"support/troubleshooting/performance-troubleshooting/#diagnosis_2","title":"Diagnosis","text":"

    Run the following to check which middleware is used:

    echo $RMW_IMPLEMENTATION\n

    The return line should be rmw_cyclonedds_cpp. If not, apply the solution.

    If you are using a different DDS middleware, we might not have official support for it just yet.

    "},{"location":"support/troubleshooting/performance-troubleshooting/#solution_2","title":"Solution","text":"

    Add export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp as a separate line in your ~/.bashrc file.
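
    For example, a minimal sketch of appending the line and reloading your shell:

    ```bash
    echo 'export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp' >> ~/.bashrc
    source ~/.bashrc
    echo $RMW_IMPLEMENTATION   # should now print rmw_cyclonedds_cpp
    ```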

    More details in: CycloneDDS Configuration

    "},{"location":"support/troubleshooting/performance-troubleshooting/#check-if-the-cyclonedds-is-configured-correctly","title":"Check if the CycloneDDS is configured correctly","text":""},{"location":"support/troubleshooting/performance-troubleshooting/#diagnosis_3","title":"Diagnosis","text":"

    Run the following to check the CycloneDDS configuration .xml file:

    echo $CYCLONEDDS_URI\n

    The return line should be a valid path pointing to an .xml file with CycloneDDS configuration.

    Also check if the file is configured correctly:

    cat ${CYCLONEDDS_URI#file://}\n

    This should print the .xml file on the terminal.

    "},{"location":"support/troubleshooting/performance-troubleshooting/#solution_3","title":"Solution","text":"

    Follow CycloneDDS Configuration and make sure:

    • you have export CYCLONEDDS_URI=file:///absolute_path_to_your/cyclonedds.xml as a line on your ~/.bashrc file.
    • you have the cyclonedds.xml with the configuration provided in the documentation.
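
    As a sketch of what those two items look like in practice (the file path is a placeholder, and the XML content itself should be taken from the linked documentation):

    ```bash
    # Point CycloneDDS at your configuration file (path is a placeholder).
    echo 'export CYCLONEDDS_URI=file:///absolute/path/to/cyclonedds.xml' >> ~/.bashrc
    source ~/.bashrc

    # Confirm the variable is set and the file is readable.
    echo $CYCLONEDDS_URI
    cat ${CYCLONEDDS_URI#file://}
    ```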
    "},{"location":"support/troubleshooting/performance-troubleshooting/#check-the-linux-kernel-maximum-buffer-size","title":"Check the Linux kernel maximum buffer size","text":""},{"location":"support/troubleshooting/performance-troubleshooting/#diagnosis_4","title":"Diagnosis","text":"

    Validate the sysctl settings

    "},{"location":"support/troubleshooting/performance-troubleshooting/#solution_4","title":"Solution","text":"

    Tune system-wide network settings
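
    As an illustration, the tuning typically involves raising the kernel's network buffer limits with sysctl. The values below are commonly suggested ones, not authoritative; check the linked page for the recommended settings.

    ```bash
    # Apply larger buffer limits for the current session (example values).
    sudo sysctl -w net.core.rmem_max=2147483647
    sudo sysctl -w net.ipv4.ipfrag_high_thresh=134217728

    # Check the current values.
    sysctl net.core.rmem_max net.ipv4.ipfrag_high_thresh
    ```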

    "},{"location":"support/troubleshooting/performance-troubleshooting/#check-if-localhost-only-communication-for-dds-is-enabled","title":"Check if localhost only communication for DDS is enabled","text":"
    • If you are using a multi-computer setup, please skip this check.
    • Enabling localhost only communication for DDS can help improve the performance of ROS by reducing network traffic and avoiding potential conflicts with other devices on the network.
    "},{"location":"support/troubleshooting/performance-troubleshooting/#target-symptoms_3","title":"Target symptoms","text":"
    • You see topics that shouldn't exist
    • You see point clouds that don't belong to your machine
      • They might be from another computer running ROS 2 on your network
    • Point clouds or markers flicker on RViz2
      • Another publisher (on another machine) may be publishing on the same topic as your node, causing the flickering.
    "},{"location":"support/troubleshooting/performance-troubleshooting/#diagnosis_5","title":"Diagnosis","text":"

    Run:

    cat ${CYCLONEDDS_URI#file://}\n

    The output should match this file from DDS settings for ROS 2 and Autoware: CycloneDDS Configuration.

    "},{"location":"support/troubleshooting/performance-troubleshooting/#solution_5","title":"Solution","text":"

    Follow DDS settings for ROS 2 and Autoware: Enable localhost-only communication.

    Also make sure the following returns an empty line:

    echo $ROS_LOCALHOST_ONLY\n
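
    If the variable is set, a minimal sketch for clearing it (assuming it was exported from one of your shell startup files):

    ```bash
    # Remove the variable from the current shell.
    unset ROS_LOCALHOST_ONLY

    # Check whether it is still exported somewhere in your startup files.
    grep -n "ROS_LOCALHOST_ONLY" ~/.bashrc
    ```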
    "},{"location":"tutorials/","title":"Tutorials","text":""},{"location":"tutorials/#tutorials","title":"Tutorials","text":""},{"location":"tutorials/#simulation-tutorials","title":"Simulation tutorials","text":"

    Simulations provide a way of verifying Autoware's functionality before field testing with an actual vehicle. There are three main types of simulation that can be run ad hoc or via a scenario runner.

    "},{"location":"tutorials/#simulation-methods","title":"Simulation methods","text":""},{"location":"tutorials/#ad-hoc-simulation","title":"Ad hoc simulation","text":"

    Ad hoc simulation is a flexible method for running basic simulations on your local machine, and is the recommended method for anyone new to Autoware.

    "},{"location":"tutorials/#scenario-simulation","title":"Scenario simulation","text":"

    Scenario simulation uses a scenario runner to run more complex simulations based on predefined scenarios. It is often run automatically for continuous integration purposes, but can also be run on a local machine.

    "},{"location":"tutorials/#simulation-types","title":"Simulation types","text":""},{"location":"tutorials/#planning-simulation","title":"Planning simulation","text":"

    Planning simulation uses simple dummy data to test the Planning and Control components - specifically path generation, path following and obstacle avoidance. It verifies that a vehicle can reach a goal destination while avoiding pedestrians and surrounding cars, and is another method for verifying the validity of Lanelet2 maps. It also allows for testing of traffic light handling.

    "},{"location":"tutorials/#how-does-planning-simulation-work","title":"How does planning simulation work?","text":"
    1. Generate a path to the goal destination
    2. Control the car along the generated path
    3. Detect and avoid any humans or other vehicles on the way to the goal destination
    "},{"location":"tutorials/#rosbag-replay-simulation","title":"Rosbag replay simulation","text":"

    Rosbag replay simulation uses prerecorded rosbag data to test the following aspects of the Localization and Perception components:

    • Localization: Estimation of the vehicle's location on the map by matching sensor and vehicle feedback data to the map.
    • Perception: Using sensor data to detect, track, and predict dynamic objects such as surrounding cars, pedestrians, and other objects.

    By repeatedly playing back the data, this simulation type can also be used for endurance testing.

    "},{"location":"tutorials/#digital-twin-simulation","title":"Digital twin simulation","text":"

    Digital twin simulation is a simulation type that is able to produce realistic data and simulate almost the entire system. It is also commonly referred to as end-to-end simulation.

    "},{"location":"tutorials/#evaluation-tutorials","title":"Evaluation Tutorials","text":""},{"location":"tutorials/#components-evaluation","title":"Components Evaluation","text":"

    Components evaluation tutorials provide a way to evaluate the performance of Autoware's components in a controlled environment.

    "},{"location":"tutorials/#localization-evaluation","title":"Localization Evaluation","text":"

    The Localization Evaluation tutorial provides a way to evaluate the performance of the Localization component.

    "},{"location":"tutorials/#urban-environment-evaluation","title":"Urban Environment Evaluation","text":"

    The Urban Environment Evaluation tutorial provides a way to evaluate the performance of the Localization component in an urban environment. It uses the Istanbul Open Dataset for testing; the test data, map, and test steps are included, and the test results are also shared.

    "},{"location":"tutorials/ad-hoc-simulation/","title":"Ad hoc simulation","text":""},{"location":"tutorials/ad-hoc-simulation/#ad-hoc-simulation","title":"Ad hoc simulation","text":"

    Warning

    Under Construction

    "},{"location":"tutorials/ad-hoc-simulation/planning-simulation/","title":"Planning simulation","text":""},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#planning-simulation","title":"Planning simulation","text":""},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#preparation","title":"Preparation","text":"

    Download and unpack a sample map.

    • You can also download the map manually.
    gdown -O ~/autoware_map/ 'https://docs.google.com/uc?export=download&id=1499_nsbUbIeturZaDj7jhUownh5fvXHd'\nunzip -d ~/autoware_map ~/autoware_map/sample-map-planning.zip\n

    Note

    Sample map: Copyright 2020 TIER IV, Inc.

    Check if you have ~/autoware_data folder and files in it.

    $ cd ~/autoware_data\n$ ls -C -w 30\nimage_projection_based_fusion\nlidar_apollo_instance_segmentation\nlidar_centerpoint\ntensorrt_yolo\ntensorrt_yolox\ntraffic_light_classifier\ntraffic_light_fine_detector\ntraffic_light_ssd_fine_detector\nyabloc_pose_initializer\n

    If not, please follow Manual downloading of artifacts.

    Change the maximum velocity if needed; it is 15 km/h by default.
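
    A minimal sketch of where this is typically changed is shown below; the file path and parameter name are assumptions based on the default autoware_launch layout, so verify them in your checkout.

    ```bash
    # max_vel is specified in m/s; 4.17 m/s corresponds to the default 15 km/h.
    # Path and parameter name are assumptions; adjust to your workspace.
    grep -n "max_vel" ~/autoware/src/launcher/autoware_launch/autoware_launch/config/planning/scenario_planning/common/common.param.yaml
    ```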

    "},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#basic-simulations","title":"Basic simulations","text":"

    Using Autoware Launch GUI

    If you prefer a graphical user interface (GUI) over the command line for launching and managing your simulations, refer to the Using Autoware Launch GUI section at the end of this document for a step-by-step guide.

    "},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#lane-driving-scenario","title":"Lane driving scenario","text":""},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#1-launch-autoware","title":"1. Launch Autoware","text":"
    source ~/autoware/install/setup.bash\nros2 launch autoware_launch planning_simulator.launch.xml map_path:=$HOME/autoware_map/sample-map-planning vehicle_model:=sample_vehicle sensor_model:=sample_sensor_kit\n

    Warning

    Note that you cannot use ~ instead of $HOME here.

    If ~ is used, the map will fail to load.

    "},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#2-set-an-initial-pose-for-the-ego-vehicle","title":"2. Set an initial pose for the ego vehicle","text":"

    a) Click the 2D Pose estimate button in the toolbar, or hit the P key.

    b) In the 3D View pane, click and hold the left-mouse button, and then drag to set the direction for the initial pose. An image representing the vehicle should now be displayed.

    Warning

    Remember to set the initial pose of the car in the same direction as the lane.

    To confirm the direction of the lane, check the arrowheads displayed on the map.

    "},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#3-set-a-goal-pose-for-the-ego-vehicle","title":"3. Set a goal pose for the ego vehicle","text":"

    a) Click the 2D Goal Pose button in the toolbar, or hit the G key.

    b) In the 3D View pane, click and hold the left-mouse button, and then drag to set the direction for the goal pose. If done correctly, you will see a planned path from initial pose to goal pose.

    "},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#4-start-the-ego-vehicle","title":"4. Start the ego vehicle","text":"

    Now you can start the ego vehicle driving by clicking the AUTO button on OperationMode in AutowareStatePanel. Alternatively, you can start the vehicle manually by running the following command:

    source ~/autoware/install/setup.bash\nros2 service call /api/operation_mode/change_to_autonomous autoware_adapi_v1_msgs/srv/ChangeOperationMode {}\n

    After that, you can see the AUTONOMOUS sign on OperationMode, and the AUTO button is grayed out.

    "},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#parking-scenario","title":"Parking scenario","text":"
    1. Set an initial pose and a goal pose, and engage the ego vehicle.

    2. When the vehicle approaches the goal, it will switch from lane driving mode to parking mode.

    3. After that, the vehicle will reverse into the destination parking spot.

    "},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#pull-out-and-pull-over-scenario","title":"Pull out and pull over scenario","text":"
    1. In a pull out scenario, set the ego vehicle at the road shoulder.

    2. Set a goal and then engage the ego vehicle.

    3. In a pull over scenario, similarly set the ego vehicle in a lane and set a goal on the road shoulder.

    "},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#lane-change-scenario","title":"Lane change scenario","text":"
    1. Download and unpack Nishishinjuku map.

      gdown -O ~/autoware_map/ 'https://github.com/tier4/AWSIM/releases/download/v1.1.0/nishishinjuku_autoware_map.zip'\nunzip -d ~/autoware_map ~/autoware_map/nishishinjuku_autoware_map.zip\n
    2. Launch autoware with Nishishinjuku map with following command:

      source ~/autoware/install/setup.bash\nros2 launch autoware_launch planning_simulator.launch.xml map_path:=$HOME/autoware_map/nishishinjuku_autoware_map vehicle_model:=sample_vehicle sensor_model:=sample_sensor_kit\n

    3. Set an initial pose and a goal pose in adjacent lanes.

    4. Engage the ego vehicle. It will make a lane change along the planned path.

    "},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#avoidance-scenario","title":"Avoidance scenario","text":"
    1. Set an initial pose and a goal pose in the same lane. A path will be planned.

    2. Set a \"2D Dummy Bus\" on the roadside. A new path will be planned.

    3. Engage the ego vehicle. It will avoid the obstacle along the newly planned path.

    "},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#advanced-simulations","title":"Advanced Simulations","text":""},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#placing-dummy-objects","title":"Placing dummy objects","text":"
    1. Click the 2D Dummy Car or 2D Dummy Pedestrian button in the toolbar.
    2. Set the pose of the dummy object by clicking and dragging on the map.
    3. Set the velocity of the object in Tool Properties -> 2D Dummy Car/Pedestrian panel.

      Note

      Changes to the velocity parameter will only affect objects placed after the parameter is changed.

    4. Delete any dummy objects placed in the view by clicking the Delete All Objects button in the toolbar.

    5. Click the Interactive button in the toolbar to make the dummy object interactive.

    6. To add an interactive dummy object, press SHIFT and right-click.

    7. To delete an interactive dummy object, press ALT and right-click.
    8. To move an interactive dummy object, hold the right mouse button and drag and drop the object.

    "},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#traffic-light-recognition-simulation","title":"Traffic light recognition simulation","text":"

    By default, traffic lights on the map are all treated as if they are set to green. As a result, when a path is created that passes through an intersection with a traffic light, the ego vehicle will drive through the intersection without stopping.

    The following steps explain how to set and reset traffic lights in order to test how the Planning component will respond.

    "},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#set-traffic-light","title":"Set traffic light","text":"
    1. Go to Panels -> Add new panel, select TrafficLightPublishPanel, and then press OK.

    2. In TrafficLightPublishPanel, set the ID and color of the traffic light.

    3. Click the SET button.

    4. Finally, click the PUBLISH button to send the traffic light status to the simulator. Any planned path that goes past the selected traffic light will then change accordingly.

    By default, RViz should display the ID of each traffic light on the map. You can take a closer look at the IDs by zooming in on the region or by changing the view type.

    In case the IDs are not displayed, try the following troubleshooting steps:

    a) In the Displays panel, find the traffic_light_id topic by toggling the triangle icons next to Map > Lanelet2VectorMap > Namespaces.

    b) Check the traffic_light_id checkbox.

    c) Reload the topic by clicking the Map checkbox twice.

    "},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#updatereset-traffic-light","title":"Update/Reset traffic light","text":"

    You can update the color of the traffic light by selecting the next color (GREEN in the image) and clicking the SET button. In the image, the traffic light in front of the ego vehicle changed from RED to GREEN and the vehicle restarted.

    To remove a traffic light from TrafficLightPublishPanel, click the RESET button.

    Reference video tutorials

    "},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#using-autoware-launch-gui","title":"Using Autoware Launch GUI","text":"

    This section provides a step-by-step guide on using the Autoware Launch GUI for planning simulations, offering an alternative to the command-line instructions provided in the Basic simulations section.

    "},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#getting-started-with-autoware-launch-gui","title":"Getting Started with Autoware Launch GUI","text":"
    1. Installation: Ensure you have installed the Autoware Launch GUI. Installation instructions.

    2. Launching the GUI: Open the Autoware Launch GUI from your applications menu.

    "},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#launching-a-planning-simulation","title":"Launching a Planning Simulation","text":""},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#lane-driving-scenario_1","title":"Lane Driving Scenario","text":"
    1. Set Autoware Path: In the GUI, set the path to your Autoware installation.

    2. Select Launch File: Choose planning_simulator.launch.xml for the lane driving scenario.

    3. Customize Parameters: Adjust parameters such as map_path, vehicle_model, and sensor_model as needed.

    4. Start Simulation: Click the launch button to start the simulation.

    5. Any Scenario: From here, you can follow the instructions for any of the scenarios below:

    • Lane driving scenario: Lane Driving Scenario
    • Parking scenario: Parking scenario
    • Lane change scenario: Lane change scenario
    • Avoidance scenario: Avoidance scenario
    • Advanced Simulations: Advanced Simulations
    "},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#monitoring-and-managing-the-simulation","title":"Monitoring and Managing the Simulation","text":"
    • Real-Time Monitoring: Use the GUI to monitor CPU/Memory usage and Autoware logs in real-time.
    • Profile Management: Save your simulation profiles for quick access in future simulations.
    • Adjusting Parameters: Easily modify simulation parameters on-the-fly through the GUI.
    "},{"location":"tutorials/ad-hoc-simulation/planning-simulation/#want-to-try-autoware-with-your-custom-map","title":"Want to Try Autoware with Your Custom Map?","text":"

    The above content describes the process for conducting some operations in the planning simulator using a sample map. If you are interested in running Autoware with maps of your own environment, please visit the How to Create Vector Map section for guidance.

    "},{"location":"tutorials/ad-hoc-simulation/rosbag-replay-simulation/","title":"Rosbag replay simulation","text":""},{"location":"tutorials/ad-hoc-simulation/rosbag-replay-simulation/#rosbag-replay-simulation","title":"Rosbag replay simulation","text":""},{"location":"tutorials/ad-hoc-simulation/rosbag-replay-simulation/#steps","title":"Steps","text":"
    1. Download and unpack a sample map.

      • You can also download the map manually.
      gdown -O ~/autoware_map/ 'https://docs.google.com/uc?export=download&id=1A-8BvYRX3DhSzkAnOcGWFw5T30xTlwZI'\nunzip -d ~/autoware_map/ ~/autoware_map/sample-map-rosbag.zip\n
    2. Download the sample rosbag files.

      • You can also download the rosbag files manually.
      gdown -O ~/autoware_map/ 'https://docs.google.com/uc?export=download&id=1sU5wbxlXAfHIksuHjP3PyI2UVED8lZkP'\nunzip -d ~/autoware_map/ ~/autoware_map/sample-rosbag.zip\n
    3. Check if you have ~/autoware_data folder and files in it.

      $ cd ~/autoware_data\n$ ls -C -w 30\nimage_projection_based_fusion\nlidar_apollo_instance_segmentation\nlidar_centerpoint\ntensorrt_yolo\ntensorrt_yolox\ntraffic_light_classifier\ntraffic_light_fine_detector\ntraffic_light_ssd_fine_detector\nyabloc_pose_initializer\n

      If not, please follow Manual downloading of artifacts.

    "},{"location":"tutorials/ad-hoc-simulation/rosbag-replay-simulation/#note","title":"Note","text":"
    • Sample map and rosbag: Copyright 2020 TIER IV, Inc.
    • Due to privacy concerns, the rosbag does not contain image data, which has the following consequences:
      • Traffic light recognition functionality cannot be tested with this sample rosbag.
      • Object detection accuracy is decreased.
    "},{"location":"tutorials/ad-hoc-simulation/rosbag-replay-simulation/#how-to-run-a-rosbag-replay-simulation","title":"How to run a rosbag replay simulation","text":"

    Using Autoware Launch GUI

    If you prefer a graphical user interface (GUI) over the command line for launching and managing your simulations, refer to the Using Autoware Launch GUI section at the end of this document for a step-by-step guide.

    1. Launch Autoware.

      source ~/autoware/install/setup.bash\nros2 launch autoware_launch logging_simulator.launch.xml map_path:=$HOME/autoware_map/sample-map-rosbag vehicle_model:=sample_vehicle sensor_model:=sample_sensor_kit\n

      Note that you cannot use ~ instead of $HOME here.

      ⚠️ You might encounter error and warning messages in the terminal before playing the rosbag. This is normal behavior. These should cease once the rosbag is played and proper initialization takes place.

    2. Play the sample rosbag file.

      source ~/autoware/install/setup.bash\nros2 bag play ~/autoware_map/sample-rosbag/ -r 0.2 -s sqlite3\n

      ⚠️ Due to the discrepancy between the timestamp in the rosbag and the current system timestamp, Autoware may generate warning messages in the terminal alerting to this mismatch. This is normal behavior.

    3. To focus the view on the ego vehicle, change the Target Frame in the RViz Views panel from viewer to base_link.

    4. To switch the view to Third Person Follower etc, change the Type in the RViz Views panel.

    Reference video tutorials

    "},{"location":"tutorials/ad-hoc-simulation/rosbag-replay-simulation/#using-autoware-launch-gui","title":"Using Autoware Launch GUI","text":"

    This section provides a step-by-step guide for using the Autoware Launch GUI to launch and manage your rosbag replay simulation, offering an alternative to the command-line instructions provided in the previous section.

    "},{"location":"tutorials/ad-hoc-simulation/rosbag-replay-simulation/#getting-started-with-autoware-launch-gui","title":"Getting Started with Autoware Launch GUI","text":"
    1. Installation: Ensure you have installed the Autoware Launch GUI. Installation instructions.

    2. Launching the GUI: Open the Autoware Launch GUI from your applications menu.

    "},{"location":"tutorials/ad-hoc-simulation/rosbag-replay-simulation/#launching-a-logging-simulation","title":"Launching a Logging Simulation","text":"
    1. Set Autoware Path: In the GUI, set the path to your Autoware installation.
    2. Select Launch File: Choose logging_simulator.launch.xml for the rosbag replay simulation.
    3. Customize Parameters: Adjust parameters such as map_path, vehicle_model, and sensor_model as needed.

    4. Start Simulation: Click the launch button to start the simulation and have access to all the logs.

    5. Play Rosbag: Move to the Rosbag tab and select the rosbag file you wish to play.

    6. Adjust Playback Speed: Adjust the playback speed as needed and any other parameters you wish to customize.

    7. Start Playback: Click the play button to start the rosbag playback and gain access to settings such as pause/play, stop, and the speed slider.

    8. View Simulation: Move to the RViz window to view the simulation.

    9. To focus the view on the ego vehicle, change the Target Frame in the RViz Views panel from viewer to base_link.

    10. To switch the view to Third Person Follower etc, change the Type in the RViz Views panel.

    "},{"location":"tutorials/ad-hoc-simulation/digital-twin-simulation/MORAI_Sim-tutorial/","title":"MORAI Sim: Drive","text":""},{"location":"tutorials/ad-hoc-simulation/digital-twin-simulation/MORAI_Sim-tutorial/#morai-sim-drive","title":"MORAI Sim: Drive","text":"

    Note

    Any kind of for-profit activity with the trial version of the MORAI SIM:Drive is strictly prohibited.

    "},{"location":"tutorials/ad-hoc-simulation/digital-twin-simulation/MORAI_Sim-tutorial/#hardware-requirements","title":"Hardware requirements","text":"Minimum PC Specs OS Windows 10, Ubuntu 20.04, Ubuntu 18.04, Ubuntu 16.04 CPU Intel i5-9600KF or AMD Ryzen 5 3500X RAM DDR4 16GB GPU RTX2060 Super Required PC Specs OS Windows 10, Ubuntu 20.04, Ubuntu 18.04, Ubuntu 16.04 CPU Intel i9-9900K or AMD Ryzen 7 3700X (or higher) RAM DDR4 64GB (or higher) GPU RTX2080Ti or higher"},{"location":"tutorials/ad-hoc-simulation/digital-twin-simulation/MORAI_Sim-tutorial/#application-and-download","title":"Application and Download","text":"

    For AWF developers only, a trial license valid for 3 months can be issued. Download the application form and send it to Hyeongseok Jeon.

    After the trial license is issued, you can log in to MORAI Sim:Drive via the Launchers (Windows/Ubuntu).

    CAUTION: Do not use the Launchers in the following manual

    "},{"location":"tutorials/ad-hoc-simulation/digital-twin-simulation/MORAI_Sim-tutorial/#technical-documents","title":"Technical Documents","text":"

    As of Oct. 2022, the simulator version is ver.22.R3, but the English manual is under construction.

    Be aware that the following manuals are for ver.22.R2

    • MORAI Sim:Drive Manual
    • ITRI BUS Odd tutorial
    • Tutorial for rosbag replay with Tacoma Airport
    "},{"location":"tutorials/ad-hoc-simulation/digital-twin-simulation/MORAI_Sim-tutorial/#technical-support","title":"Technical Support","text":"

    Hyeongseok Jeon will give full technical support

    • hsjeon@morai.ai
    • Hyeongseok Jeon#2355 in Discord
    "},{"location":"tutorials/ad-hoc-simulation/digital-twin-simulation/awsim-tutorial/","title":"AWSIM simulator","text":""},{"location":"tutorials/ad-hoc-simulation/digital-twin-simulation/awsim-tutorial/#awsim-simulator","title":"AWSIM simulator","text":"

    AWSIM is a simulator for Autoware development and testing, initially developed by TIER IV and still actively maintained.

    AWSIM Labs is a fork of AWSIM, developed under the Autoware Foundation, providing additional features and lighter resource usage.

    "},{"location":"tutorials/ad-hoc-simulation/digital-twin-simulation/awsim-tutorial/#feature-differences-from-the-awsim-and-awsim-labs","title":"Feature differences from the AWSIM and AWSIM Labs","text":""},{"location":"tutorials/ad-hoc-simulation/digital-twin-simulation/awsim-tutorial/#simulator-features","title":"Simulator Features","text":"Simulator Features AWSIM 1.2.3 AWSIM Labs 1.4.2 Rendering Pipeline HDRP URP Resource usage Heavy \ud83d\udc22 Light \ud83d\udc07 Can toggle vehicle keyboard control from GUI \u2705 \u2705 Radar sensor support \u2705 \u2705 Lidar sensor multiple returns support \u2705 \u274c Raw radar output \u2705 \u274c Lidar snow energy loss feature \u2705 \u274c Physically based vehicle dynamics simulation (VPP integration) \u274c \u2705 Can reset vehicle position on runtime \u274c \u2705 Select maps and vehicles at startup \u274c \u2705 Scenario simulator integrated into the same binary \u274c \u2705 Multi-lidars are enabled by default \u274c \u2705 Set vehicle pose and spawn objects from RViz2 \u274c \u2705 Visualize multiple cameras and move them dynamically \u274c \u2705 Turn sensors on/off during runtime \u274c \u2705 Graphics quality settings (Low/Medium/Ultra) \u274c \u2705 Bird\u2019s eye view camera option \u274c \u2705 Works with the latest Autoware main branch \u274c \u2705"},{"location":"tutorials/ad-hoc-simulation/digital-twin-simulation/awsim-tutorial/#development-features","title":"Development Features","text":"Development Features AWSIM AWSIM Labs Unity Version Unity 2021.1.7f1 Unity LTS 2022.3.36f1 CI for build \u2705 \u274c (disabled temporarily) Various regression unit tests \u2705 \u274c CI for documentation generation within PR \u274c \u2705 Main branch is protected with linear history \u274c \u2705 Pre-commit for code formatting \u274c \u2705 Documentation page shows PR branches before merge \u274c \u2705"},{"location":"tutorials/ad-hoc-simulation/digital-twin-simulation/awsim-tutorial/#awsim-labs","title":"AWSIM Labs","text":"

    AWSIM Labs supports Unity LTS 2022.3.36f1 and uses the Universal Render Pipeline (URP), optimized for lighter resource usage. It introduces several enhancements such as the ability to reset vehicle positions at runtime, support for multiple scenes and vehicle setups on runtime, and multi-lidars enabled by default.

    To get started with AWSIM Labs, please follow the instructions.

    "},{"location":"tutorials/ad-hoc-simulation/digital-twin-simulation/awsim-tutorial/#awsim","title":"AWSIM","text":"

    AWSIM runs on Unity 2021.1.7f1 using the High Definition Render Pipeline (HDRP), which requires more system resources.

    To get started with AWSIM, please follow the instructions provided by TIER IV.

    "},{"location":"tutorials/ad-hoc-simulation/digital-twin-simulation/carla-tutorial/","title":"CARLA simulator","text":""},{"location":"tutorials/ad-hoc-simulation/digital-twin-simulation/carla-tutorial/#carla-simulator","title":"CARLA simulator","text":"

    CARLA is a well-known open-source simulator for autonomous driving research. There is currently no official support for Autoware Universe, but several community projects provide it. This document lists those projects for anyone who wants to run Autoware with CARLA. You can report issues to each project if you encounter any problems.

    "},{"location":"tutorials/ad-hoc-simulation/digital-twin-simulation/carla-tutorial/#project-lists-in-alphabetical-order","title":"Project lists (In alphabetical order)","text":""},{"location":"tutorials/ad-hoc-simulation/digital-twin-simulation/carla-tutorial/#autoware_carla_interface","title":"autoware_carla_interface","text":"

    An Autoware ROS package that enables communication between Autoware and the CARLA simulator for autonomous driving simulation. It is integrated into autoware.universe and actively maintained to stay compatible with the latest Autoware updates.

    • Package Link and Tutorial: autoware_carla_interface.
    "},{"location":"tutorials/ad-hoc-simulation/digital-twin-simulation/carla-tutorial/#carla_autoware_bridge","title":"carla_autoware_bridge","text":"

    An additional package on top of carla_ros_bridge that connects the CARLA simulator to the Autoware Universe software.

    • Project Link: carla_autoware_bridge
    • Tutorial: https://github.com/Robotics010/carla_autoware_bridge/blob/master/getting-started.md
    "},{"location":"tutorials/ad-hoc-simulation/digital-twin-simulation/carla-tutorial/#open_planner","title":"open_planner","text":"

    An integrated open-source planner and related tools for autonomous navigation of autonomous vehicles and mobile robots.

    • Project Link: open_planner
    • Tutorial: https://github.com/ZATiTech/open_planner/blob/humble/op_carla_bridge/README.md
    "},{"location":"tutorials/ad-hoc-simulation/digital-twin-simulation/carla-tutorial/#zenoh_carla_bridge","title":"zenoh_carla_bridge","text":"

    The project is mainly for controlling multiple vehicles in CARLA. It uses Zenoh to bridge Autoware and CARLA and can distinguish messages belonging to different vehicles. Feel free to ask questions and report issues to autoware_carla_launch.

    • Project Link:
      • autoware_carla_launch: The integrated environment to run the bridge and Autoware easily.
      • zenoh_carla_bridge: The bridge implementation.
    • Tutorial:
      • The documentation of autoware_carla_launch: The official documentation, including installation and several usage scenarios.
      • Running Multiple Autoware-Powered Vehicles in Carla using Zenoh: The introduction on Autoware Tech Blog.
    "},{"location":"tutorials/components_evaluation/","title":"Localization Evaluation","text":""},{"location":"tutorials/components_evaluation/#localization-evaluation","title":"Localization Evaluation","text":"

    Warning

    Under Construction

    "},{"location":"tutorials/components_evaluation/localization_evaluation/urban-environment-evaluation/","title":"Urban environment evaluation","text":""},{"location":"tutorials/components_evaluation/localization_evaluation/urban-environment-evaluation/#introduction","title":"Introduction","text":""},{"location":"tutorials/components_evaluation/localization_evaluation/urban-environment-evaluation/#related-links","title":"Related Links","text":"

    Data Collection Documentation --> https://autowarefoundation.github.io/autoware-documentation/main/datasets/#istanbul-open-dataset

    "},{"location":"tutorials/components_evaluation/localization_evaluation/urban-environment-evaluation/#purpose","title":"Purpose","text":"

    The purpose of this test is to evaluate the performance of the current NDT-based default Autoware localization system in an urban environment. Our expectation is to identify the deficiencies of the localization and understand which situations need improvement.

    "},{"location":"tutorials/components_evaluation/localization_evaluation/urban-environment-evaluation/#test-environment","title":"Test Environment","text":"

    The test data was collected in Istanbul. It also includes some scenarios that may be challenging for localization, such as tunnels and bridges. You can find the entire route marked in the image below.

    "},{"location":"tutorials/components_evaluation/localization_evaluation/urban-environment-evaluation/#test-dataset-map","title":"Test Dataset & Map","text":"

    The test and mapping datasets are the same; the dataset was collected with the portable mapping kit used for general mapping purposes.

    The dataset contains data from the following sensors:

    • 1 x Applanix POS LVX GNSS/INS System
    • 1 x Hesai Pandar XT32 LiDAR

    You can find the data collected for testing and mapping in this Documentation.

    NOTE! Since there was no velocity source coming from the vehicle during these tests, the twist message coming from GNSS/INS was given to ekf_localizer as the linear and angular velocity source. In order to understand whether this increases the error when the GNSS/INS error grows in the tunnel, and how it affects the system, localization in the tunnel was also tested by giving only the pose from NDT, without giving this velocity to ekf_localizer. The video of this test is here. As seen in the video, when the velocity is not given, localization in the tunnel deteriorates more quickly. It is also predicted that if the IMU twist message (/localization/twist_estimator/twist_with_covariance) combined with the linear velocity from the vehicle were given instead of the GNSS/INS twist message, the performance in the tunnel would improve. However, this test cannot be done with the current data.

    "},{"location":"tutorials/components_evaluation/localization_evaluation/urban-environment-evaluation/#expected-tests","title":"Expected Tests","text":"

    1.) First, the aim is to detect the points where localization breaks down completely and to roughly check localization behavior.

    2.) Second, by extracting metrics, the aim is to see concretely how much the localization error increases and what the overall level of performance is.

    "},{"location":"tutorials/components_evaluation/localization_evaluation/urban-environment-evaluation/#how-to-reproduce-tests","title":"How to Reproduce Tests","text":""},{"location":"tutorials/components_evaluation/localization_evaluation/urban-environment-evaluation/#test-with-raw-data","title":"Test With Raw Data","text":"

    If you test with raw data, follow these Test With Raw Data instructions. Since all preprocessing operations on the input data must be repeated in this test, you need to switch to the test branches in the sensor kit and individual params repositories. You can perform these steps by following the instructions below.

    "},{"location":"tutorials/components_evaluation/localization_evaluation/urban-environment-evaluation/#installation","title":"Installation","text":"

    1.) Download and unpack the test map files.

    • You can also download the map manually.
    mkdir ~/autoware_ista_map\ngdown --id 1WPWmFCjV7eQee4kyBpmGNlX7awerCPxc -O ~/autoware_ista_map/\n

    NOTE! You also need to add a lanelet2_map.osm file to the autoware_ista_map folder. Since no lanelet file has been created for this map yet, you can use any lanelet2_map.osm file by placing it in this folder.

    2.) Download the test rosbag files.

    • You can also download the rosbag file manually.
    mkdir ~/autoware_ista_data\ngdown --id 1uta5Xr_ftV4jERxPNVqooDvWerK0dn89 -O ~/autoware_ista_data/\n
    "},{"location":"tutorials/components_evaluation/localization_evaluation/urban-environment-evaluation/#prepare-autoware-to-test","title":"Prepare Autoware to Test","text":"

    1.) Checkout autoware_launch:

    cd ~/autoware/src/launcher/autoware_launch/\ngit remote add autoware_launch https://github.com/meliketanrikulu/autoware_launch.git\ngit remote update\ngit checkout evaluate_localization_issue_7652\n

    2.) Checkout individual_params:

    cd ~/autoware/src/param/autoware_individual_params/\ngit remote add autoware_individual_params https://github.com/meliketanrikulu/autoware_individual_params.git\ngit remote update\ngit checkout evaluate_localization_issue_7652\n

    3.) Checkout sample_sensor_kit_launch:

    cd ~/autoware/src/sensor_kit/sample_sensor_kit_launch/\ngit remote add sample_sensor_kit_launch https://github.com/meliketanrikulu/sample_sensor_kit_launch.git\ngit remote update\ngit checkout evaluate_localization_issue_7652\n

    4.) Compile updated packages:

    cd ~/autoware\ncolcon build --symlink-install --packages-select sample_sensor_kit_launch autoware_individual_params autoware_launch common_sensor_launch\n
    "},{"location":"tutorials/components_evaluation/localization_evaluation/urban-environment-evaluation/#launch-autoware","title":"Launch Autoware","text":"
    source ~/autoware/install/setup.bash\nros2 launch autoware_launch logging_simulator.launch.xml map_path:=~/autoware_ista_map/ vehicle_model:=sample_vehicle sensor_model:=sample_sensor_kit\n
    "},{"location":"tutorials/components_evaluation/localization_evaluation/urban-environment-evaluation/#run-rosbag","title":"Run Rosbag","text":"
    source ~/autoware/install/setup.bash\nros2 bag play ~/autoware_ista_data/rosbag2_2024_09_11-17_53_54_0.db3\n
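Optionally, you can inspect the recorded topics and duration of the bag before playing it back:

ros2 bag info ~/autoware_ista_data/rosbag2_2024_09_11-17_53_54_0.db3\n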
    "},{"location":"tutorials/components_evaluation/localization_evaluation/urban-environment-evaluation/#localization-test-only","title":"Localization Test Only","text":"

If you only want to evaluate localization performance, follow the Localization Test Only instructions. A second test bag file and a separate launch file have been created specifically for this test. You can perform this test by following the instructions below.

    "},{"location":"tutorials/components_evaluation/localization_evaluation/urban-environment-evaluation/#installation_1","title":"Installation","text":"

1.) Download and unpack the test map files.

    • You can also download the map manually.
    mkdir ~/autoware_ista_map\ngdown --id 1WPWmFCjV7eQee4kyBpmGNlX7awerCPxc -O ~/autoware_ista_map/\n

NOTE! You also need to add a lanelet2_map.osm file to the autoware_ista_map folder. Since no lanelet file has been created for this map yet, you can use any lanelet2_map.osm file by placing it in this folder.

    2.) Download the test rosbag files.

    • You can also download the localization rosbag file manually.
    mkdir ~/autoware_ista_data\ngdown --id 1yEB5j74gPLLbkkf87cuCxUgHXTkgSZbn -O ~/autoware_ista_data/\n
    "},{"location":"tutorials/components_evaluation/localization_evaluation/urban-environment-evaluation/#prepare-autoware-to-test_1","title":"Prepare Autoware to Test","text":"

    1.) Checkout autoware_launch:

    cd ~/autoware/src/launcher/autoware_launch/\ngit remote add autoware_launch https://github.com/meliketanrikulu/autoware_launch.git\ngit remote update\ngit checkout evaluate_localization_issue_7652\n

    2.) Compile updated packages:

    cd ~/autoware\ncolcon build --symlink-install --packages-select autoware_launch\n
    "},{"location":"tutorials/components_evaluation/localization_evaluation/urban-environment-evaluation/#launch-autoware_1","title":"Launch Autoware","text":"
    source ~/autoware/install/setup.bash\nros2 launch autoware_launch urban_environment_localization_test.launch.xml map_path:=~/autoware_ista_map/\n
    "},{"location":"tutorials/components_evaluation/localization_evaluation/urban-environment-evaluation/#run-rosbag_1","title":"Run Rosbag","text":"
    source ~/autoware/install/setup.bash\nros2 bag play ~/autoware_ista_data/rosbag2_2024_09_12-14_59_58_0.db3\n
    "},{"location":"tutorials/components_evaluation/localization_evaluation/urban-environment-evaluation/#test-results","title":"Test Results","text":""},{"location":"tutorials/components_evaluation/localization_evaluation/urban-environment-evaluation/#test-1-simple-test-of-localization","title":"Test 1: Simple Test of Localization","text":"

Simply put, we tested how localization behaves in an urban environment and recorded the results on video. Here is the test video.

We can see from the video that there is a localization error along the longitudinal axis throughout the Eurasia tunnel. As expected, NDT-based localization does not work properly here. However, since the NDT score cannot detect the distortion, localization is not completely lost until the end of the tunnel. Localization breaks completely at the tunnel exit, and I re-initialized it after the vehicle exited the tunnel.

From this point on, we move to the bridge scenario. This is one of the bridges crossing the Bosphorus and was the longest bridge on our route. Here too, we expected that NDT-based localization might be disrupted, but I did not observe any disruption.

After this part there is another tunnel (Kagithane - Bomonti), and localization behaves similarly to the Eurasia tunnel. However, at the exit of this tunnel, localization recovers on its own without the need for re-initialization.

    "},{"location":"tutorials/components_evaluation/localization_evaluation/urban-environment-evaluation/#test-1-summary","title":"Test 1 Summary","text":"

In summary, the places with visible deterioration along our test route were the tunnels, which was expected. Apart from this, we anticipated possible problems on the bridges, but I could not observe any deterioration in localization there. Of course, it is worth remembering that these tests were conducted with a single dataset.

    "},{"location":"tutorials/components_evaluation/localization_evaluation/urban-environment-evaluation/#test-2-comparing-results-with-ground-truth","title":"Test 2: Comparing Results with Ground Truth","text":"

1.) How did the NDT score change?

Along the route, the NDT score dropped below the expected values only in a small section of the longest tunnel, the Eurasia Tunnel. However, there is visible deterioration in localization in both tunnels on the route. This test therefore showed that these problems cannot be detected with the NDT score alone. You can see how the NDT score changes along the route in the image below. While looking at the figure, keep in mind that the NDT score threshold is taken as 2.3.

    NDT Score (Nearest Voxel Transformation Likelihood) Threshold = 2.3
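If you want to watch how this score evolves live while the bag is playing, one way (assuming the default Autoware topic layout; the exact topic name may differ between versions) is to echo the NVTL topic published by the NDT scan matcher:

ros2 topic echo /localization/pose_estimator/nearest_voxel_transformation_likelihood\n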

2.) Comparison with Ground Truth

Ground Truth: In these tests, the post-processed GNSS/INS data was used as ground truth. Since the accuracy of this ground truth data also decreases in the tunnel environment, it is necessary to evaluate these regions while taking the ground truth error into account.

During these tests, I compared the NDT and EKF poses with the ground truth and presented the results. I am sharing the test results below as PNG images. However, if you want to examine this data in more detail, I have created an executable file so you can visualize it and take a closer look. You can access this executable file from here. Currently there is only a version that works on Ubuntu at this link, but I plan to add a Windows version as well. You need to follow these steps:

    cd /your/path/show_evaluation_ubuntu\n./pose_main\n
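If the downloaded binary is not marked as executable, you may first need to make it executable:

cd /your/path/show_evaluation_ubuntu\nchmod +x pose_main\n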

You can also update the configuration by changing /configs/evaluation_pose.yaml.

2D Trajectory:

2D Error:

    3D Trajectory:

    3D Error:

    Lateral Error:

    Longitudinal Error:

    Roll-Pitch-Yaw:

    Roll-Pitch-Yaw Error:

    X-Y-Z:

    X-Y-Z Error:

    "},{"location":"tutorials/scenario-simulation/","title":"Scenario simulation","text":""},{"location":"tutorials/scenario-simulation/#scenario-simulation","title":"Scenario simulation","text":"

    Warning

    Under Construction

    "},{"location":"tutorials/scenario-simulation/planning-simulation/installation/","title":"Installation","text":""},{"location":"tutorials/scenario-simulation/planning-simulation/installation/#installation","title":"Installation","text":"

This document contains step-by-step instructions on how to build AWF Autoware Core/Universe with scenario_simulator_v2.

    "},{"location":"tutorials/scenario-simulation/planning-simulation/installation/#prerequisites","title":"Prerequisites","text":"
    1. Autoware has been built and installed
    "},{"location":"tutorials/scenario-simulation/planning-simulation/installation/#how-to-build","title":"How to build","text":"
    1. Navigate to the Autoware workspace:

      cd autoware\n
    2. Import Simulator dependencies:

      vcs import src < simulator.repos\n
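If the vcs command is not available, it is provided by vcstool, which can typically be installed with apt:

sudo apt install python3-vcstool\n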
    3. Install dependent ROS packages:

      source /opt/ros/humble/setup.bash\nrosdep install -y --from-paths src --ignore-src --rosdistro $ROS_DISTRO\n
    4. Build the workspace:

      colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release\n
    "},{"location":"tutorials/scenario-simulation/planning-simulation/random-test-simulation/","title":"Random test simulation","text":""},{"location":"tutorials/scenario-simulation/planning-simulation/random-test-simulation/#random-test-simulation","title":"Random test simulation","text":"

    Note

    Running the Scenario Simulator requires some additional steps on top of building and installing Autoware, so make sure that Scenario Simulator installation has been completed first before proceeding.

    "},{"location":"tutorials/scenario-simulation/planning-simulation/random-test-simulation/#running-steps","title":"Running steps","text":"
    1. Move to the workspace directory where Autoware and the Scenario Simulator have been built.

    2. Source the workspace setup script:

      source install/setup.bash\n
    3. Run the simulation:

      ros2 launch random_test_runner random_test.launch.py \\\narchitecture_type:=awf/universe/20240605 \\\nsensor_model:=sample_sensor_kit \\\nvehicle_model:=sample_vehicle\n

    For more information about supported parameters, refer to the random_test_runner documentation.

    "},{"location":"tutorials/scenario-simulation/planning-simulation/scenario-test-simulation/","title":"Scenario test simulation","text":""},{"location":"tutorials/scenario-simulation/planning-simulation/scenario-test-simulation/#scenario-test-simulation","title":"Scenario test simulation","text":"

    Note

    Running the Scenario Simulator requires some additional steps on top of building and installing Autoware, so make sure that Scenario Simulator installation has been completed first before proceeding.

    "},{"location":"tutorials/scenario-simulation/planning-simulation/scenario-test-simulation/#running-steps","title":"Running steps","text":"
    1. Move to the workspace directory where Autoware and the Scenario Simulator have been built.

    2. Source the workspace setup script:

      source install/setup.bash\n
    3. Run the simulation:

      ros2 launch scenario_test_runner scenario_test_runner.launch.py \\\narchitecture_type:=awf/universe/20240605 \\\nrecord:=false \\\nscenario:='$(find-pkg-share scenario_test_runner)/scenario/sample.yaml' \\\nsensor_model:=sample_sensor_kit \\\nvehicle_model:=sample_vehicle\n
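To run your own scenario instead of the bundled sample, point the scenario argument at your own file (a placeholder path is used here) and optionally enable recording:

ros2 launch scenario_test_runner scenario_test_runner.launch.py \\\narchitecture_type:=awf/universe/20240605 \\\nrecord:=true \\\nscenario:=/path/to/your_scenario.yaml \\\nsensor_model:=sample_sensor_kit \\\nvehicle_model:=sample_vehicle\n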

    Reference video tutorials

    "},{"location":"tutorials/scenario-simulation/rosbag-replay-simulation/driving-log-replayer/","title":"Driving Log Replayer","text":""},{"location":"tutorials/scenario-simulation/rosbag-replay-simulation/driving-log-replayer/#driving-log-replayer","title":"Driving Log Replayer","text":"

Driving Log Replayer is an evaluation tool for Autoware. To get started, follow the official instructions provided by TIER IV.

    "}]} \ No newline at end of file diff --git a/pr-631/sitemap.xml b/pr-631/sitemap.xml index 97eb79a4761..7a42ad17544 100644 --- a/pr-631/sitemap.xml +++ b/pr-631/sitemap.xml @@ -2,1717 +2,1717 @@ https://autowarefoundation.github.io/autoware-documentation/pr-631/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/autoware-competitions/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/contributing/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/contributing/license/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/contributing/coding-guidelines/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/contributing/coding-guidelines/languages/cmake/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/contributing/coding-guidelines/languages/cpp/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/contributing/coding-guidelines/languages/docker/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/contributing/coding-guidelines/languages/github-actions/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/contributing/coding-guidelines/languages/markdown/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/contributing/coding-guidelines/languages/package-xml/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/contributing/coding-guidelines/languages/python/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/contributing/coding-guidelines/languages/shell-scripts/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/contributing/coding-guidelines/ros-nodes/class-design/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/contributing/coding-guidelines/ros-nodes/console-logging/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/contributing/coding-guidelines/ros-nodes/coordinate-system/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/contributing/coding-guidelines/ros-nodes/directory-structure/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/contributing/coding-guidelines/ros-nodes/launch-files/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/contributing/coding-guidelines/ros-nodes/message-guidelines/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/contributing/coding-guidelines/ros-nodes/parameters/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/contributing/coding-guidelines/ros-nodes/task-scheduling/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/contributing/coding-guidelines/ros-nodes/topic-namespaces/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/contributing/coding-guidelines/ros-nodes/topic-message-handling/ - 2024-11-27 + 
2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/contributing/coding-guidelines/ros-nodes/topic-message-handling/supp-intra-process-comm/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/contributing/coding-guidelines/ros-nodes/topic-message-handling/supp-wait_set/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/contributing/discussion-guidelines/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/contributing/documentation-guidelines/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/contributing/pull-request-guidelines/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/contributing/pull-request-guidelines/ai-pr-review/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/contributing/pull-request-guidelines/ci-checks/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/contributing/pull-request-guidelines/code-owners/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/contributing/pull-request-guidelines/commit-guidelines/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/contributing/pull-request-guidelines/review-guidelines/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/contributing/pull-request-guidelines/review-tips/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/contributing/testing-guidelines/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/contributing/testing-guidelines/integration-testing/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/contributing/testing-guidelines/unit-testing/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/datasets/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/datasets/data-anonymization/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-architecture/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-architecture/control/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-architecture/localization/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-architecture/map/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_area/ - 2024-11-27 + 2024-11-28 daily 
https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_crosswalk/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_intersection/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_lane/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_others/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_stop_line/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-architecture/map/map-requirements/vector-map-requirements-overview/category_traffic_light/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-architecture/node-diagram/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-architecture/perception/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-architecture/perception/reference_implementation/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/faraway-object-detection/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-architecture/perception/reference-implementations/radar-based-3d-detector/radar-based-3d-detector/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-architecture/planning/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-architecture/sensing/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-architecture/sensing/data-types/gnss-ins-data/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-architecture/sensing/data-types/image/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-architecture/sensing/data-types/point-cloud/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-architecture/sensing/data-types/ultrasonics-data/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-architecture/sensing/data-types/radar-data/radar-objects-data/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-architecture/sensing/data-types/radar-data/radar-pointcloud-data/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-architecture/sensing/data-types/radar-data/reference-implementations/data-message/ - 2024-11-27 + 2024-11-28 
daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-architecture/sensing/data-types/radar-data/reference-implementations/device-driver/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-architecture/vehicle/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-concepts/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-concepts/difference-from-ai-and-auto/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/release/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/features/cooperation/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/features/diagnostics/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/features/fail-safe/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/features/heartbeat/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/features/interface/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/features/localization/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/features/manual-control/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/features/motion/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/features/operation_mode/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/features/perception/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/features/planning-factors/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/features/routing/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/features/vehicle-doors/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/features/vehicle-status/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/fail_safe/mrm_state/ - 2024-11-27 + 2024-11-28 daily 
https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/interface/version/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/local/command/acceleration/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/local/command/gear/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/local/command/hazard_lights/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/local/command/pedal/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/local/command/steering/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/local/command/turn_indicators/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/local/control_mode/list/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/local/control_mode/select/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/local/control_mode/status/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/local/operator/status/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/localization/initialization_state/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/localization/initialize/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/motion/accept_start/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/motion/state/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_autonomous/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_local/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_remote/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/operation_mode/change_to_stop/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/operation_mode/disable_autoware_control/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/operation_mode/enable_autoware_control/ - 2024-11-27 + 
2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/operation_mode/state/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/perception/objects/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/planning/steering_factors/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/planning/velocity_factors/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/planning/cooperation/get_policies/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/planning/cooperation/set_commands/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/planning/cooperation/set_policies/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/remote/command/acceleration/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/remote/command/gear/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/remote/command/hazard_lights/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/remote/command/pedal/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/remote/command/steering/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/remote/command/turn_indicators/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/remote/control_mode/list/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/remote/control_mode/select/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/remote/control_mode/status/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/remote/operator/status/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/routing/change_route/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/routing/change_route_points/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/routing/clear_route/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/routing/route/ - 2024-11-27 + 2024-11-28 daily 
https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/routing/set_route/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/routing/set_route_points/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/routing/state/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/system/heartbeat/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/system/diagnostics/status/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/system/diagnostics/struct/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/vehicle/dimensions/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/vehicle/kinematics/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/vehicle/status/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/vehicle/doors/command/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/vehicle/doors/layout/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/list/api/vehicle/doors/status/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/stories/bus-service/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/stories/taxi-service/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/AccelerationCommand/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/CooperationCommand/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/CooperationDecision/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/CooperationPolicy/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/CooperationStatus/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagGraphStatus/ - 2024-11-27 + 2024-11-28 daily 
https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagGraphStruct/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagLinkStruct/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagNodeStatus/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DiagNodeStruct/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DoorCommand/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DoorLayout/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DoorStatus/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DoorStatusArray/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DynamicObject/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DynamicObjectArray/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DynamicObjectKinematics/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/DynamicObjectPath/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/Gear/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/GearCommand/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/HazardLights/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/HazardLightsCommand/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/Heartbeat/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/LocalizationInitializationState/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ManualControlMode/ - 2024-11-27 + 2024-11-28 daily 
https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ManualControlModeStatus/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ManualOperatorStatus/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/MotionState/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/MrmState/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ObjectClassification/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/OperationModeState/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/PedalCommand/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/ResponseStatus/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/Route/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RouteData/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RouteOption/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RoutePrimitive/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RouteSegment/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/RouteState/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/SteeringCommand/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/SteeringFactor/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/SteeringFactorArray/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/TurnIndicators/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/TurnIndicatorsCommand/ - 2024-11-27 + 2024-11-28 daily 
https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VehicleDimensions/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VehicleKinematics/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VehicleStatus/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VelocityFactor/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/msg/VelocityFactorArray/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/AcceptStart/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/ChangeOperationMode/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/ClearRoute/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/GetCooperationPolicies/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/GetDoorLayout/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/GetVehicleDimensions/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/InitializeLocalization/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/ListManualControlMode/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SelectManualControlMode/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetCooperationCommands/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetCooperationPolicies/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetDoorCommand/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetRoute/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_v1_msgs/srv/SetRoutePoints/ - 2024-11-27 + 2024-11-28 daily 
https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/types/autoware_adapi_version_msgs/srv/InterfaceVersion/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/use-cases/change-operation-mode/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/use-cases/drive-designated-position/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/use-cases/get-on-off/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/use-cases/initialize-pose/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/use-cases/launch-terminate/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/use-cases/system-monitoring/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/use-cases/vehicle-monitoring/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/use-cases/vehicle-operation/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/ad-api/use-cases/manual-control/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/components/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/components/control/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/components/localization/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/components/map/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/components/perception-interface/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/components/perception/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/components/planning/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/components/sensing/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/components/vehicle-dimensions/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/autoware-interfaces/components/vehicle-interface/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/configuration-management/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/configuration-management/development-process/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/configuration-management/release-process/ - 2024-11-27 + 
2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/design/configuration-management/repository-structure/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/how-to-guides/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/how-to-guides/integrating-autoware/overview/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/how-to-guides/integrating-autoware/awsim-integration/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/how-to-guides/integrating-autoware/creating-maps/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/how-to-guides/integrating-autoware/creating-maps/converting-utm-to-mgrs-map/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/how-to-guides/integrating-autoware/creating-maps/creating-vector-map/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/how-to-guides/integrating-autoware/creating-maps/creating-vector-map/crosswalk/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/how-to-guides/integrating-autoware/creating-maps/creating-vector-map/detection-area/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/how-to-guides/integrating-autoware/creating-maps/creating-vector-map/lanelet2/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/how-to-guides/integrating-autoware/creating-maps/creating-vector-map/speed-bump/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/how-to-guides/integrating-autoware/creating-maps/creating-vector-map/stop-line/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/how-to-guides/integrating-autoware/creating-maps/creating-vector-map/traffic-light/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/how-to-guides/integrating-autoware/creating-maps/open-source-slam/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-lc/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/how-to-guides/integrating-autoware/creating-maps/open-source-slam/fast-lio-slam/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/how-to-guides/integrating-autoware/creating-maps/open-source-slam/fd-slam/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/how-to-guides/integrating-autoware/creating-maps/open-source-slam/hdl-graph-slam/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/how-to-guides/integrating-autoware/creating-maps/open-source-slam/ia-lio-slam/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/how-to-guides/integrating-autoware/creating-maps/open-source-slam/iscloam/ - 2024-11-27 + 2024-11-28 daily https://autowarefoundation.github.io/autoware-documentation/pr-631/how-to-guides/integrating-autoware/creating-maps/open-source-slam/lego-loam-bor/ - 2024-11-27 + 
[sitemap.xml diff omitted: auto-generated sitemap entries for the pr-631 documentation pages, with every lastmod date updated from 2024-11-27 to 2024-11-28; no other changes]
diff --git a/pr-631/sitemap.xml.gz b/pr-631/sitemap.xml.gz
index 8ec27d84f3b..77edd9ee7f4 100644
Binary files a/pr-631/sitemap.xml.gz and b/pr-631/sitemap.xml.gz differ