Commit
chore: update URLs for v1.0 in README
Signed-off-by: Ryohsuke Mitsudome <[email protected]>
mitsudome-r committed Feb 2, 2024
1 parent 90d5b94 commit 05e0d1f
Showing 36 changed files with 68 additions and 68 deletions.
2 changes: 1 addition & 1 deletion common/tier4_logging_level_configure_rviz_plugin/README.md
@@ -6,4 +6,4 @@ This package provides an rviz_plugin that can easily change the logger level of

This plugin dispatches services to the "logger name" associated with "nodes" specified in YAML, adjusting the logger level.

-As of November 2023, in ROS 2 Humble, users are required to initiate a service server in the node to use this feature. (This might be integrated into ROS standards in the future.) For easy service server generation, you can use the [LoggerLevelConfigure](https://github.com/autowarefoundation/autoware.universe/blob/main/common/tier4_autoware_utils/include/tier4_autoware_utils/ros/logger_level_configure.hpp) utility.
+As of November 2023, in ROS 2 Humble, users are required to initiate a service server in the node to use this feature. (This might be integrated into ROS standards in the future.) For easy service server generation, you can use the [LoggerLevelConfigure](https://github.com/autowarefoundation/autoware.universe/blob/v1.0/common/tier4_autoware_utils/include/tier4_autoware_utils/ros/logger_level_configure.hpp) utility.
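A minimal usage sketch of that utility, assuming the constructor takes the owning node pointer; the header path and signature should be checked against the linked file:

```cpp
// Sketch only: register the logger-level service server inside a node.
// The header path and constructor signature are assumptions taken from the
// linked LoggerLevelConfigure utility; verify against the actual header.
#include <memory>

#include <rclcpp/rclcpp.hpp>
#include <tier4_autoware_utils/ros/logger_level_configure.hpp>

class ExampleNode : public rclcpp::Node
{
public:
  ExampleNode() : Node("example_node")
  {
    // Creates the service that the rviz plugin calls to change this node's logger level.
    logger_configure_ = std::make_unique<tier4_autoware_utils::LoggerLevelConfigure>(this);
  }

private:
  std::unique_ptr<tier4_autoware_utils::LoggerLevelConfigure> logger_configure_;
};
```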
2 changes: 1 addition & 1 deletion localization/pose_initializer/README.md
@@ -50,6 +50,6 @@ This node depends on the map height fitter library.

## Connection with Default AD API

-This `pose_initializer` is used via the default AD API. For a detailed description of the API, please refer to [the description of `default_ad_api`](https://github.com/autowarefoundation/autoware.universe/blob/main/system/default_ad_api/document/localization.md).
+This `pose_initializer` is used via the default AD API. For a detailed description of the API, please refer to [the description of `default_ad_api`](https://github.com/autowarefoundation/autoware.universe/blob/v1.0/system/default_ad_api/document/localization.md).
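A hedged client-side sketch of requesting initialization through the default AD API; the service name and message type are assumptions based on the linked documentation:

```cpp
// Sketch only: trigger pose initialization via the default AD API.
// The service name and type are assumptions; verify them in default_ad_api.
#include <memory>

#include <rclcpp/rclcpp.hpp>
#include <autoware_adapi_v1_msgs/srv/initialize_localization.hpp>

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  auto node = rclcpp::Node::make_shared("pose_init_client_example");
  using InitializeLocalization = autoware_adapi_v1_msgs::srv::InitializeLocalization;
  auto client =
    node->create_client<InitializeLocalization>("/api/localization/initialize");  // assumed name
  client->wait_for_service();

  // An empty pose array typically lets the initializer fall back to its default
  // source; push a PoseWithCovarianceStamped to request a specific pose.
  auto request = std::make_shared<InitializeLocalization::Request>();
  auto future = client->async_send_request(request);
  rclcpp::spin_until_future_complete(node, future);
  rclcpp::shutdown();
  return 0;
}
```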

<img src="../../system/default_ad_api/document/images/localization.drawio.svg" alt="drawing" width="800"/>
2 changes: 1 addition & 1 deletion localization/yabloc/yabloc_pose_initializer/README.md
@@ -4,7 +4,7 @@ This package contains a node related to initial pose estimation.

- [camera_pose_initializer](#camera_pose_initializer)

-This package requires a pre-trained semantic segmentation model at runtime. This model is usually downloaded by `ansible` during the environment preparation phase of the [installation](https://autowarefoundation.github.io/autoware-documentation/main/installation/autoware/source-installation/).
+This package requires a pre-trained semantic segmentation model at runtime. This model is usually downloaded by `ansible` during the environment preparation phase of the [installation](https://autowarefoundation.github.io/autoware-documentation/v1.0/installation/autoware/source-installation/).
It is also possible to download it manually. Even if the model is not downloaded, initialization will still complete, but the accuracy may be compromised.

To download and extract the model manually:
14 changes: 7 additions & 7 deletions map/map_loader/README.md
@@ -22,14 +22,14 @@ NOTE: **We strongly recommend to use divided maps when using large pointcloud ma

You may provide either a single .pcd file or multiple .pcd files. If you are using multiple PCD files and any of `enable_partial_load`, `enable_differential_load`, or `enable_selected_load` is set to true, the files MUST obey the following rules:

-1. **The pointcloud map should be projected in the same coordinate system defined in `map_projection_loader`**, in order to be consistent with the lanelet2 map and other packages that convert between local and geodetic coordinates. For more information, please refer to [the readme of `map_projection_loader`](https://github.com/autowarefoundation/autoware.universe/tree/main/map/map_projection_loader/README.md).
+1. **The pointcloud map should be projected in the same coordinate system defined in `map_projection_loader`**, in order to be consistent with the lanelet2 map and other packages that convert between local and geodetic coordinates. For more information, please refer to [the readme of `map_projection_loader`](https://github.com/autowarefoundation/autoware.universe/tree/v1.0/map/map_projection_loader/README.md).
2. **It must be divided by straight lines parallel to the x-axis and y-axis**. The system does not support division by diagonal lines or curved lines.
3. **The division size along each axis should be equal.**
-4. **The division size should be about 20m x 20m.** In particular, using too large a division size (for example, more than 100m) may have adverse effects on the dynamic map loading features in [ndt_scan_matcher](https://github.com/autowarefoundation/autoware.universe/tree/main/localization/ndt_scan_matcher) and [compare_map_segmentation](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/compare_map_segmentation).
+4. **The division size should be about 20m x 20m.** In particular, using too large a division size (for example, more than 100m) may have adverse effects on the dynamic map loading features in [ndt_scan_matcher](https://github.com/autowarefoundation/autoware.universe/tree/v1.0/localization/ndt_scan_matcher) and [compare_map_segmentation](https://github.com/autowarefoundation/autoware.universe/tree/v1.0/perception/compare_map_segmentation).
5. **The split maps must not overlap with each other.**
6. **A metadata file should also be provided.** The metadata structure is described below.

-Note that these rules are not applicable when `enable_partial_load`, `enable_differential_load`, and `enable_selected_load` are all set to false. In this case, however, you also need to disable dynamic map loading mode for the other nodes as well ([ndt_scan_matcher](https://github.com/autowarefoundation/autoware.universe/tree/main/localization/ndt_scan_matcher) and [compare_map_segmentation](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/compare_map_segmentation) as of June 2023).
+Note that these rules are not applicable when `enable_partial_load`, `enable_differential_load`, and `enable_selected_load` are all set to false. In this case, however, you also need to disable dynamic map loading mode for the other nodes as well ([ndt_scan_matcher](https://github.com/autowarefoundation/autoware.universe/tree/v1.0/localization/ndt_scan_matcher) and [compare_map_segmentation](https://github.com/autowarefoundation/autoware.universe/tree/v1.0/perception/compare_map_segmentation) as of June 2023).
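As a worked illustration of rules 2-5: a map covering 100 m x 60 m, divided at a 20 m resolution along both axes, yields 5 x 3 = 15 non-overlapping tiles, each bounded by lines parallel to the x- and y-axes.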

#### Metadata structure

@@ -88,28 +88,28 @@ The node publishes the downsampled pointcloud map loaded from the `.pcd` file(s)

#### Publish metadata of pointcloud map (ROS 2 topic)

-The node publishes the pointcloud metadata together with an ID. Metadata is loaded from the `.yaml` file. Please see [the description of `PointCloudMapMetaData.msg`](https://github.com/autowarefoundation/autoware_msgs/tree/main/autoware_map_msgs#pointcloudmapmetadatamsg) for details.
+The node publishes the pointcloud metadata together with an ID. Metadata is loaded from the `.yaml` file. Please see [the description of `PointCloudMapMetaData.msg`](https://github.com/autowarefoundation/autoware_msgs/tree/v1.0/autoware_map_msgs#pointcloudmapmetadatamsg) for details.

#### Send partial pointcloud map (ROS 2 service)

Here, we assume that the pointcloud maps are divided into grids.

Given a query from a client node, the node sends a set of pointcloud maps that overlap with the queried area.
-Please see [the description of `GetPartialPointCloudMap.srv`](https://github.com/autowarefoundation/autoware_msgs/tree/main/autoware_map_msgs#getpartialpointcloudmapsrv) for details.
+Please see [the description of `GetPartialPointCloudMap.srv`](https://github.com/autowarefoundation/autoware_msgs/tree/v1.0/autoware_map_msgs#getpartialpointcloudmapsrv) for details.

#### Send differential pointcloud map (ROS 2 service)

Here, we assume that the pointcloud maps are divided into grids.

Given a query and set of map IDs, the node sends a set of pointcloud maps that overlap with the queried area and are not included in the set of map IDs.
-Please see [the description of `GetDifferentialPointCloudMap.srv`](https://github.com/autowarefoundation/autoware_msgs/tree/main/autoware_map_msgs#getdifferentialpointcloudmapsrv) for details.
+Please see [the description of `GetDifferentialPointCloudMap.srv`](https://github.com/autowarefoundation/autoware_msgs/tree/v1.0/autoware_map_msgs#getdifferentialpointcloudmapsrv) for details.

#### Send selected pointcloud map (ROS 2 service)

Here, we assume that the pointcloud maps are divided into grids.

Given an ID query from a client node, the node sends the set of pointcloud maps (each of which is attached with a unique ID) specified by the query.
-Please see [the description of `GetSelectedPointCloudMap.srv`](https://github.com/autowarefoundation/autoware_msgs/tree/main/autoware_map_msgs#getselectedpointcloudmapsrv) for details.
+Please see [the description of `GetSelectedPointCloudMap.srv`](https://github.com/autowarefoundation/autoware_msgs/tree/v1.0/autoware_map_msgs#getselectedpointcloudmapsrv) for details.
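A hedged client sketch of the dynamic-loading pattern shared by these services, using the differential variant; the service name and request fields are assumptions, so check the linked `.srv` descriptions for the actual interface:

```cpp
// Sketch only: request a differential pointcloud map from map_loader.
// The service name and request fields (area, cached_ids) are assumptions;
// see the GetDifferentialPointCloudMap.srv description for the real ones.
#include <memory>

#include <rclcpp/rclcpp.hpp>
#include <autoware_map_msgs/srv/get_differential_point_cloud_map.hpp>

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  auto node = rclcpp::Node::make_shared("map_client_example");
  using GetDiffMap = autoware_map_msgs::srv::GetDifferentialPointCloudMap;
  auto client =
    node->create_client<GetDiffMap>("/map/get_differential_pointcloud_map");  // assumed name
  client->wait_for_service();

  auto request = std::make_shared<GetDiffMap::Request>();
  request->area.center_x = 0.0;  // assumed field names for the query area
  request->area.center_y = 0.0;
  request->area.radius = 50.0;
  // request->cached_ids is left empty on the first query; later queries pass the
  // IDs already held so that only grids not yet cached are returned.
  auto future = client->async_send_request(request);
  rclcpp::spin_until_future_complete(node, future);
  rclcpp::shutdown();
  return 0;
}
```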

### Parameters

2 changes: 1 addition & 1 deletion map/map_projection_loader/src/map_projection_loader.cpp
@@ -73,7 +73,7 @@ MapProjectionLoader::MapProjectionLoader() : Node("map_projection_loader")
this->get_logger(),
"DEPRECATED WARNING: Loading map projection info from lanelet2 map may soon be deleted. "
"Please use map_projector_info.yaml instead. For more info, visit "
"https://github.com/autowarefoundation/autoware.universe/blob/main/map/map_projection_loader/"
"https://github.com/autowarefoundation/autoware.universe/blob/v1.0/map/map_projection_loader/"
"README.md");
msg = load_info_from_lanelet2_map(lanelet2_map_filename);
}
2 changes: 1 addition & 1 deletion perception/lidar_apollo_segmentation_tvm/README.md
@@ -7,7 +7,7 @@
#### Neural network

This package will not run without a neural network for its inference.
-The network is provided by an ansible script during the installation of Autoware, or it can be downloaded manually according to [Manual Downloading](https://github.com/autowarefoundation/autoware/tree/main/ansible/roles/artifacts).
+The network is provided by an ansible script during the installation of Autoware, or it can be downloaded manually according to [Manual Downloading](https://github.com/autowarefoundation/autoware/tree/v1.0/ansible/roles/artifacts).
This package uses the `get_neural_network` function from the tvm_utility package to create and provide the proper dependency.
See its design page for more information on how to handle user-compiled networks.

@@ -4,7 +4,7 @@ This tutorial is for showing `centerpoint` and `centerpoint_tiny` models’ resul

## Setup Development Environment

-Follow the steps in the Source Installation guide ([link](https://autowarefoundation.github.io/autoware-documentation/main/installation/autoware/source-installation/)) in the Autoware documentation.
+Follow the steps in the Source Installation guide ([link](https://autowarefoundation.github.io/autoware-documentation/v1.0/installation/autoware/source-installation/)) in the Autoware documentation.

If the Autoware build fails due to a lack of memory, it is recommended to build Autoware sequentially.

@@ -42,7 +42,7 @@ ros2 bag play /YOUR/ROSBAG/PATH/ --clock 100

Don't forget to add `--clock` in order to sync between the two rviz displays.

-You can also use the sample rosbag provided by Autoware [here](https://autowarefoundation.github.io/autoware-documentation/main/tutorials/ad-hoc-simulation/rosbag-replay-simulation/).
+You can also use the sample rosbag provided by Autoware [here](https://autowarefoundation.github.io/autoware-documentation/v1.0/tutorials/ad-hoc-simulation/rosbag-replay-simulation/).

If you want to merge several rosbags into one, you can refer to [this tool](https://github.com/jerry73204/rosbag2-merge).

2 changes: 1 addition & 1 deletion perception/object_merger/README.md
@@ -33,7 +33,7 @@ The successive shortest path algorithm is used to solve the data association pro
| `min_area_matrix` | double | Minimum area table for data association |
| `max_rad_matrix` | double | Maximum angle table for data association |
| `base_link_frame_id` | double | association frame |
-| `distance_threshold_list` | `std::vector<double>` | Distance threshold for each class used in judging overlap. The class order depends on [ObjectClassification](https://github.com/tier4/autoware_auto_msgs/blob/tier4/main/autoware_auto_perception_msgs/msg/ObjectClassification.idl). |
+| `distance_threshold_list` | `std::vector<double>` | Distance threshold for each class used in judging overlap. The class order depends on [ObjectClassification](https://github.com/tier4/autoware_auto_msgs/blob/tier4/v1.0/autoware_auto_perception_msgs/msg/ObjectClassification.idl). |
| `generalized_iou_threshold` | `std::vector<double>` | Generalized IoU threshold for each class |
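For reference, the label order in `ObjectClassification` is typically UNKNOWN, CAR, TRUCK, BUS, TRAILER, MOTORCYCLE, BICYCLE, PEDESTRIAN, so `distance_threshold_list` would hold one threshold per class in that order; verify against the linked `.idl` file.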

## Tips
2 changes: 1 addition & 1 deletion perception/radar_crossing_objects_noise_filter/README.md
@@ -24,7 +24,7 @@ Velocity estimation fails on static objects, resulting in ghost objects crossing

- 2. Turning of the ego vehicle affects the output from the radar.

-When the ego vehicle turns around, radars outputting at the object level sometimes fail to estimate the twist of objects correctly, even if [radar_tracks_msgs_converter](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/radar_tracks_msgs_converter) compensates for the ego vehicle twist.
+When the ego vehicle turns around, radars outputting at the object level sometimes fail to estimate the twist of objects correctly, even if [radar_tracks_msgs_converter](https://github.com/autowarefoundation/autoware.universe/tree/v1.0/perception/radar_tracks_msgs_converter) compensates for the ego vehicle twist.
So if an object detected by the radars appears to move in a circular path when viewed from base_link, it is likely that its speed is estimated incorrectly and that it is actually a static object.

An example is shown in the figure below.
4 changes: 2 additions & 2 deletions perception/radar_object_clustering/README.md
@@ -2,7 +2,7 @@

This package contains a radar object clustering for [autoware_auto_perception_msgs/msg/DetectedObject](https://gitlab.com/autowarefoundation/autoware.auto/autoware_auto_msgs/-/blob/master/autoware_auto_perception_msgs/msg/DetectedObject.idl) input.

-This package can create clustered objects from radar DetectedObjects, which are converted from RadarTracks by [radar_tracks_msgs_converter](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/radar_tracks_msgs_converter) and processed by a noise filter.
+This package can create clustered objects from radar DetectedObjects, which are converted from RadarTracks by [radar_tracks_msgs_converter](https://github.com/autowarefoundation/autoware.universe/tree/v1.0/perception/radar_tracks_msgs_converter) and processed by a noise filter.
In other words, this package can combine multiple radar detections of one object into one and adjust its class and size.

![radar_clustering](docs/radar_clustering.drawio.svg)
@@ -44,7 +44,7 @@ When the size information from radar outputs lack accuracy, `is_fixed_size` para
If the parameter is true, the size of a clustered object is overwritten by the values set by the `size_x`, `size_y`, and `size_z` parameters.
If this package is used for faraway dynamic object detection with radar, it is recommended to set `size_x`, `size_y`, and `size_z` to the average vehicle size.
-Note that to use this with [multi_object_tracker](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/multi_object_tracker), the size parameters need to exceed its `min_area_matrix` parameters.
+Note that to use this with [multi_object_tracker](https://github.com/autowarefoundation/autoware.universe/tree/v1.0/perception/multi_object_tracker), the size parameters need to exceed its `min_area_matrix` parameters.

### Limitation

10 changes: 5 additions & 5 deletions perception/simple_object_merger/README.md
@@ -7,9 +7,9 @@ This package can merge multiple topics of [autoware_auto_perception_msgs/msg/Det
### Background

This package can merge multiple DetectedObjects without matching processing.
-[Object_merger](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/object_merger) solves the data association (matching) problem with an algorithm such as the Hungarian algorithm, but this comes at a computational cost.
-In addition, for now, [object_merger](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/object_merger) can handle only two DetectedObjects topics and cannot handle more than two topics in one node.
-To merge six DetectedObjects topics, six [object_merger](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/object_merger) nodes need to be launched.
+[Object_merger](https://github.com/autowarefoundation/autoware.universe/tree/v1.0/perception/object_merger) solves the data association (matching) problem with an algorithm such as the Hungarian algorithm, but this comes at a computational cost.
+In addition, for now, [object_merger](https://github.com/autowarefoundation/autoware.universe/tree/v1.0/perception/object_merger) can handle only two DetectedObjects topics and cannot handle more than two topics in one node.
+To merge six DetectedObjects topics, six [object_merger](https://github.com/autowarefoundation/autoware.universe/tree/v1.0/perception/object_merger) nodes need to be launched.

So this package aims to merge DetectedObjects simply.
This package does not use a data association algorithm, which reduces the computational cost, and it can handle more than two topics in one node to avoid launching a large number of nodes.
@@ -27,7 +27,7 @@ The timeout parameter should be determined by sensor cycle time.
- Post-processing

Because this package does not perform matching, it can be used only when post-processing is applied.
-For now, [clustering processing](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/radar_object_clustering) can be used as post-processing.
+For now, [clustering processing](https://github.com/autowarefoundation/autoware.universe/tree/v1.0/perception/radar_object_clustering) can be used as post-processing.

### Use case

@@ -36,7 +36,7 @@ Use case is as below.
- Multiple radar detection

This package can be used for multiple radar detection.
-Since [clustering processing](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/radar_object_clustering) is included as a later step in radar faraway detection, this package can be used.
+Since [clustering processing](https://github.com/autowarefoundation/autoware.universe/tree/v1.0/perception/radar_object_clustering) is included as a later step in radar faraway detection, this package can be used.

## Input

2 changes: 1 addition & 1 deletion perception/tensorrt_yolo/README.md
@@ -72,7 +72,7 @@ This package includes multiple licenses.

All YOLO ONNX models are converted from the officially trained model. If you need information about training datasets and conditions, please refer to the official repositories.

-All models are downloaded during environment preparation by ansible (as mentioned in [installation](https://autowarefoundation.github.io/autoware-documentation/main/installation/autoware/source-installation/)). It is also possible to download them manually; see [Manual downloading of artifacts](https://github.com/autowarefoundation/autoware/tree/main/ansible/roles/artifacts). When launching the node with a model for the first time, the model is automatically converted to TensorRT, although this may take some time.
+All models are downloaded during environment preparation by ansible (as mentioned in [installation](https://autowarefoundation.github.io/autoware-documentation/v1.0/installation/autoware/source-installation/)). It is also possible to download them manually; see [Manual downloading of artifacts](https://github.com/autowarefoundation/autoware/tree/v1.0/ansible/roles/artifacts). When launching the node with a model for the first time, the model is automatically converted to TensorRT, although this may take some time.

### YOLOv3

