From 6ea3577e0e7bb7957a3929695ccf7baf1da341f6 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Kaan=20=C3=87olak?= Date: Tue, 14 Nov 2023 00:03:30 +0300 Subject: [PATCH 01/12] refactor(lidar_centerpoint): add training docs MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Signed-off-by: Kaan Çolak --- perception/lidar_centerpoint/README.md | 173 +++++++++++++++++++++++++ 1 file changed, 173 insertions(+) diff --git a/perception/lidar_centerpoint/README.md b/perception/lidar_centerpoint/README.md index ed71349d5bd7f..2402058732580 100644 --- a/perception/lidar_centerpoint/README.md +++ b/perception/lidar_centerpoint/README.md @@ -68,6 +68,173 @@ You can download the onnx format of trained models by clicking on the links belo `Centerpoint` was trained in `nuScenes` (~28k lidar frames) [8] and TIER IV's internal database (~11k lidar frames) for 60 epochs. `Centerpoint tiny` was trained in `Argoverse 2` (~110k lidar frames) [9] and TIER IV's internal database (~11k lidar frames) for 20 epochs. +## Training CenterPoint Model and Deploying to the Autoware + +### Overview +This guide provides instructions on training a CenterPoint model using the **mmdetection3d** repository +and seamlessly deploying it within the Autoware. + + +### Installation + +#### Install prerequisites +**Step 1.** Download and install Miniconda from the [official website](https://mmpretrain.readthedocs.io/en/latest/get_started.html). + +**Step 2.** Create a conda virtual environment and activate it + +```bash +conda create --name train-centerpoint python=3.8 -y +conda activate train-centerpoint +``` + +**Step 3.** Install PyTorch + +Please ensure you have PyTorch installed, compatible with CUDA 11.6, as it is a requirement for current Autoware. + +```bash +conda install pytorch==1.13.1 torchvision==0.14.1 pytorch-cuda=11.6 -c pytorch -c nvidia +``` +#### Install mmdetection3d +**Step 1.** Install MMEngine, MMCV and MMDetection using MIM + +```bash +pip install -U openmim +mim install mmengine +mim install 'mmcv>=2.0.0rc4' +mim install 'mmdet>=3.0.0' +``` + +**Step 2.** Install mmdetection3d forked repository + +Introduced several valuable enhancements in our fork of the mmdetection3d repository. +Notably, we've made the PointPillar z voxel feature input optional to maintain compatibility with the original paper. +In addition, we've integrated a PyTorch to ONNX converter and a Tier4 Dataset format reader for added functionality. + + +```bash +git clone https://github.com/autowarefoundation/mmdetection3d.git -b dev-1.x-autoware +cd mmdetection3d +pip install -v -e . +``` + +### Preparing NuScenes dataset for training + +**Step 1.** Download the NuScenes dataset from the [official website](https://www.nuscenes.org/download) and extract the dataset to a folder of your choice. + +**Step 2.** Create a symbolic link to the dataset folder + +```bash +ln -s /path/to/nuscenes/dataset/ /path/to/mmdetection3d/data/nuscenes/ +``` + +**Step 3.** Prepare the NuScenes data by running: + +```bash +cd mmdetection3d +python tools/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./data/nuscenes --extra-tag nuscenes +``` + +### Training CenterPoint with NuScenes Dataset + +#### Prepare the config file + +The configuration file that illustrates how to train the CenterPoint model with the NuScenes dataset is +located at mmdetection3d/configs/centerpoint/centerpoint_custom.py. 
This configuration file is a derived version of the
+centerpoint_pillar02_second_secfpn_head-circlenms_8xb4-cyclic-20e_nus-3d.py configuration file from mmdetection3D.
+In this custom configuration, the **use_voxel_center_z parameter** is set to **False** to deactivate the z coordinate of the voxel center,
+aligning with the original paper's specifications and making the model compatible with Autoware. Additionally, the filter size is set as **[32, 32]**.
+
+The CenterPoint model can be tailored to your specific requirements by modifying various parameters within the configuration file.
+This includes adjustments related to preprocessing operations, training, testing, model architecture, dataset, optimizer, learning rate scheduler, and more.
+
+#### Start training
+
+```bash
+python tools/train.py configs/centerpoint/centerpoint_custom.py --work-dir ./work_dirs/centerpoint_custom
+```
+
+#### Evaluation of the trained model
+
+For evaluation purposes, we have included a sample dataset captured from a vehicle equipped with the following LiDAR sensors:
+1 x Velodyne VLS128, 4 x Velodyne VLP16, and 1 x Robosense RS Bpearl. This dataset comprises 600 LiDAR frames and encompasses 5 distinct classes: 6905 cars, 3951 pedestrians,
+75 cyclists, 162 buses, and 326 trucks with 3D annotations. In the sample dataset, frames are annotated at 2 frames per second. You can employ this dataset for a wide range of purposes,
+including training, evaluation, and fine-tuning of models. It is organized in the Tier4Dataset format.
+
+##### Download the sample dataset
+```bash
+TODO(kaancolak): add the link to the sample dataset
+
+#Extract the dataset to a folder of your choice
+
+#Create a symbolic link to the dataset folder
+ln -s /PATH/TO/DATASET/ /PATH/TO/mmdetection3d/data/tier4_dataset/
+```
+
+##### Prepare dataset and evaluate trained model
+
+Create .pkl files for training, evaluation, and testing.
+
+```bash
+
+python tools/create_data.py Tier4Dataset --root-path data/sample_dataset/ --out-dir data/sample_dataset/ --extra-tag Tier4Dataset --version sample_dataset --annotation-hz 2
+```
+
+Run evaluation
+
+```bash
+python tools/test.py ./configs/centerpoint/test-centerpoint.py /PATH/OF/THE/CHECKPOINT --task lidar_det
+```
+
+Evaluation results could be relatively low due to variations in sensor modalities between the sample dataset
+and the training dataset. The model's training parameters are originally tailored to the NuScenes dataset, which employs a single lidar
+sensor positioned atop the vehicle. In contrast, the provided sample dataset comprises concatenated point clouds positioned at
+the base link location of the vehicle.
+
+### Deploying CenterPoint model to Autoware
+
+#### Convert CenterPoint PyTorch model to ONNX Format
+The lidar_centerpoint implementation requires two ONNX models as input: the voxel encoder and the backbone-neck-head of the CenterPoint model. Other aspects of the network,
+such as preprocessing operations, are implemented externally. In our fork of the mmdetection3d repository,
+we have included a script that converts the CenterPoint model to an Autoware-compatible ONNX format.
+You can find it in the `mmdetection3d/tools/centerpoint_onnx_converter.py` file.
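+
+To make the two-model split concrete, here is a minimal, self-contained sketch of exporting a network as two separate ONNX files,
+one for the per-voxel encoder and one for the BEV backbone-neck-head. The modules, tensor shapes, and file names below are toy
+stand-ins chosen for illustration only, not the real CenterPoint implementation and not what the converter script produces;
+voxelization and the scatter step that builds the BEV pseudo-image are deliberately left outside both graphs, mirroring how
+the lidar_centerpoint node performs those stages externally.
+
+```python
+# Toy illustration of the two-file ONNX split expected by lidar_centerpoint.
+# The modules are placeholders, not the actual mmdetection3d code.
+import torch
+from torch import nn
+
+
+class ToyVoxelEncoder(nn.Module):
+    """Maps per-voxel point features to a single feature vector per voxel."""
+
+    def __init__(self, in_channels=9, out_channels=32):
+        super().__init__()
+        self.mlp = nn.Linear(in_channels, out_channels)
+
+    def forward(self, voxel_points):  # (num_voxels, points_per_voxel, in_channels)
+        return self.mlp(voxel_points).max(dim=1).values  # (num_voxels, out_channels)
+
+
+class ToyBackboneNeckHead(nn.Module):
+    """Consumes the BEV pseudo-image built by scattering voxel features."""
+
+    def __init__(self, in_channels=32, num_classes=5):
+        super().__init__()
+        # A real CenterPoint head predicts several maps (heatmap, offsets, sizes, ...);
+        # a single output tensor keeps this toy short.
+        self.net = nn.Sequential(
+            nn.Conv2d(in_channels, 64, 3, padding=1),
+            nn.ReLU(),
+            nn.Conv2d(64, num_classes, 1),
+        )
+
+    def forward(self, bev_image):  # (batch, in_channels, height, width)
+        return self.net(bev_image)
+
+
+# Each part is exported on its own; voxelization and scattering happen in the node.
+torch.onnx.export(ToyVoxelEncoder(), torch.randn(40000, 32, 9),
+                  "pts_voxel_encoder_toy.onnx", opset_version=11)
+torch.onnx.export(ToyBackboneNeckHead(), torch.randn(1, 32, 512, 512),
+                  "pts_backbone_neck_head_toy.onnx", opset_version=11)
+```
+
+The actual conversion of a trained checkpoint is done with the converter script, invoked as follows: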
+ + +```bash +python tools/centerpoint_onnx_converter.py --cfg configs/centerpoint/centerpoint_custom.py --ckpt work_dirs/centerpoint_custom/YOUR_BEST_MODEL.pth -work-dir ./work_dirs/onnx_models +``` + +#### Create the config file for custom model + +Create a new config file named **centerpoint_custom.param.yaml** under the config file directory of the lidar_centerpoint node. Sets the parameters of the config file like +point_cloud_range, point_feature_size, voxel_size, etc. according to the training config file. + +```yaml +/**: + ros__parameters: + class_names: ["CAR", "TRUCK", "BUS", "BICYCLE", "PEDESTRIAN"] + point_feature_size: 4 + max_voxel_size: 40000 + point_cloud_range: [-51.2, -51.2, -3.0, 51.2, 51.2, 5.0,] + voxel_size: [0.2, 0.2, 8.0] + downsample_factor: 1 + encoder_in_feature_size: 9 + # post-process params + circle_nms_dist_threshold: 0.5 + iou_nms_target_class_names: ["CAR"] + iou_nms_search_distance_2d: 10.0 + iou_nms_threshold: 0.1 + yaw_norm_thresholds: [0.3, 0.3, 0.3, 0.3, 0.0] +``` + +#### Launch the lidar_centerpoint node + +```bash +cd /YOUR/AUTOWARE/PATH/Autoware +source install/setup.bash +ros2 launch lidar_centerpoint lidar_centerpoint.launch.xml model_name:=centerpoint_custom model_path:=/PATH/TO/ONNX/FILE/ +``` + + ### Changelog #### v1 (2022/07/06) @@ -142,3 +309,9 @@ Example: [v1-head-centerpoint]: https://awf.ml.dev.web.auto/perception/models/centerpoint/v1/pts_backbone_neck_head_centerpoint.onnx [v1-encoder-centerpoint-tiny]: https://awf.ml.dev.web.auto/perception/models/centerpoint/v1/pts_voxel_encoder_centerpoint_tiny.onnx [v1-head-centerpoint-tiny]: https://awf.ml.dev.web.auto/perception/models/centerpoint/v1/pts_backbone_neck_head_centerpoint_tiny.onnx + +## Legal Notice +*The nuScenes dataset is released publicly for non-commercial use under the Creative +Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License. +Additional Terms of Use can be found at https://www.nuscenes.org/terms-of-use. +To inquire about a commercial license please contact nuscenes@motional.com.* From 20ef77c2f464424073fe2b912142d3b1812dfb26 Mon Sep 17 00:00:00 2001 From: "pre-commit-ci[bot]" <66853113+pre-commit-ci[bot]@users.noreply.github.com> Date: Mon, 13 Nov 2023 21:06:48 +0000 Subject: [PATCH 02/12] style(pre-commit): autofix --- perception/lidar_centerpoint/README.md | 45 ++++++++++++++------------ 1 file changed, 24 insertions(+), 21 deletions(-) diff --git a/perception/lidar_centerpoint/README.md b/perception/lidar_centerpoint/README.md index 2402058732580..2ae4335549092 100644 --- a/perception/lidar_centerpoint/README.md +++ b/perception/lidar_centerpoint/README.md @@ -71,13 +71,14 @@ You can download the onnx format of trained models by clicking on the links belo ## Training CenterPoint Model and Deploying to the Autoware ### Overview -This guide provides instructions on training a CenterPoint model using the **mmdetection3d** repository -and seamlessly deploying it within the Autoware. +This guide provides instructions on training a CenterPoint model using the **mmdetection3d** repository +and seamlessly deploying it within the Autoware. ### Installation #### Install prerequisites + **Step 1.** Download and install Miniconda from the [official website](https://mmpretrain.readthedocs.io/en/latest/get_started.html). 
**Step 2.** Create a conda virtual environment and activate it @@ -94,7 +95,9 @@ Please ensure you have PyTorch installed, compatible with CUDA 11.6, as it is a ```bash conda install pytorch==1.13.1 torchvision==0.14.1 pytorch-cuda=11.6 -c pytorch -c nvidia ``` + #### Install mmdetection3d + **Step 1.** Install MMEngine, MMCV and MMDetection using MIM ```bash @@ -107,10 +110,9 @@ mim install 'mmdet>=3.0.0' **Step 2.** Install mmdetection3d forked repository Introduced several valuable enhancements in our fork of the mmdetection3d repository. -Notably, we've made the PointPillar z voxel feature input optional to maintain compatibility with the original paper. +Notably, we've made the PointPillar z voxel feature input optional to maintain compatibility with the original paper. In addition, we've integrated a PyTorch to ONNX converter and a Tier4 Dataset format reader for added functionality. - ```bash git clone https://github.com/autowarefoundation/mmdetection3d.git -b dev-1.x-autoware cd mmdetection3d @@ -138,8 +140,8 @@ python tools/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./dat #### Prepare the config file -The configuration file that illustrates how to train the CenterPoint model with the NuScenes dataset is -located at mmdetection3d/configs/centerpoint/centerpoint_custom.py. This configuration file is a derived version of the +The configuration file that illustrates how to train the CenterPoint model with the NuScenes dataset is +located at mmdetection3d/configs/centerpoint/centerpoint_custom.py. This configuration file is a derived version of the centerpoint_pillar02_second_secfpn_head-circlenms_8xb4-cyclic-20e_nus-3d.py configuration file from mmdetection3D. In this custom configuration, the **use_voxel_center_z parameter** is set to **False** to deactivate the z coordinate of the voxel center, aligning with the original paper's specifications and making the model compatible with Autoware. Additionally, the filter size is set as **[32, 32]**. @@ -155,13 +157,14 @@ python tools/train.py configs/centerpoint/centerpoint_custom.py --work-dir ./wor #### Evaluation of the trained model -For evaluation purposes, we have included a sample dataset captured from vehicle which consists of the following LiDAR sensors: +For evaluation purposes, we have included a sample dataset captured from vehicle which consists of the following LiDAR sensors: 1 x Velodyne VLS128, 4 x Velodyne VLP16, and 1 x Robosense RS Bpearl. This dataset comprises 600 LiDAR frames and encompasses 5 distinct classes, 6905 cars, 3951 pedestrians, 75 cyclists, 162 buses, and 326 trucks 3D annotation. In the sample dataset, frames annotatated as a 2 frame, each second. You can employ this dataset for a wide range of purposes, -including training, evaluation, and fine-tuning of models. It is organized in the Tier4Dataset format. +including training, evaluation, and fine-tuning of models. It is organized in the Tier4Dataset format. ##### Download the sample dataset -```bash + +```bash TODO(kaancolak): add the link to the sample dataset #Extract the dataset to a folder of your choice @@ -186,27 +189,27 @@ python tools/test.py ./configs/centerpoint/test-centerpoint.py /PATH/OF/THE/CHEC ``` Evaluation result could be relatively low because of the e to variations in sensor modalities between the sample dataset -and the training dataset. The model's training parameters are originally tailored to the NuScenes dataset, which employs a single lidar +and the training dataset. 
The model's training parameters are originally tailored to the NuScenes dataset, which employs a single lidar sensor positioned atop the vehicle. In contrast, the provided sample dataset comprises concatenated point clouds positioned at the base link location of the vehicle. ### Deploying CenterPoint model to Autoware #### Convert CenterPoint PyTorch model to ONNX Format + The lidar_centerpoint implementation requires two ONNX models as input the voxel encoder and the backbone-neck-head of the CenterPoint model, other aspects of the network, -such as preprocessing operations, are implemented externally. Under the fork of the mmdetection3d repository, -we have included a script that converts the CenterPoint model to Autoware compitible ONNX format. +such as preprocessing operations, are implemented externally. Under the fork of the mmdetection3d repository, +we have included a script that converts the CenterPoint model to Autoware compitible ONNX format. You can find it in `mmdetection3d/tools/centerpoint_onnx_converter.py` file. - ```bash python tools/centerpoint_onnx_converter.py --cfg configs/centerpoint/centerpoint_custom.py --ckpt work_dirs/centerpoint_custom/YOUR_BEST_MODEL.pth -work-dir ./work_dirs/onnx_models ``` -#### Create the config file for custom model +#### Create the config file for custom model Create a new config file named **centerpoint_custom.param.yaml** under the config file directory of the lidar_centerpoint node. Sets the parameters of the config file like -point_cloud_range, point_feature_size, voxel_size, etc. according to the training config file. +point_cloud_range, point_feature_size, voxel_size, etc. according to the training config file. ```yaml /**: @@ -214,7 +217,7 @@ point_cloud_range, point_feature_size, voxel_size, etc. according to the trainin class_names: ["CAR", "TRUCK", "BUS", "BICYCLE", "PEDESTRIAN"] point_feature_size: 4 max_voxel_size: 40000 - point_cloud_range: [-51.2, -51.2, -3.0, 51.2, 51.2, 5.0,] + point_cloud_range: [-51.2, -51.2, -3.0, 51.2, 51.2, 5.0] voxel_size: [0.2, 0.2, 8.0] downsample_factor: 1 encoder_in_feature_size: 9 @@ -234,7 +237,6 @@ source install/setup.bash ros2 launch lidar_centerpoint lidar_centerpoint.launch.xml model_name:=centerpoint_custom model_path:=/PATH/TO/ONNX/FILE/ ``` - ### Changelog #### v1 (2022/07/06) @@ -311,7 +313,8 @@ Example: [v1-head-centerpoint-tiny]: https://awf.ml.dev.web.auto/perception/models/centerpoint/v1/pts_backbone_neck_head_centerpoint_tiny.onnx ## Legal Notice -*The nuScenes dataset is released publicly for non-commercial use under the Creative -Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License. -Additional Terms of Use can be found at https://www.nuscenes.org/terms-of-use. -To inquire about a commercial license please contact nuscenes@motional.com.* + +_The nuScenes dataset is released publicly for non-commercial use under the Creative +Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License. +Additional Terms of Use can be found at . 
+To inquire about a commercial license please contact nuscenes@motional.com._ From 48afb5c03b0564f7b22497ed4fcc7fbab1fd3be0 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Kaan=20=C3=87olak?= Date: Thu, 7 Dec 2023 15:38:12 +0900 Subject: [PATCH 03/12] refactor(lidar_centerpoint): and link and small fix MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Signed-off-by: Kaan Çolak --- perception/lidar_centerpoint/README.md | 24 ++++++++++++------------ 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/perception/lidar_centerpoint/README.md b/perception/lidar_centerpoint/README.md index 2ae4335549092..902bef0abdeac 100644 --- a/perception/lidar_centerpoint/README.md +++ b/perception/lidar_centerpoint/README.md @@ -73,7 +73,7 @@ You can download the onnx format of trained models by clicking on the links belo ### Overview This guide provides instructions on training a CenterPoint model using the **mmdetection3d** repository -and seamlessly deploying it within the Autoware. +and seamlessly deploying it within Autoware. ### Installation @@ -90,7 +90,7 @@ conda activate train-centerpoint **Step 3.** Install PyTorch -Please ensure you have PyTorch installed, compatible with CUDA 11.6, as it is a requirement for current Autoware. +Please ensure you have PyTorch installed, and compatible with CUDA 11.6, as it is a requirement for current Autoware. ```bash conda install pytorch==1.13.1 torchvision==0.14.1 pytorch-cuda=11.6 -c pytorch -c nvidia @@ -98,7 +98,7 @@ conda install pytorch==1.13.1 torchvision==0.14.1 pytorch-cuda=11.6 -c pytorch - #### Install mmdetection3d -**Step 1.** Install MMEngine, MMCV and MMDetection using MIM +**Step 1.** Install MMEngine, MMCV, and MMDetection using MIM ```bash pip install -U openmim @@ -114,7 +114,7 @@ Notably, we've made the PointPillar z voxel feature input optional to maintain c In addition, we've integrated a PyTorch to ONNX converter and a Tier4 Dataset format reader for added functionality. ```bash -git clone https://github.com/autowarefoundation/mmdetection3d.git -b dev-1.x-autoware +git clone https://github.com/autowarefoundation/mmdetection3d.git cd mmdetection3d pip install -v -e . ``` @@ -157,25 +157,25 @@ python tools/train.py configs/centerpoint/centerpoint_custom.py --work-dir ./wor #### Evaluation of the trained model -For evaluation purposes, we have included a sample dataset captured from vehicle which consists of the following LiDAR sensors: +For evaluation purposes, we have included a sample dataset captured from the vehicle which consists of the following LiDAR sensors: 1 x Velodyne VLS128, 4 x Velodyne VLP16, and 1 x Robosense RS Bpearl. This dataset comprises 600 LiDAR frames and encompasses 5 distinct classes, 6905 cars, 3951 pedestrians, -75 cyclists, 162 buses, and 326 trucks 3D annotation. In the sample dataset, frames annotatated as a 2 frame, each second. You can employ this dataset for a wide range of purposes, +75 cyclists, 162 buses, and 326 trucks 3D annotation. In the sample dataset, frames are annotated as 2 frames for each second. You can employ this dataset for a wide range of purposes, including training, evaluation, and fine-tuning of models. It is organized in the Tier4Dataset format. 
##### Download the sample dataset ```bash -TODO(kaancolak): add the link to the sample dataset +wget https://autoware-files.s3.us-west-2.amazonaws.com/dataset/lidar_detection_sample_dataset.tar.gz #Extract the dataset to a folder of your choice - +tar -xvf lidar_detection_sample_dataset.tar.gz #Create a symbolic link to the dataset folder ln -s /PATH/TO/DATASET/ /PATH/TO/mmdetection3d/data/tier4_dataset/ ``` ##### Prepare dataset and evaluate trained model -Create .pkl files for the purposes of training, evaluation, and testing. +Create .pkl files for training, evaluation, and testing. ```bash @@ -188,7 +188,7 @@ Run evaluation python tools/test.py ./configs/centerpoint/test-centerpoint.py /PATH/OF/THE/CHECKPOINT --task lidar_det ``` -Evaluation result could be relatively low because of the e to variations in sensor modalities between the sample dataset +Evaluation results could be relatively low because of the e to variations in sensor modalities between the sample dataset and the training dataset. The model's training parameters are originally tailored to the NuScenes dataset, which employs a single lidar sensor positioned atop the vehicle. In contrast, the provided sample dataset comprises concatenated point clouds positioned at the base link location of the vehicle. @@ -199,14 +199,14 @@ the base link location of the vehicle. The lidar_centerpoint implementation requires two ONNX models as input the voxel encoder and the backbone-neck-head of the CenterPoint model, other aspects of the network, such as preprocessing operations, are implemented externally. Under the fork of the mmdetection3d repository, -we have included a script that converts the CenterPoint model to Autoware compitible ONNX format. +we have included a script that converts the CenterPoint model to Autoware compatible ONNX format. You can find it in `mmdetection3d/tools/centerpoint_onnx_converter.py` file. ```bash python tools/centerpoint_onnx_converter.py --cfg configs/centerpoint/centerpoint_custom.py --ckpt work_dirs/centerpoint_custom/YOUR_BEST_MODEL.pth -work-dir ./work_dirs/onnx_models ``` -#### Create the config file for custom model +#### Create the config file for the custom model Create a new config file named **centerpoint_custom.param.yaml** under the config file directory of the lidar_centerpoint node. Sets the parameters of the config file like point_cloud_range, point_feature_size, voxel_size, etc. according to the training config file. From 06ab87ba7865f56545965bd6718934f3e3210579 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Kaan=20=C3=87olak?= Date: Mon, 22 Jan 2024 18:26:15 +0900 Subject: [PATCH 04/12] refactor(lidar_centerpoint): update docs. 
MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Signed-off-by: Kaan Çolak --- perception/lidar_centerpoint/README.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/perception/lidar_centerpoint/README.md b/perception/lidar_centerpoint/README.md index 902bef0abdeac..f9803ce0af208 100644 --- a/perception/lidar_centerpoint/README.md +++ b/perception/lidar_centerpoint/README.md @@ -104,7 +104,7 @@ conda install pytorch==1.13.1 torchvision==0.14.1 pytorch-cuda=11.6 -c pytorch - pip install -U openmim mim install mmengine mim install 'mmcv>=2.0.0rc4' -mim install 'mmdet>=3.0.0' +mim install 'mmdet>=3.0.0rc5, <3.3.0' ``` **Step 2.** Install mmdetection3d forked repository @@ -141,7 +141,7 @@ python tools/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./dat #### Prepare the config file The configuration file that illustrates how to train the CenterPoint model with the NuScenes dataset is -located at mmdetection3d/configs/centerpoint/centerpoint_custom.py. This configuration file is a derived version of the +located at mmdetection3d/projects/AutowareCenterPoint/configs. This configuration file is a derived version of the centerpoint_pillar02_second_secfpn_head-circlenms_8xb4-cyclic-20e_nus-3d.py configuration file from mmdetection3D. In this custom configuration, the **use_voxel_center_z parameter** is set to **False** to deactivate the z coordinate of the voxel center, aligning with the original paper's specifications and making the model compatible with Autoware. Additionally, the filter size is set as **[32, 32]**. @@ -152,7 +152,7 @@ This includes adjustments related to preprocessing operations, training, testing #### Start training ```bash -python tools/train.py configs/centerpoint/centerpoint_custom.py --work-dir ./work_dirs/centerpoint_custom +python tools/train.py projects/AutowareCenterPoint/configs/centerpoint_custom.py --work-dir ./work_dirs/centerpoint_custom ``` #### Evaluation of the trained model @@ -185,7 +185,7 @@ python tools/create_data.py Tier4Dataset --root-path data/sample_dataset/ --out- Run evaluation ```bash -python tools/test.py ./configs/centerpoint/test-centerpoint.py /PATH/OF/THE/CHECKPOINT --task lidar_det +python tools/test.py projects/AutowareCenterPoint/configs/centerpoint_custom_test.py /PATH/OF/THE/CHECKPOINT --task lidar_det ``` Evaluation results could be relatively low because of the e to variations in sensor modalities between the sample dataset @@ -200,10 +200,10 @@ the base link location of the vehicle. The lidar_centerpoint implementation requires two ONNX models as input the voxel encoder and the backbone-neck-head of the CenterPoint model, other aspects of the network, such as preprocessing operations, are implemented externally. Under the fork of the mmdetection3d repository, we have included a script that converts the CenterPoint model to Autoware compatible ONNX format. -You can find it in `mmdetection3d/tools/centerpoint_onnx_converter.py` file. +You can find it in `mmdetection3d/projects/AutowareCenterPoint` file. 
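+
+After the converter has been run (the command is shown below), it can be helpful to confirm that the two exported graphs load
+cleanly and to print their input and output shapes, since those values need to line up with the parameters set in the
+lidar_centerpoint config file later on. This is an optional sanity check and not part of the converter itself; the file names
+here are an assumption patterned after the pretrained models linked earlier in this document, so adjust the paths to whatever
+your `--work-dir` actually contains.
+
+```python
+# Optional sanity check: print the I/O signature of the exported ONNX models.
+# Requires `pip install onnxruntime`; the paths below are assumptions, adjust them
+# to the files written into your --work-dir.
+import onnxruntime as ort
+
+for path in [
+    "./work_dirs/onnx_models/pts_voxel_encoder_centerpoint.onnx",
+    "./work_dirs/onnx_models/pts_backbone_neck_head_centerpoint.onnx",
+]:
+    session = ort.InferenceSession(path, providers=["CPUExecutionProvider"])
+    print(path)
+    for tensor in session.get_inputs():
+        print("  input :", tensor.name, tensor.shape)
+    for tensor in session.get_outputs():
+        print("  output:", tensor.name, tensor.shape)
+```
+
+The conversion itself is run as follows: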
```bash -python tools/centerpoint_onnx_converter.py --cfg configs/centerpoint/centerpoint_custom.py --ckpt work_dirs/centerpoint_custom/YOUR_BEST_MODEL.pth -work-dir ./work_dirs/onnx_models +python projects/AutowareCenterPoint/centerpoint_onnx_converter.py --cfg projects/AutowareCenterPoint/configs/centerpoint_custom.py --ckpt work_dirs/centerpoint_custom/YOUR_BEST_MODEL.pth --work-dir ./work_dirs/onnx_models ``` #### Create the config file for the custom model From b7d671a305ad04d50e46b68b08f2e0c9492580f5 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Kaan=20=C3=87olak?= Date: Tue, 27 Feb 2024 18:50:53 +0300 Subject: [PATCH 05/12] fix(lidar_centerpoint): change dataset name MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Signed-off-by: Kaan Çolak --- perception/lidar_centerpoint/README.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/perception/lidar_centerpoint/README.md b/perception/lidar_centerpoint/README.md index f9803ce0af208..e51c43c626991 100644 --- a/perception/lidar_centerpoint/README.md +++ b/perception/lidar_centerpoint/README.md @@ -111,7 +111,7 @@ mim install 'mmdet>=3.0.0rc5, <3.3.0' Introduced several valuable enhancements in our fork of the mmdetection3d repository. Notably, we've made the PointPillar z voxel feature input optional to maintain compatibility with the original paper. -In addition, we've integrated a PyTorch to ONNX converter and a Tier4 Dataset format reader for added functionality. +In addition, we've integrated a PyTorch to ONNX converter and a T4 format reader for added functionality. ```bash git clone https://github.com/autowarefoundation/mmdetection3d.git @@ -160,7 +160,7 @@ python tools/train.py projects/AutowareCenterPoint/configs/centerpoint_custom.py For evaluation purposes, we have included a sample dataset captured from the vehicle which consists of the following LiDAR sensors: 1 x Velodyne VLS128, 4 x Velodyne VLP16, and 1 x Robosense RS Bpearl. This dataset comprises 600 LiDAR frames and encompasses 5 distinct classes, 6905 cars, 3951 pedestrians, 75 cyclists, 162 buses, and 326 trucks 3D annotation. In the sample dataset, frames are annotated as 2 frames for each second. You can employ this dataset for a wide range of purposes, -including training, evaluation, and fine-tuning of models. It is organized in the Tier4Dataset format. +including training, evaluation, and fine-tuning of models. It is organized in the T4 format. ##### Download the sample dataset @@ -179,7 +179,7 @@ Create .pkl files for training, evaluation, and testing. 
```bash -python tools/create_data.py Tier4Dataset --root-path data/sample_dataset/ --out-dir data/sample_dataset/ --extra-tag Tier4Dataset --version sample_dataset --annotation-hz 2 +python tools/create_data.py T4Dataset --root-path data/sample_dataset/ --out-dir data/sample_dataset/ --extra-tag T4Dataset --version sample_dataset --annotation-hz 2 ``` Run evaluation From 808a7c7cd7d2a604535820f578db4d512f57da6d Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Kaan=20=C3=87olak?= Date: Wed, 28 Feb 2024 11:57:05 +0300 Subject: [PATCH 06/12] fix(lidar_centerpoint): add docker instruction MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Signed-off-by: Kaan Çolak --- perception/lidar_centerpoint/README.md | 19 +++++++++++++++++++ 1 file changed, 19 insertions(+) diff --git a/perception/lidar_centerpoint/README.md b/perception/lidar_centerpoint/README.md index e51c43c626991..2fd56c6011d02 100644 --- a/perception/lidar_centerpoint/README.md +++ b/perception/lidar_centerpoint/README.md @@ -119,6 +119,25 @@ cd mmdetection3d pip install -v -e . ``` +#### Use Training Repository with Docker +Alternatively, you can use Docker to run the mmdetection3d repository.We provide a Dockerfile to build a Docker image with the mmdetection3d repository and its dependencies. + +Clone fork of the mmdetection3d repository +```bash +git clone https://github.com/autowarefoundation/mmdetection3d.git +``` + +Build the Docker image by running the following command +```bash +cd mmdetection3d +docker build -t mmdetection3d -f docker/Dockerfile . +``` + +Run the Docker container +```bash +docker run --gpus all --shm-size=8g -it -v {DATA_DIR}:/mmdetection3d/data mmdetection3d +``` + ### Preparing NuScenes dataset for training **Step 1.** Download the NuScenes dataset from the [official website](https://www.nuscenes.org/download) and extract the dataset to a folder of your choice. From 3345fd8fae61cc9baf532937c428ac01589c09c1 Mon Sep 17 00:00:00 2001 From: "pre-commit-ci[bot]" <66853113+pre-commit-ci[bot]@users.noreply.github.com> Date: Wed, 28 Feb 2024 08:59:04 +0000 Subject: [PATCH 07/12] style(pre-commit): autofix --- perception/lidar_centerpoint/README.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/perception/lidar_centerpoint/README.md b/perception/lidar_centerpoint/README.md index 2fd56c6011d02..c09d6ad5c721e 100644 --- a/perception/lidar_centerpoint/README.md +++ b/perception/lidar_centerpoint/README.md @@ -120,20 +120,24 @@ pip install -v -e . ``` #### Use Training Repository with Docker + Alternatively, you can use Docker to run the mmdetection3d repository.We provide a Dockerfile to build a Docker image with the mmdetection3d repository and its dependencies. Clone fork of the mmdetection3d repository + ```bash git clone https://github.com/autowarefoundation/mmdetection3d.git ``` Build the Docker image by running the following command + ```bash cd mmdetection3d docker build -t mmdetection3d -f docker/Dockerfile . 
``` Run the Docker container + ```bash docker run --gpus all --shm-size=8g -it -v {DATA_DIR}:/mmdetection3d/data mmdetection3d ``` From 2f780bd65a6f31d9cd8f89124dd109edba7124ef Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Kaan=20=C3=87olak?= Date: Tue, 26 Mar 2024 15:30:27 +0300 Subject: [PATCH 08/12] fix(lidar_centerpoint): fix spell MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Signed-off-by: Kaan Çolak --- perception/lidar_centerpoint/README.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/perception/lidar_centerpoint/README.md b/perception/lidar_centerpoint/README.md index c09d6ad5c721e..692bf27d8059e 100644 --- a/perception/lidar_centerpoint/README.md +++ b/perception/lidar_centerpoint/README.md @@ -146,6 +146,8 @@ docker run --gpus all --shm-size=8g -it -v {DATA_DIR}:/mmdetection3d/data mmdete **Step 1.** Download the NuScenes dataset from the [official website](https://www.nuscenes.org/download) and extract the dataset to a folder of your choice. +**Note:** The NuScenes dataset is large and requires significant disk space. Ensure you have enough storage available before proceeding. + **Step 2.** Create a symbolic link to the dataset folder ```bash @@ -165,7 +167,7 @@ python tools/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./dat The configuration file that illustrates how to train the CenterPoint model with the NuScenes dataset is located at mmdetection3d/projects/AutowareCenterPoint/configs. This configuration file is a derived version of the -centerpoint_pillar02_second_secfpn_head-circlenms_8xb4-cyclic-20e_nus-3d.py configuration file from mmdetection3D. +`centerpoint_pillar02_second_secfpn_head-circlenms_8xb4-cyclic-20e_nus-3d.py` configuration file from mmdetection3D. In this custom configuration, the **use_voxel_center_z parameter** is set to **False** to deactivate the z coordinate of the voxel center, aligning with the original paper's specifications and making the model compatible with Autoware. Additionally, the filter size is set as **[32, 32]**. From ef15632554a4f58b5b479f56fb599d86c0eb3e08 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?M=2E=20Fatih=20C=C4=B1r=C4=B1t?= Date: Tue, 26 Mar 2024 15:50:59 +0300 Subject: [PATCH 09/12] small fixes and spellcheck MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Signed-off-by: M. Fatih Cırıt --- perception/lidar_centerpoint/README.md | 19 +++++++++---------- 1 file changed, 9 insertions(+), 10 deletions(-) diff --git a/perception/lidar_centerpoint/README.md b/perception/lidar_centerpoint/README.md index 692bf27d8059e..ff6bc4c0c0abb 100644 --- a/perception/lidar_centerpoint/README.md +++ b/perception/lidar_centerpoint/README.md @@ -62,7 +62,7 @@ ros2 launch lidar_centerpoint lidar_centerpoint.launch.xml model_name:=centerpoi You can download the onnx format of trained models by clicking on the links below. 
-- Centerpoint : [pts_voxel_encoder_centerpoint.onnx](https://awf.ml.dev.web.auto/perception/models/centerpoint/v2/pts_voxel_encoder_centerpoint.onnx), [pts_backbone_neck_head_centerpoint.onnx](https://awf.ml.dev.web.auto/perception/models/centerpoint/v2/pts_backbone_neck_head_centerpoint.onnx) +- Centerpoint: [pts_voxel_encoder_centerpoint.onnx](https://awf.ml.dev.web.auto/perception/models/centerpoint/v2/pts_voxel_encoder_centerpoint.onnx), [pts_backbone_neck_head_centerpoint.onnx](https://awf.ml.dev.web.auto/perception/models/centerpoint/v2/pts_backbone_neck_head_centerpoint.onnx) - Centerpoint tiny: [pts_voxel_encoder_centerpoint_tiny.onnx](https://awf.ml.dev.web.auto/perception/models/centerpoint/v2/pts_voxel_encoder_centerpoint_tiny.onnx), [pts_backbone_neck_head_centerpoint_tiny.onnx](https://awf.ml.dev.web.auto/perception/models/centerpoint/v2/pts_backbone_neck_head_centerpoint_tiny.onnx) `Centerpoint` was trained in `nuScenes` (~28k lidar frames) [8] and TIER IV's internal database (~11k lidar frames) for 60 epochs. @@ -121,7 +121,7 @@ pip install -v -e . #### Use Training Repository with Docker -Alternatively, you can use Docker to run the mmdetection3d repository.We provide a Dockerfile to build a Docker image with the mmdetection3d repository and its dependencies. +Alternatively, you can use Docker to run the mmdetection3d repository. We provide a Dockerfile to build a Docker image with the mmdetection3d repository and its dependencies. Clone fork of the mmdetection3d repository @@ -129,14 +129,14 @@ Clone fork of the mmdetection3d repository git clone https://github.com/autowarefoundation/mmdetection3d.git ``` -Build the Docker image by running the following command +Build the Docker image by running the following command: ```bash cd mmdetection3d docker build -t mmdetection3d -f docker/Dockerfile . ``` -Run the Docker container +Run the Docker container: ```bash docker run --gpus all --shm-size=8g -it -v {DATA_DIR}:/mmdetection3d/data mmdetection3d @@ -166,9 +166,10 @@ python tools/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./dat #### Prepare the config file The configuration file that illustrates how to train the CenterPoint model with the NuScenes dataset is -located at mmdetection3d/projects/AutowareCenterPoint/configs. This configuration file is a derived version of the -`centerpoint_pillar02_second_secfpn_head-circlenms_8xb4-cyclic-20e_nus-3d.py` configuration file from mmdetection3D. -In this custom configuration, the **use_voxel_center_z parameter** is set to **False** to deactivate the z coordinate of the voxel center, +located at `mmdetection3d/projects/AutowareCenterPoint/configs`. This configuration file is a derived version of +[this centerpoint configuration file](https://github.com/autowarefoundation/mmdetection3d/blob/5c0613be29bd2e51771ec5e046d89ba3089887c7/configs/centerpoint/centerpoint_pillar02_second_secfpn_head-circlenms_8xb4-cyclic-20e_nus-3d.py) +from mmdetection3D. +In this custom configuration, the **use_voxel_center_z parameter** is set as **False** to deactivate the z coordinate of the voxel center, aligning with the original paper's specifications and making the model compatible with Autoware. Additionally, the filter size is set as **[32, 32]**. The CenterPoint model can be tailored to your specific requirements by modifying various parameters within the configuration file. @@ -190,7 +191,6 @@ including training, evaluation, and fine-tuning of models. 
It is organized in th ##### Download the sample dataset ```bash - wget https://autoware-files.s3.us-west-2.amazonaws.com/dataset/lidar_detection_sample_dataset.tar.gz #Extract the dataset to a folder of your choice tar -xvf lidar_detection_sample_dataset.tar.gz @@ -200,10 +200,9 @@ ln -s /PATH/TO/DATASET/ /PATH/TO/mmdetection3d/data/tier4_dataset/ ##### Prepare dataset and evaluate trained model -Create .pkl files for training, evaluation, and testing. +Create `.pkl` files for training, evaluation, and testing. ```bash - python tools/create_data.py T4Dataset --root-path data/sample_dataset/ --out-dir data/sample_dataset/ --extra-tag T4Dataset --version sample_dataset --annotation-hz 2 ``` From b3b609aee4fb9f258732d3e88f9cefa4d11925e3 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Kaan=20=C3=87olak?= Date: Fri, 24 May 2024 11:46:14 +0300 Subject: [PATCH 10/12] docs(lidar_centerpoint): add version --- perception/lidar_centerpoint/README.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/perception/lidar_centerpoint/README.md b/perception/lidar_centerpoint/README.md index 5435b14c54034..ee212f59638da 100644 --- a/perception/lidar_centerpoint/README.md +++ b/perception/lidar_centerpoint/README.md @@ -204,6 +204,8 @@ ln -s /PATH/TO/DATASET/ /PATH/TO/mmdetection3d/data/tier4_dataset/ Create `.pkl` files for training, evaluation, and testing. +The dataset was formatted according to T4Dataset specifications, with 'sample_dataset' designated as one of its versions. + ```bash python tools/create_data.py T4Dataset --root-path data/sample_dataset/ --out-dir data/sample_dataset/ --extra-tag T4Dataset --version sample_dataset --annotation-hz 2 ``` From e3f719491841675770d70742ffba9ecb63a15cb8 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Kaan=20=C3=87olak?= Date: Tue, 4 Jun 2024 11:07:26 +0300 Subject: [PATCH 11/12] Update README.md docs(lidar_centerpoint): add thanks --- perception/lidar_centerpoint/README.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/perception/lidar_centerpoint/README.md b/perception/lidar_centerpoint/README.md index ee212f59638da..39893d05cf0ea 100644 --- a/perception/lidar_centerpoint/README.md +++ b/perception/lidar_centerpoint/README.md @@ -340,6 +340,10 @@ Example: [v1-encoder-centerpoint-tiny]: https://awf.ml.dev.web.auto/perception/models/centerpoint/v1/pts_voxel_encoder_centerpoint_tiny.onnx [v1-head-centerpoint-tiny]: https://awf.ml.dev.web.auto/perception/models/centerpoint/v1/pts_backbone_neck_head_centerpoint_tiny.onnx +## Acknowledgment: deepen.ai's 3D Annotation Tools Contribution + +Special thanks to deepen.ai for providing their 3D Annotation tools, which have been instrumental in creating our sample dataset. + ## Legal Notice _The nuScenes dataset is released publicly for non-commercial use under the Creative From 26f028ca04f246b3f58a5c105de5adba67c30e30 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Kaan=20=C3=87olak?= Date: Tue, 4 Jun 2024 11:10:09 +0300 Subject: [PATCH 12/12] docs(lidar_centerpoint): add link --- perception/lidar_centerpoint/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/perception/lidar_centerpoint/README.md b/perception/lidar_centerpoint/README.md index 39893d05cf0ea..ff672c79bd253 100644 --- a/perception/lidar_centerpoint/README.md +++ b/perception/lidar_centerpoint/README.md @@ -342,7 +342,7 @@ Example: ## Acknowledgment: deepen.ai's 3D Annotation Tools Contribution -Special thanks to deepen.ai for providing their 3D Annotation tools, which have been instrumental in creating our sample dataset. 
+Special thanks to [Deepen AI](https://www.deepen.ai/) for providing their 3D Annotation tools, which have been instrumental in creating our sample dataset. ## Legal Notice