`lidar_centerpoint` generates a `ply` file in the provided `detections_path`, which contains the detections as triangle meshes.
These detections can be visualized by most 3D tools, but we also integrate a visualization UI using `Open3D` which is launched alongside `lidar_centerpoint`.

## Training CenterPoint Model and Deploying It to Autoware

### Overview

This guide provides instructions on training a CenterPoint model using the **mmdetection3d** repository
and seamlessly deploying it within Autoware.


### Installation

#### Install prerequisites
**Step 1.** Download and install Miniconda from the [official website](https://docs.conda.io/en/latest/miniconda.html).

**Step 2.** Create a conda virtual environment and activate it

```bash
conda create --name train-centerpoint python=3.8 -y
conda activate train-centerpoint
```

**Step 3.** Install PyTorch

Please ensure you have PyTorch installed with CUDA 11.6 support, as this is a requirement of the current version of Autoware.

```bash
conda install pytorch==1.13.1 torchvision==0.14.1 pytorch-cuda=11.6 -c pytorch -c nvidia
```
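
Before moving on, you can optionally verify that PyTorch was installed with working CUDA support; a quick sanity check:

```bash
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
```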

#### Install mmdetection3d

**Step 1.** Install MMEngine, MMCV and MMDetection using MIM

```bash
pip install -U openmim
mim install mmengine
mim install 'mmcv>=2.0.0rc4'
mim install 'mmdet>=3.0.0'
```

**Step 2.** Install the forked mmdetection3d repository

We have introduced several valuable enhancements in our fork of the mmdetection3d repository.
Notably, we've made the PointPillar z voxel feature input optional to maintain compatibility with the original paper.
In addition, we've integrated a PyTorch to ONNX converter and a Tier4 Dataset format reader for added functionality.

```bash
git clone https://github.com/autowarefoundation/mmdetection3d.git -b dev-1.x-autoware
cd mmdetection3d
pip install -v -e .
```
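
If the editable install succeeded, the package should now be importable; a quick check:

```bash
python -c "import mmdet3d; print(mmdet3d.__version__)"
```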

### Preparing NuScenes dataset for training

**Step 1.** Download the NuScenes dataset from the [official website](https://www.nuscenes.org/download) and extract the dataset to a folder of your choice.

**Step 2.** Create a symbolic link to the dataset folder

```bash
ln -s /path/to/nuscenes/dataset/ /path/to/mmdetection3d/data/nuscenes/
```

**Step 3.** Prepare the NuScenes data by running:

```bash
cd mmdetection3d
python tools/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./data/nuscenes --extra-tag nuscenes
```
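
This command generates the annotation info files and the ground-truth database that the dataloaders consume. After it finishes, `data/nuscenes` should contain entries similar to the listing below (file names follow current mmdetection3d conventions and may vary between versions):

```bash
ls data/nuscenes
# nuscenes_infos_train.pkl   nuscenes_infos_val.pkl
# nuscenes_dbinfos_train.pkl nuscenes_gt_database/ ...
```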

### Training CenterPoint with NuScenes Dataset

#### Prepare the config file

The configuration file that illustrates how to train the CenterPoint model with the NuScenes dataset is
located at `mmdetection3d/configs/centerpoint/centerpoint_custom.py`. This configuration file is a derived version of the
`centerpoint_pillar02_second_secfpn_head-circlenms_8xb4-cyclic-20e_nus-3d.py` configuration file from mmdetection3d.

In this custom configuration, the **use_voxel_center_z** parameter is set to **False** to deactivate the z coordinate of the voxel center,
aligning with the original paper's specifications and making the model compatible with Autoware. Additionally, the filter size is set to **[32, 32]**.

The CenterPoint model can be tailored to your specific requirements by modifying various parameters within the configuration file.
This includes adjustments related to preprocessing operations, training, testing, model architecture, dataset, optimizer, learning rate scheduler, and more.
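
Individual settings can also be overridden at training time via `--cfg-options`, without editing the config file. For example (the keys below are illustrative and must exist in your config):

```bash
python tools/train.py configs/centerpoint/centerpoint_custom.py \
    --cfg-options train_cfg.max_epochs=20 train_dataloader.batch_size=2
```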

#### Start training

```bash
python tools/train.py configs/centerpoint/centerpoint_custom.py --work-dir ./work_dirs/centerpoint_custom
```
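
If several GPUs are available, training can be sped up with mmdetection3d's distributed launcher (a sketch; `4` is an assumed GPU count):

```bash
bash tools/dist_train.sh configs/centerpoint/centerpoint_custom.py 4 --work-dir ./work_dirs/centerpoint_custom
```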

#### Evaluation of the trained model

For evaluation purposes, we have included a sample dataset captured from a vehicle equipped with the following LiDAR sensors:
1 x Velodyne VLS128, 4 x Velodyne VLP16, and 1 x Robosense RS Bpearl. This dataset comprises 600 LiDAR frames and encompasses 5 distinct classes with 3D annotations:
6905 cars, 3951 pedestrians, 75 cyclists, 162 buses, and 326 trucks. Frames are annotated at 2 Hz, i.e. 2 annotated frames per second. You can employ this dataset for a wide range of purposes,
including training, evaluation, and fine-tuning of models. It is organized in the Tier4Dataset format.

##### Download the sample dataset

```bash
# TODO(kaancolak): add the link to the sample dataset

# Extract the dataset to a folder of your choice

# Create a symbolic link to the dataset folder
ln -s /PATH/TO/DATASET/ /PATH/TO/mmdetection3d/data/tier4_dataset/
```

##### Prepare dataset and evaluate trained model

Create .pkl files for the purposes of training, evaluation, and testing.

```bash
python tools/create_data.py Tier4Dataset --root-path data/sample_dataset/ --out-dir data/sample_dataset/ --extra-tag Tier4Dataset --version sample_dataset --annotation-hz 2
```

Run the evaluation:

```bash
python tools/test.py ./configs/centerpoint/test-centerpoint.py /PATH/OF/THE/CHECKPOINT --task lidar_det
```
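
To additionally dump visualizations of the predictions for manual inspection, the upstream test script accepts a `--show-dir` argument (treat the exact flag as an assumption and confirm with `python tools/test.py --help`):

```bash
python tools/test.py ./configs/centerpoint/test-centerpoint.py /PATH/OF/THE/CHECKPOINT --task lidar_det --show-dir ./work_dirs/eval_vis
```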

The evaluation results may be relatively low due to variations in sensor modalities between the sample dataset
and the training dataset. The model's training parameters were originally tailored to the NuScenes dataset, which employs a single lidar
sensor positioned atop the vehicle. In contrast, the provided sample dataset comprises concatenated point clouds expressed at
the base link location of the vehicle.
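
One way to narrow this gap is to fine-tune the NuScenes-pretrained checkpoint on the sample dataset instead of evaluating it as-is, initializing the weights via `load_from` (a sketch; the config file name below is hypothetical and should point to your Tier4Dataset-based training config):

```bash
# config name below is hypothetical; substitute your Tier4Dataset training config
python tools/train.py configs/centerpoint/centerpoint_tier4.py \
    --cfg-options load_from=work_dirs/centerpoint_custom/YOUR_BEST_MODEL.pth
```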

### Deploying CenterPoint model to Autoware

#### Convert CenterPoint PyTorch model to ONNX Format

The lidar_centerpoint implementation requires two ONNX models as input: the voxel encoder and the backbone-neck-head of the CenterPoint model. Other aspects of the network,
such as preprocessing operations, are implemented externally. In our fork of the mmdetection3d repository,
we have included a script that converts the CenterPoint model to an Autoware-compatible ONNX format.

You can find the script at `mmdetection3d/tools/centerpoint_onnx_converter.py`.


```bash
python tools/centerpoint_onnx_converter.py --cfg configs/centerpoint/centerpoint_custom.py --ckpt work_dirs/centerpoint_custom/YOUR_BEST_MODEL.pth --work-dir ./work_dirs/onnx_models
```
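
If the conversion succeeds, the work directory should contain the two ONNX files that the node loads. The file names below are illustrative; the node expects a voxel encoder and a backbone-neck-head pair matching the `model_name` you pass at launch:

```bash
ls ./work_dirs/onnx_models
# pts_voxel_encoder_centerpoint_custom.onnx
# pts_backbone_neck_head_centerpoint_custom.onnx
```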

#### Create the config file for the custom model

Create a new config file named **centerpoint_custom.param.yaml** under the config file directory of the lidar_centerpoint node. Set the parameters of the config file, such as
`point_cloud_range`, `point_feature_size`, `voxel_size`, etc., according to the training config file.

```yaml
/**:
  ros__parameters:
    class_names: ["CAR", "TRUCK", "BUS", "BICYCLE", "PEDESTRIAN"]
    point_feature_size: 4
    max_voxel_size: 40000
    point_cloud_range: [-51.2, -51.2, -3.0, 51.2, 51.2, 5.0]
    voxel_size: [0.2, 0.2, 8.0]
    downsample_factor: 1
    encoder_in_feature_size: 9
    # post-process params
    circle_nms_dist_threshold: 0.5
    iou_nms_target_class_names: ["CAR"]
    iou_nms_search_distance_2d: 10.0
    iou_nms_threshold: 0.1
    yaw_norm_thresholds: [0.3, 0.3, 0.3, 0.3, 0.0]
```

#### Launch the lidar_centerpoint node

```bash
cd /YOUR/AUTOWARE/PATH/Autoware
source install/setup.bash
ros2 launch lidar_centerpoint lidar_centerpoint.launch.xml model_name:=centerpoint_custom model_path:=/PATH/TO/ONNX/FILE/
```
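
Once the node is up, you can sanity-check that it is running and publishing (node and topic names depend on your launch configuration; the filters below are assumptions):

```bash
ros2 node list | grep -i centerpoint
ros2 topic list | grep -i objects
```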


### Changelog

#### v1 (2022/07/06)

## Legal Notice
*The nuScenes dataset is released publicly for non-commercial use under the Creative
Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License.
Additional Terms of Use can be found at https://www.nuscenes.org/terms-of-use.
To inquire about a commercial license please contact [email protected].*
