refactor(lidar_centerpoint): add training docs #5570
Merged
xmfcx
merged 16 commits into
autowarefoundation:main
from
kaancolak:refactor/lidar_centerpoint-add-training-docs
Jun 13, 2024
Changes from all commits (16 commits)
- 6ea3577 refactor(lidar_centerpoint): add training docs (kaancolak)
- 20ef77c style(pre-commit): autofix (pre-commit-ci[bot])
- 48afb5c refactor(lidar_centerpoint): and link and small fix (kaancolak)
- 06ab87b refactor(lidar_centerpoint): update docs. (kaancolak)
- b7d671a fix(lidar_centerpoint): change dataset name (kaancolak)
- 808a7c7 fix(lidar_centerpoint): add docker instruction (kaancolak)
- 3345fd8 style(pre-commit): autofix (pre-commit-ci[bot])
- 2f780bd fix(lidar_centerpoint): fix spell (kaancolak)
- ef15632 small fixes and spellcheck
- 983c800 Merge branch 'main' into refactor/lidar_centerpoint-add-training-docs (kaancolak)
- b3b609a docs(lidar_centerpoint): add version (kaancolak)
- 8475055 Merge branch 'main' into refactor/lidar_centerpoint-add-training-docs (kaancolak)
- e3f7194 Update README.md (kaancolak)
- 26f028c docs(lidar_centerpoint): add link (kaancolak)
- b0fad69 Merge branch 'main' into refactor/lidar_centerpoint-add-training-docs (kaancolak)
- bb2a69e Merge branch 'main' into refactor/lidar_centerpoint-add-training-docs (kaancolak)
@@ -64,12 +64,207 @@ ros2 launch lidar_centerpoint lidar_centerpoint.launch.xml model_name:=centerpoi

You can download the onnx format of trained models by clicking on the links below.

- Centerpoint: [pts_voxel_encoder_centerpoint.onnx](https://awf.ml.dev.web.auto/perception/models/centerpoint/v2/pts_voxel_encoder_centerpoint.onnx), [pts_backbone_neck_head_centerpoint.onnx](https://awf.ml.dev.web.auto/perception/models/centerpoint/v2/pts_backbone_neck_head_centerpoint.onnx)
- Centerpoint tiny: [pts_voxel_encoder_centerpoint_tiny.onnx](https://awf.ml.dev.web.auto/perception/models/centerpoint/v2/pts_voxel_encoder_centerpoint_tiny.onnx), [pts_backbone_neck_head_centerpoint_tiny.onnx](https://awf.ml.dev.web.auto/perception/models/centerpoint/v2/pts_backbone_neck_head_centerpoint_tiny.onnx)

`Centerpoint` was trained on `nuScenes` (~28k lidar frames) [8] and TIER IV's internal database (~11k lidar frames) for 60 epochs.
`Centerpoint tiny` was trained on `Argoverse 2` (~110k lidar frames) [9] and TIER IV's internal database (~11k lidar frames) for 20 epochs.
## Training CenterPoint Model and Deploying to Autoware

### Overview

This guide provides instructions on training a CenterPoint model using the **mmdetection3d** repository
and seamlessly deploying it within Autoware.

### Installation

#### Install prerequisites

**Step 1.** Download and install Miniconda from the [official website](https://mmpretrain.readthedocs.io/en/latest/get_started.html).

**Step 2.** Create a conda virtual environment and activate it

```bash
conda create --name train-centerpoint python=3.8 -y
conda activate train-centerpoint
```
**Step 3.** Install PyTorch

Please ensure that the PyTorch build you install is compatible with CUDA 11.6, as this is a requirement of current Autoware.

```bash
conda install pytorch==1.13.1 torchvision==0.14.1 pytorch-cuda=11.6 -c pytorch -c nvidia
```
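
A quick sanity check (a minimal sketch; the script name is only illustrative) can confirm, inside the `train-centerpoint` environment, that PyTorch was built against CUDA 11.6 and can see the GPU:

```python
# check_torch.py - verify the PyTorch / CUDA setup before training
import torch

print("PyTorch version:", torch.__version__)          # expected: 1.13.1
print("CUDA build version:", torch.version.cuda)       # expected: 11.6
print("CUDA available:", torch.cuda.is_available())    # should be True on a GPU machine
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```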

#### Install mmdetection3d

**Step 1.** Install MMEngine, MMCV, and MMDetection using MIM

```bash
pip install -U openmim
mim install mmengine
mim install 'mmcv>=2.0.0rc4'
mim install 'mmdet>=3.0.0rc5, <3.3.0'
```

**Step 2.** Install the forked mmdetection3d repository

We have introduced several valuable enhancements in our fork of the mmdetection3d repository.
Notably, we've made the PointPillar z voxel feature input optional to maintain compatibility with the original paper.
In addition, we've integrated a PyTorch to ONNX converter and a T4 format reader for added functionality.

```bash
git clone https://github.com/autowarefoundation/mmdetection3d.git
cd mmdetection3d
pip install -v -e .
```
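
As another optional sanity check (a minimal sketch, assuming the editable install above succeeded), you can confirm that the OpenMMLab packages and the fork import cleanly:

```python
# check_mmdet3d.py - confirm that the OpenMMLab stack and the mmdetection3d fork are importable
import mmengine
import mmcv
import mmdet
import mmdet3d

print("mmengine:", mmengine.__version__)
print("mmcv:", mmcv.__version__)
print("mmdet:", mmdet.__version__)
print("mmdet3d:", mmdet3d.__version__)
```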

#### Use Training Repository with Docker

Alternatively, you can use Docker to run the mmdetection3d repository. We provide a Dockerfile to build a Docker image with the mmdetection3d repository and its dependencies.

Clone the fork of the mmdetection3d repository:

```bash
git clone https://github.com/autowarefoundation/mmdetection3d.git
```

Build the Docker image by running the following command:

```bash
cd mmdetection3d
docker build -t mmdetection3d -f docker/Dockerfile .
```

Run the Docker container:

```bash
docker run --gpus all --shm-size=8g -it -v {DATA_DIR}:/mmdetection3d/data mmdetection3d
```

### Preparing NuScenes dataset for training

**Step 1.** Download the NuScenes dataset from the [official website](https://www.nuscenes.org/download) and extract the dataset to a folder of your choice.

**Note:** The NuScenes dataset is large and requires significant disk space. Ensure you have enough storage available before proceeding.

**Step 2.** Create a symbolic link to the dataset folder

```bash
ln -s /path/to/nuscenes/dataset/ /path/to/mmdetection3d/data/nuscenes/
```

**Step 3.** Prepare the NuScenes data by running:

```bash
cd mmdetection3d
python tools/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./data/nuscenes --extra-tag nuscenes
```

### Training CenterPoint with NuScenes Dataset

#### Prepare the config file

The configuration file that illustrates how to train the CenterPoint model with the NuScenes dataset is
located at `mmdetection3d/projects/AutowareCenterPoint/configs`. This configuration file is a derived version of
[this centerpoint configuration file](https://github.com/autowarefoundation/mmdetection3d/blob/5c0613be29bd2e51771ec5e046d89ba3089887c7/configs/centerpoint/centerpoint_pillar02_second_secfpn_head-circlenms_8xb4-cyclic-20e_nus-3d.py)
from mmdetection3d.
In this custom configuration, the **use_voxel_center_z parameter** is set to **False** to deactivate the z coordinate of the voxel center,
aligning with the original paper's specifications and making the model compatible with Autoware. Additionally, the filter size is set to **[32, 32]**.

The CenterPoint model can be tailored to your specific requirements by modifying various parameters within the configuration file.
This includes adjustments related to preprocessing operations, training, testing, model architecture, dataset, optimizer, learning rate scheduler, and more.
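
For illustration only, a derived config in the mmengine style could look like the minimal sketch below. The `_base_` path and the keys `use_voxel_center_z` and `feat_channels` are assumptions here; verify them against the actual file under `projects/AutowareCenterPoint/configs`. The values simply mirror the settings described above.

```python
# Hypothetical excerpt of a derived CenterPoint config (mmengine-style .py config).
# Key names and the _base_ path are illustrative; check the file shipped in
# projects/AutowareCenterPoint/configs for the exact structure.
_base_ = [
    "../../../configs/centerpoint/centerpoint_pillar02_second_secfpn_head-circlenms_8xb4-cyclic-20e_nus-3d.py"
]

model = dict(
    pts_voxel_encoder=dict(
        feat_channels=[32, 32],    # filter size set to [32, 32]
        use_voxel_center_z=False,  # deactivate the z coordinate of the voxel center
    )
)
```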

#### Start training

```bash
python tools/train.py projects/AutowareCenterPoint/configs/centerpoint_custom.py --work-dir ./work_dirs/centerpoint_custom
```

#### Evaluation of the trained model

For evaluation purposes, we have included a sample dataset captured from a vehicle equipped with the following LiDAR sensors:
1 x Velodyne VLS128, 4 x Velodyne VLP16, and 1 x Robosense RS Bpearl. The dataset comprises 600 LiDAR frames with 3D annotations for 5 distinct classes:
6905 cars, 3951 pedestrians, 75 cyclists, 162 buses, and 326 trucks. In the sample dataset, frames are annotated at 2 frames per second.
You can employ this dataset for a wide range of purposes, including training, evaluation, and fine-tuning of models. It is organized in the T4 format.

##### Download the sample dataset

```bash
wget https://autoware-files.s3.us-west-2.amazonaws.com/dataset/lidar_detection_sample_dataset.tar.gz
# Extract the dataset to a folder of your choice
tar -xvf lidar_detection_sample_dataset.tar.gz
# Create a symbolic link to the dataset folder
ln -s /PATH/TO/DATASET/ /PATH/TO/mmdetection3d/data/tier4_dataset/
```

##### Prepare dataset and evaluate trained model

Create `.pkl` files for training, evaluation, and testing.

The dataset was formatted according to T4Dataset specifications, with 'sample_dataset' designated as one of its versions.

```bash
python tools/create_data.py T4Dataset --root-path data/sample_dataset/ --out-dir data/sample_dataset/ --extra-tag T4Dataset --version sample_dataset --annotation-hz 2
```

Run the evaluation:

```bash
python tools/test.py projects/AutowareCenterPoint/configs/centerpoint_custom_test.py /PATH/OF/THE/CHECKPOINT --task lidar_det
```

Evaluation results may be relatively low due to variations in sensor modalities between the sample dataset
and the training dataset. The model's training parameters are originally tailored to the NuScenes dataset, which employs a single lidar
sensor positioned atop the vehicle. In contrast, the provided sample dataset comprises concatenated point clouds positioned at
the base link location of the vehicle.

### Deploying CenterPoint model to Autoware

#### Convert CenterPoint PyTorch model to ONNX Format

The lidar_centerpoint implementation requires two ONNX models as input: the voxel encoder and the backbone-neck-head of the CenterPoint model. Other aspects of the network,
such as preprocessing operations, are implemented externally. Under the fork of the mmdetection3d repository,
we have included a script that converts the CenterPoint model to an Autoware-compatible ONNX format.
You can find it in the `mmdetection3d/projects/AutowareCenterPoint` directory.

```bash
python projects/AutowareCenterPoint/centerpoint_onnx_converter.py --cfg projects/AutowareCenterPoint/configs/centerpoint_custom.py --ckpt work_dirs/centerpoint_custom/YOUR_BEST_MODEL.pth --work-dir ./work_dirs/onnx_models
```
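
Before wiring the exported files into Autoware, it can help to inspect them. The sketch below is a minimal check that assumes the `onnxruntime` package is installed; the file names are placeholders, so adjust them to whatever the converter actually wrote into the work directory:

```python
# inspect_onnx.py - print the input/output signatures of the exported ONNX models
import onnxruntime as ort

# Placeholder paths; replace with the files produced by centerpoint_onnx_converter.py.
model_paths = [
    "./work_dirs/onnx_models/pts_voxel_encoder_centerpoint_custom.onnx",
    "./work_dirs/onnx_models/pts_backbone_neck_head_centerpoint_custom.onnx",
]

for path in model_paths:
    session = ort.InferenceSession(path, providers=["CPUExecutionProvider"])
    print(path)
    for tensor in session.get_inputs():
        print("  input :", tensor.name, tensor.shape, tensor.type)
    for tensor in session.get_outputs():
        print("  output:", tensor.name, tensor.shape, tensor.type)
```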

#### Create the config file for the custom model

Create a new config file named **centerpoint_custom.param.yaml** under the config file directory of the lidar_centerpoint node. Set the parameters of the config file, such as
point_cloud_range, point_feature_size, voxel_size, etc., according to the training config file.

```yaml
/**:
  ros__parameters:
    class_names: ["CAR", "TRUCK", "BUS", "BICYCLE", "PEDESTRIAN"]
    point_feature_size: 4
    max_voxel_size: 40000
    point_cloud_range: [-51.2, -51.2, -3.0, 51.2, 51.2, 5.0]
    voxel_size: [0.2, 0.2, 8.0]
    downsample_factor: 1
    encoder_in_feature_size: 9
    # post-process params
    circle_nms_dist_threshold: 0.5
    iou_nms_target_class_names: ["CAR"]
    iou_nms_search_distance_2d: 10.0
    iou_nms_threshold: 0.1
    yaw_norm_thresholds: [0.3, 0.3, 0.3, 0.3, 0.0]
```

#### Launch the lidar_centerpoint node

```bash
cd /YOUR/AUTOWARE/PATH/Autoware
source install/setup.bash
ros2 launch lidar_centerpoint lidar_centerpoint.launch.xml model_name:=centerpoint_custom model_path:=/PATH/TO/ONNX/FILE/
```

### Changelog

#### v1 (2022/07/06)

@@ -144,3 +339,14 @@ Example:

[v1-head-centerpoint]: https://awf.ml.dev.web.auto/perception/models/centerpoint/v1/pts_backbone_neck_head_centerpoint.onnx
[v1-encoder-centerpoint-tiny]: https://awf.ml.dev.web.auto/perception/models/centerpoint/v1/pts_voxel_encoder_centerpoint_tiny.onnx
[v1-head-centerpoint-tiny]: https://awf.ml.dev.web.auto/perception/models/centerpoint/v1/pts_backbone_neck_head_centerpoint_tiny.onnx

## Acknowledgment: deepen.ai's 3D Annotation Tools Contribution

Special thanks to [Deepen AI](https://www.deepen.ai/) for providing their 3D Annotation tools, which have been instrumental in creating our sample dataset.

## Legal Notice

_The nuScenes dataset is released publicly for non-commercial use under the Creative
Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License.
Additional Terms of Use can be found at <https://www.nuscenes.org/terms-of-use>.
To inquire about a commercial license please contact [email protected]._
@kaancolak
Can you add a Dockerfile to set up this environment in https://github.com/autowarefoundation/mmdetection3d.git?
That way we can make the installation steps shorter.
@Shin-kyoto -san, thank you for your advice!
I updated the current Dockerfile with these installation steps.
https://github.com/autowarefoundation/mmdetection3d/pull/1/files#diff-f34da55ca08f1a30591d8b0b3e885bcc678537b2a9a4aadea4f190806b374ddcL1
@kaancolak
Thank you so much!! Can you update this document to align with the environment setup procedures using Docker?
@Shin-kyoto sure, I added the instructions here:
https://github.com/autowarefoundation/autoware.universe/pull/5570/files#diff-ef509aa43435872925d0134c14c088da6f51549d59e84cc5bd3e74fd4fa333f9R134