
refactor(lidar_centerpoint): add training docs #5570

Conversation

Contributor

@kaancolak kaancolak commented Nov 13, 2023

Description

Add training documentation for the lidar CenterPoint model.

Blocks merging of:

Blocked by:

Tests performed

Not applicable.

Effects on system behavior

Not applicable.

Pre-review checklist for the PR author

The PR author must check the checkboxes below when creating the PR.

In-review checklist for the PR reviewers

The PR reviewers must check the checkboxes below before approval.

Post-review checklist for the PR author

The PR author must check the checkboxes below before merging.

  • There are no open discussions or they are tracked via tickets.

After all checkboxes are checked, anyone who has write access can merge the PR.

@github-actions github-actions bot added the type:documentation and component:perception labels Nov 13, 2023
@kaancolak kaancolak marked this pull request as ready for review December 7, 2023 06:38
@yukke42 yukke42 requested review from miursh and wep21 December 12, 2023 09:45
@kaancolak kaancolak self-assigned this Dec 13, 2023
@kminoda kminoda added the run:build-and-test-differential label and removed the component:perception and run:build-and-test-differential labels Feb 19, 2024
@github-actions github-actions bot added the component:perception label Feb 19, 2024
@Shin-kyoto Shin-kyoto requested review from Shin-kyoto and removed request for yukke42 February 19, 2024 12:12
Comment on lines +92 to +107
#### Install prerequisites

**Step 1.** Download and install Miniconda from the [official website](https://mmpretrain.readthedocs.io/en/latest/get_started.html).

**Step 2.** Create a conda virtual environment and activate it

```bash
conda create --name train-centerpoint python=3.8 -y
conda activate train-centerpoint
```
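As an optional sanity check (an addition, not part of the quoted README), you can confirm that the new environment is active and provides the expected Python version:

```bash
# The active environment should be marked with "*" and python should report 3.8.x
conda env list
python --version
```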

**Step 3.** Install PyTorch

Please ensure you install a PyTorch build that is compatible with CUDA 11.6, as this is a requirement for the current version of Autoware.

```bash
conda install pytorch==1.13.1 torchvision==0.14.1 pytorch-cuda=11.6 -c pytorch -c nvidia
```
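A quick follow-up check (again an addition, not part of the quoted README) that the installed PyTorch build can actually see CUDA:

```bash
# Expect "1.13.1" and "True" on a machine with a working NVIDIA driver
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```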

#### Install mmdetection3d

**Step 1.** Install MMEngine, MMCV, and MMDetection using MIM

```bash
pip install -U openmim
mim install mmengine
mim install 'mmcv>=2.0.0rc4'
mim install 'mmdet>=3.0.0rc5, <3.3.0'
```
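To confirm that the mim-installed packages resolved correctly (a suggested check, not part of the quoted README), try importing them:

```bash
# Each import should succeed and print the installed versions
python -c "import mmengine, mmcv, mmdet; print(mmengine.__version__, mmcv.__version__, mmdet.__version__)"
```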
Contributor
@kaancolak

Can you add a Dockerfile that builds this environment to https://github.com/autowarefoundation/mmdetection3d.git?
It would make the installation steps shorter.

Contributor Author

@Shin-kyoto -san, thank you for your advice!

I updated the current Dockerfile with these installation steps.

https://github.com/autowarefoundation/mmdetection3d/pull/1/files#diff-f34da55ca08f1a30591d8b0b3e885bcc678537b2a9a4aadea4f190806b374ddcL1
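For reference, a minimal sketch of how that image could be built and started, assuming the Dockerfile sits at the root of the autowarefoundation/mmdetection3d fork; the exact path, image tag, and the {DATA_DIR} placeholder are illustrative:

```bash
# Clone the fork that carries the Dockerfile and build the image
git clone https://github.com/autowarefoundation/mmdetection3d.git
cd mmdetection3d
docker build -t mmdetection3d .

# Start an interactive container with GPU access and the dataset mounted
docker run --gpus all --shm-size=8g -it -v {DATA_DIR}:/mmdetection3d/data mmdetection3d
```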

Contributor

@kaancolak
Thank you so much!! Can you update this document to align with the environment setup procedures using Docker?


@kaancolak kaancolak requested a review from kminoda as a code owner February 27, 2024 15:51
@kaancolak kaancolak requested a review from Shin-kyoto February 27, 2024 15:53
@knzo25 knzo25 removed the request for review from wep21 March 21, 2024 04:22
@xmfcx
Contributor

xmfcx commented Mar 22, 2024

@kaancolak could you rebase this to latest main?

@xmfcx xmfcx added the run:deploy-docs, run:build-and-test-differential, and tag:pr-agent labels Mar 22, 2024
@kaancolak kaancolak force-pushed the refactor/lidar_centerpoint-add-training-docs branch from 3871481 to 3345fd8 March 26, 2024 12:00
@xmfcx
Contributor

xmfcx commented Mar 26, 2024

/review


PR Review

⏱️ Estimated effort to review [1-5]

4, due to the extensive addition of documentation covering a wide range of steps including environment setup, training, evaluation, and deployment of a model. The complexity and detail of the instructions require careful review to ensure accuracy and clarity.

🧪 Relevant tests

No

🔍 Possible issues

Possible Issue: The documentation does not specify the version of CUDA required for compatibility with PyTorch 1.13.1. This could lead to environment setup issues for users.

Possible Issue: The documentation assumes the user has prior knowledge of Docker, including building and running containers. This could be a barrier for users unfamiliar with Docker.

🔒 Security concerns

No

Code feedback:

relevant file: perception/lidar_centerpoint/README.md
suggestion: Consider specifying the CUDA version compatible with PyTorch 1.13.1 to avoid potential environment setup issues. This clarification will help users ensure they have a compatible setup before proceeding with the installation. [important]
relevant line: conda install pytorch==1.13.1 torchvision==0.14.1 pytorch-cuda=11.6 -c pytorch -c nvidia

relevant file: perception/lidar_centerpoint/README.md
suggestion: Add a brief introduction or link to Docker documentation for users who might be unfamiliar with Docker concepts such as building images and running containers. This will make the guide more accessible to a broader audience. [medium]
relevant line: Alternatively, you can use Docker to run the mmdetection3d repository. We provide a Dockerfile to build a Docker image with the mmdetection3d repository and its dependencies.


✨ Review tool usage guide:

Overview:
The review tool scans the PR code changes, and generates a PR review which includes several types of feedbacks, such as possible PR issues, security threats and relevant test in the PR. More feedbacks can be added by configuring the tool.

The tool can be triggered automatically every time a new PR is opened, or can be invoked manually by commenting on any PR.

  • When commenting, to edit configurations related to the review tool (pr_reviewer section), use the following template:
/review --pr_reviewer.some_config1=... --pr_reviewer.some_config2=...
[pr_reviewer]
some_config1=...
some_config2=...

See the review usage page for a comprehensive guide on using this tool.

@xmfcx
Contributor

xmfcx commented Mar 26, 2024

/improve


github-actions bot commented Mar 26, 2024

PR Code Suggestions

Category | Suggestions
Enhancement
Add a step to verify the installation of Miniconda.

Consider adding a step to verify the successful installation of Miniconda or any other
prerequisites. This can be done by running a simple command that checks the version or
lists the installed packages. This step will help users ensure that the installation was
successful before proceeding to the next steps.

perception/lidar_centerpoint/README.md [82]

 **Step 1.** Download and install Miniconda from the [official website](https://mmpretrain.readthedocs.io/en/latest/get_started.html).
+**Step 1.1.** Verify the installation by running `conda --version`.
 
Add explanations for Docker run command flags to aid user understanding.

For the Docker run command, it's beneficial to include an explanation for each flag used,
especially for users who might not be familiar with Docker. This helps in understanding
the purpose of each option and how it affects the container's behavior.

perception/lidar_centerpoint/README.md [142]

 docker run --gpus all --shm-size=8g -it -v {DATA_DIR}:/mmdetection3d/data mmdetection3d
+# Explanation:
+# --gpus all: Allows the container to use all available GPUs.
+# --shm-size=8g: Sets the shared memory size to 8GB, which is useful for data loading.
+# -it: Runs the container in interactive mode with a tty.
+# -v {DATA_DIR}:/mmdetection3d/data: Mounts the host directory {DATA_DIR} to the container's /mmdetection3d/data directory.
 
Add an explanation of the evaluation outcome and how to interpret it.

For the evaluation section, it would be beneficial to include a brief explanation of the
expected outcome or what constitutes a successful evaluation. This could include
information on the metrics used for evaluation, expected ranges for these metrics, or how
to interpret the results. This guidance can help users better understand the evaluation
process and what to look for in the results.

perception/lidar_centerpoint/README.md [210]

 Run evaluation
+# After running the evaluation, you will receive a report detailing the performance of the model on the dataset. Look for metrics such as precision, recall, and mAP (mean Average Precision) to gauge the model's accuracy. A higher mAP value indicates better performance.
 
Best practice
Specify the CUDA version compatible with PyTorch to avoid compatibility issues.

It's recommended to specify the version of CUDA that is compatible with the installed
version of PyTorch. This ensures that users install a version of CUDA that is tested and
guaranteed to work with the specified PyTorch version, avoiding potential compatibility
issues.

perception/lidar_centerpoint/README.md [96]

-conda install pytorch==1.13.1 torchvision==0.14.1 pytorch-cuda=11.6 -c pytorch -c nvidia
+conda install pytorch==1.13.1 torchvision==0.14.1 cudatoolkit=11.6 -c pytorch -c nvidia
 
Include a note about storage requirements for the NuScenes dataset.

When providing instructions for dataset preparation, it's helpful to include a note or
warning about the storage requirements. Large datasets like NuScenes can consume
significant disk space, and users should be made aware of this to ensure they have
adequate storage available before downloading and extracting the dataset.

perception/lidar_centerpoint/README.md [147]

 **Step 1.** Download the NuScenes dataset from the [official website](https://www.nuscenes.org/download) and extract the dataset to a folder of your choice.
+> **Note:** The NuScenes dataset is large and requires significant disk space. Ensure you have enough storage available before proceeding.
 

✨ Improve tool usage guide:

Overview:
The improve tool scans the PR code changes, and automatically generates suggestions for improving the PR code. The tool can be triggered automatically every time a new PR is opened, or can be invoked manually by commenting on a PR.

  • When commenting, to edit configurations related to the improve tool (pr_code_suggestions section), use the following template:
/improve --pr_code_suggestions.some_config1=... --pr_code_suggestions.some_config2=...
[pr_code_suggestions]
some_config1=...
some_config2=...

See the improve usage page for a comprehensive guide on using this tool.


codecov bot commented Mar 26, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 14.99%. Comparing base (e24f48d) to head (ef15632).
Report is 150 commits behind head on main.

Current head ef15632 differs from pull request most recent head bb2a69e

Please upload reports for the commit bb2a69e to get more accurate results.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #5570      +/-   ##
==========================================
- Coverage   15.10%   14.99%   -0.12%     
==========================================
  Files        1922     1933      +11     
  Lines      134796   133074    -1722     
  Branches    43473    39720    -3753     
==========================================
- Hits        20364    19951     -413     
+ Misses      91780    90888     -892     
+ Partials    22652    22235     -417     
| Flag | Coverage Δ | *Carryforward flag |
| --- | --- | --- |
| differential | 0.00% <ø> (?) | |
| total | 14.99% <ø> (-0.12%) ⬇️ | Carriedforward from 3345fd8 |

*This pull request uses carry forward flags.

☔ View full report in Codecov by Sentry.

Signed-off-by: M. Fatih Cırıt <[email protected]>
@Shin-kyoto
Contributor

@kaancolak

It seems that there is an issue with spell check. Could you please resolve it?

@beginningfan beginningfan self-requested a review June 4, 2024 08:03
Contributor

@beginningfan beginningfan left a comment


I reviewed this doc when I reviewed this PR. It looks good to me.

@xmfcx xmfcx merged commit b1dd7bb into autowarefoundation:main Jun 13, 2024
24 checks passed
simon-eisenmann-driveblocks pushed a commit to simon-eisenmann-driveblocks/autoware.universe that referenced this pull request Jun 26, 2024
KhalilSelyan pushed a commit that referenced this pull request Jul 22, 2024
Labels
  • component:perception - Advanced sensor data processing and environment understanding. (auto-assigned)
  • run:build-and-test-differential - Mark to enable build-and-test-differential workflow. (used-by-ci)
  • run:deploy-docs - Mark for deploy-docs action generation. (used-by-ci)
  • tag:pr-agent - Mark to enable PR-Agent for automated reviews. (used-by-ci)
  • type:documentation - Creating or refining documentation. (auto-assigned)