feat(autoware_cuda_pointcloud_preprocessor): a cuda-accelerated pointcloud preprocessor #9454
base: main
Conversation
…sonal repository Signed-off-by: Kenzo Lobos-Tsunekawa <[email protected]>
Thank you for contributing to the Autoware project! 🚧 If your pull request is in progress, switch it to draft mode. Please ensure:
Thank you for the amazing PR, these performance improvements are desperately needed.
I haven't checked the PR for functionality yet, but I'll leave my first round of comments here.
The main points I'd like to address are
- memory safety and idiomatic C++ (there is currently a lot of raw-pointer code, which should be avoided whenever possible)
- modularity: the pipeline is currently hard-coded and all in one place. This makes the module hard to adapt to different projects, and the individual stages of the pipeline hard to maintain.
Thank you for your time!
The pointcloud preprocessing implemented in `autoware_pointcloud_preprocessor` has been thoroughly tested in Autoware. However, the latency it introduces does not scale well with modern LiDAR devices due to the large number of points they produce.
To alleviate this issue, this package reimplements most of the pipeline presented in `autoware_pointcloud_preprocessor` leveraging the use of GPGPUs. In particular, this package makes use of CUDA to provide accelerated versions of the already establishes implementations, while also maintaining compatibility with normal ROS nodes/topics. <!-- cSpell: ignore GPGPUs >
Suggested change:

- To alleviate this issue, this package reimplements most of the pipeline presented in `autoware_pointcloud_preprocessor` leveraging the use of GPGPUs. In particular, this package makes use of CUDA to provide accelerated versions of the already establishes implementations, while also maintaining compatibility with normal ROS nodes/topics. <!-- cSpell: ignore GPGPUs >
+ To alleviate this issue, this package reimplements most of the pipeline presented in `autoware_pointcloud_preprocessor` leveraging the use of GPGPUs. In particular, this package makes use of CUDA to provide accelerated versions of the already established implementations, while also maintaining compatibility with normal ROS nodes/topics. <!-- cSpell: ignore GPGPUs >
Also, arguably, GPGPUs could be added to the dictionary since it is quite a commonly used term.
self_crop.min_x: 1.0
self_crop.min_y: 1.0
self_crop.min_z: 1.0
self_crop.max_x: -1.0
self_crop.max_y: -1.0
self_crop.max_z: -1.0
mirror_crop.min_x: 1.0
mirror_crop.min_y: 1.0
mirror_crop.min_z: 1.0
mirror_crop.max_x: -1.0
mirror_crop.max_y: -1.0
mirror_crop.max_z: -1.0
Instead of having two hard-coded crop-box filters here, a list would be more easily extensible, and it should be quite straightforward to change the implementation.
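For illustration, a list-based configuration along the lines the reviewer suggests could look like the sketch below. The `crop_box_filters` key and the `name` field are hypothetical, not part of the actual implementation:

```yaml
# Hypothetical list-based crop-box configuration (parameter names are
# illustrative only); adding a third box becomes a pure config change.
crop_box_filters:
  - name: self_crop
    min_x: 1.0
    min_y: 1.0
    min_z: 1.0
    max_x: -1.0
    max_y: -1.0
    max_z: -1.0
  - name: mirror_crop
    min_x: 1.0
    min_y: 1.0
    min_z: 1.0
    max_x: -1.0
    max_y: -1.0
    max_z: -1.0
```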
| Filter Name | Description | Detail |
| --------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------- |
| cuda_organized_pointcloud_adapter | Organizes a pointcloud per ring/channel, so that the memory layout allows parallel processing in cuda | [link](docs/cuda-organized-pointcloud-adapter.md) |
| cuda_pointcloud_preprocessor | Implements the cropping, distortion correction, and outlier filtering (ring-based) of the `autoware_pointcloud_preprocessor`'s cpu versions. | [link](docs/cuda-pointcloud-preprocessor.md) |
Suggested change:

- | cuda_pointcloud_preprocessor | Implements the cropping, distortion correction, and outlier filtering (ring-based) of the `autoware_pointcloud_preprocessor`'s cpu versions. | [link](docs/cuda-pointcloud-preprocessor.md) |
+ | cuda_pointcloud_preprocessor | Implements the cropping, distortion correction, and outlier filtering (ring-based) of the `autoware_pointcloud_preprocessor`'s CPU versions. | [link](docs/cuda-pointcloud-preprocessor.md) |
This node implements all standard pointcloud preprocessing algorithms applied to a single LiDAR's pointcloud in CUDA.
In particular, this node implements:

- crop boxing (ego-vehicle and ego-vehicle's mirrors)
Suggested change:

- crop boxing (ego-vehicle and ego-vehicle's mirrors)
+ box cropping (ego-vehicle and ego-vehicle's mirrors)
## Assumptions / Known limits

- The CUDA implementations, while following the original CPU ones, will not offer the same numerical results, and small approximations were needed to maximize the GPU use.
Suggested change:

- The CUDA implementations, while following the original CPU ones, will not offer the same numerical results, and small approximations were needed to maximize the GPU use.
+ The CUDA implementations, while following the original CPU ones, will not offer the same numerical results, and small approximations were needed to maximize GPU usage.
std::size_t max_ring = 0;

for (std::size_t i = 0; i < input_pointcloud_msg_ptr->width * input_pointcloud_msg_ptr->height;
Iteration without explicit bounds checking of the underlying array is not memory-safe, so I would suggest using the above-mentioned PointCloud2Iterators here.
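To make the concern concrete, here is a minimal host-side sketch (no ROS dependency) of the bounds-checked style being asked for: instead of `reinterpret_cast`-ing the raw buffer and trusting `width * height`, every read is validated against the buffer size. The `CloudBuffer` struct and the field offset are illustrative stand-ins for the `sensor_msgs::msg::PointCloud2` metadata that `PointCloud2Iterator` consults internally:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Illustrative stand-in for the relevant parts of a PointCloud2 message.
struct CloudBuffer
{
  std::vector<std::uint8_t> data;
  std::size_t point_step;   // bytes per point
  std::size_t ring_offset;  // byte offset of the uint16 "ring" field
};

// Find the maximum ring id while explicitly checking every read against
// data.size(), so a malformed width/height can never cause an out-of-bounds
// read. PointCloud2Iterator performs equivalent checks from field metadata.
std::size_t max_ring(const CloudBuffer & cloud)
{
  std::size_t result = 0;
  const std::size_t num_points =
    cloud.point_step == 0 ? 0 : cloud.data.size() / cloud.point_step;
  for (std::size_t i = 0; i < num_points; ++i) {
    const std::size_t offset = i * cloud.point_step + cloud.ring_offset;
    if (offset + sizeof(std::uint16_t) > cloud.data.size()) {
      break;  // explicit bounds check before every access
    }
    std::uint16_t ring = 0;
    std::memcpy(&ring, cloud.data.data() + offset, sizeof(ring));
    result = std::max<std::size_t>(result, ring);
  }
  return result;
}
```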
num_rings_ = std::max(num_rings_, static_cast<std::size_t>(16));
std::vector<std::size_t> ring_points(num_rings_, 0);

for (std::size_t i = 0; i < input_pointcloud_msg_ptr->width * input_pointcloud_msg_ptr->height;
Iteration without explicit bounds checking of the underlying array is not memory-safe, so I would suggest using the above-mentioned PointCloud2Iterators here.
  max_ring = std::max(max_ring, ring);
}

// Set max rings to the next power of two
Admittedly kind of a niche problem, but not all sensors have 2^n rings (e.g. the Pandar40P has 40).
Although auto-detecting the number of rings is nice, there is no hard guarantee that it is accurate (e.g. the sensor may be under a cover when turned on, so the first cloud contains no points).
Does cuda_pointcloud_preprocessor support input pointclouds whose dimensions change across iterations (e.g. 0 rings in cloud 1, then 64 rings with 2000 points each, then 64 rings with 5000 points each)?
If not, I'd suggest making `n_rings` and `max_points_per_ring` parameters so that we can guarantee correct behavior at runtime.
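For reference, the rounding step the quoted comment describes ("set max rings to the next power of two") can be sketched as below. The function name is illustrative; the clamping to a minimum of 16 mirrors the `std::max(num_rings_, 16)` in the quoted code:

```cpp
#include <cassert>
#include <cstddef>

// Round n up to the next power of two. A 40-ring sensor such as the
// Pandar40P would be padded to 64 rings, so downstream code must
// tolerate the padded (empty) rows.
std::size_t next_power_of_two(std::size_t n)
{
  std::size_t p = 1;
  while (p < n) {
    p <<= 1;
  }
  return p;
}
```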
bool CudaOrganizedPointcloudAdapterNode::orderPointcloud(
  const sensor_msgs::msg::PointCloud2::ConstSharedPtr input_pointcloud_msg_ptr)
{
  const autoware::point_types::PointXYZIRCAEDT * input_buffer =
Same comment about bounds/type checking as above 🙇
  if (idx < num_points && masks[idx] == 1) {
    output_points[indices[idx] - 1] = input_points[idx];
  }
}
These two functions are identical except for their argument types. Consider making one templated function instead.
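A host-side sketch of the merge the reviewer proposes, assuming the two kernels differ only in their point type: one function template over `PointT` replaces both. On the device side the same pattern applies with `template <typename PointT> __global__ void extractPointsKernel(...)`, explicitly instantiated for each point type used. The function name and the inclusive-scan origin of `indices` are assumptions based on the quoted kernel body:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Single templated stream-compaction step: copy each masked-in point to its
// slot computed by an inclusive prefix sum over the masks (hence the "- 1",
// since inclusive-scan indices are 1-based).
template <typename PointT>
void extract_points(
  const std::vector<PointT> & input_points, const std::vector<std::uint32_t> & masks,
  const std::vector<std::uint32_t> & indices, std::vector<PointT> & output_points)
{
  for (std::size_t idx = 0; idx < input_points.size(); ++idx) {
    if (masks[idx] == 1) {
      output_points[indices[idx] - 1] = input_points[idx];
    }
  }
}
```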
Description
This PR is part of a series of PRs that aim to accelerate the Sensing/Perception pipeline through an appropriate use of CUDA.
List of PRs:
To use these branches, the following additions to `autoware.repos` are necessary. Depending on your machine and how many nodes are in a container, the following branch may also be required:
https://github.com/knzo25/launch_ros/tree/fix/load_composable_node
There seems to be a bug in ROS where, if you send too many services at once, some will be lost, and `ros_launch` cannot handle that.

Related links
Parent Issue:
How was this PR tested?
The sensing/perception pipeline was tested up to CenterPoint for TIER IV's taxi using the logging simulator.
The following tests were executed on a laptop equipped with an RTX 4060 (Laptop) GPU and an Intel(R) Core(TM) Ultra 7 165H (22 cores).
Notes for reviewers
The main branch that I used for development is `feat/cuda_acceleration_and_transport_layer`. However, the changes were too big, so I split the PRs. That being said, any further development will still happen on that branch (and then be cherry-picked to the respective PRs), and the review changes will be cherry-picked into the development branch.
Interface changes
An additional topic is added to perform type negotiation:
Example: `input/pointcloud` -> `input/pointcloud` and `input/pointcloud/cuda`
Effects on system behavior
Enabling this preprocessing in the launchers should provide much reduced latency and CPU usage (at the cost of higher GPU usage).