This repository demonstrates how to generate a segmented ROI point cloud from the reflectivity and range images captured by an Ouster LiDAR sensor. The pipeline uses the Ouster SDK, OpenCV, Open3D, and ROS packages to convert raw sensor data into meaningful 3D point clouds.
The image below shows YOLO-Pose detection applied to a LiDAR reflectivity image. The detected object's pose is identified, and its bounding box serves as the segmentation mask.
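The bounding-box-to-mask step can be sketched as follows. This is an illustrative snippet, not the repository's code: the image dimensions and the box coordinates are made-up examples standing in for an actual YOLO-Pose detection on the reflectivity image.

```python
import numpy as np

# Hypothetical YOLO-Pose output: one bounding box in pixel
# coordinates (x1, y1, x2, y2) on the reflectivity image.
# 128 x 1024 matches a typical 128-beam Ouster image size.
H, W = 128, 1024
box = (300, 40, 420, 100)  # example detection, not real data

# Build a binary mask with the same shape as the image:
# True inside the detected bounding box, False elsewhere.
mask = np.zeros((H, W), dtype=bool)
x1, y1, x2, y2 = box
mask[y1:y2, x1:x2] = True
```

Because the reflectivity and range images share pixel geometry, this mask can be applied directly to the range image in the segmentation step.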
The following image represents the raw point cloud data generated from the Ouster LiDAR sensor. This serves as the baseline for further processing and segmentation.
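The conversion from a range image to a raw point cloud can be sketched with a direction lookup table. In the actual pipeline this table comes from the Ouster SDK's metadata-derived XYZ lookup; the beam geometry below is synthetic and purely for illustration.

```python
import numpy as np

# Synthetic beam geometry (assumed values, not real sensor metadata):
# altitude angles per beam row, azimuth angles per column.
H, W = 128, 1024
alt = np.deg2rad(np.linspace(22.5, -22.5, H))
az = np.deg2rad(np.linspace(0.0, 360.0, W, endpoint=False))

# Unit direction vector for every pixel, shape (H, W, 3).
dirs = np.stack([
    np.cos(alt)[:, None] * np.cos(az)[None, :],
    np.cos(alt)[:, None] * np.sin(az)[None, :],
    np.sin(alt)[:, None] * np.ones_like(az)[None, :],
], axis=-1)

# Fake range image: a uniform 5 m return at every pixel.
range_img = np.full((H, W), 5.0)

# Scale each direction by its measured range to get Cartesian points,
# then flatten the (H, W, 3) grid into an N x 3 point cloud.
points = dirs * range_img[..., None]
cloud = points.reshape(-1, 3)
```

The real lookup table additionally accounts for the sensor's intrinsics (beam offsets, lidar-to-sensor transform), which is why it is derived from the metadata rather than computed analytically like this.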
This image illustrates the segmented point cloud of a detected object. Segmentation was achieved by transferring the detection mask from the reflectivity image to the corresponding range image. Using a lookup table derived from the sensor's metadata, the segmented range_image pixels are then mapped to their 3D points.
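The mapping step above can be sketched with NumPy indexing. The names and shapes here are illustrative assumptions, not the Ouster SDK API: `lut` stands in for the metadata-derived lookup table of unit directions, and the mask plays the role of the 2D detection.

```python
import numpy as np

# Stand-ins for the real data: a (H, W, 3) lookup table of per-pixel
# unit directions and a (H, W) range image (values are random here).
H, W = 128, 1024
rng = np.random.default_rng(0)
lut = rng.normal(size=(H, W, 3))
range_img = rng.uniform(1.0, 50.0, size=(H, W))

# Bounding box from the 2-D detector, expressed as a boolean mask.
mask = np.zeros((H, W), dtype=bool)
mask[40:100, 300:420] = True

# Keep only masked pixels with a valid (non-zero) range return,
# then scale the looked-up directions by the measured ranges.
valid = mask & (range_img > 0)
segmented = lut[valid] * range_img[valid, None]  # N x 3 segmented points
```

Boolean indexing into the lookup table makes the 2D-to-3D mapping a single vectorized operation, which is what keeps this segmentation approach fast.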
The image below shows the combined point cloud, including the original and segmented point clouds. This visualization highlights the effectiveness of the segmentation process in isolating objects within the 3D space.
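A minimal sketch of combining the two clouds for visualization, coloring the segmented points so they stand out. The arrays here are synthetic; in the pipeline such arrays would feed an Open3D point cloud for display.

```python
import numpy as np

# Synthetic stand-ins: a full cloud and a subset treated as segmented.
full_cloud = np.random.default_rng(1).normal(size=(1000, 3))
segmented = full_cloud[:50]

# Gray for the original cloud, red for the segmented overlay.
colors = np.tile([0.5, 0.5, 0.5], (len(full_cloud), 1))
seg_colors = np.tile([1.0, 0.0, 0.0], (len(segmented), 1))

# Stack points and colors into one combined cloud for rendering.
combined = np.vstack([full_cloud, segmented])
combined_colors = np.vstack([colors, seg_colors])
```

Drawing the segmented points on top of the full cloud is what makes the isolation of the detected object visually obvious in the combined view.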
This project demonstrates 3D point cloud segmentation by running object detection on the sensor's 2D images, which is less computationally intensive than segmenting in 3D directly and yields fast, accurate results.