Real-time and efficient polygonal mapping designed for humanoid robots.
- Tested on Ubuntu 20.04 / ROS Noetic
This package depends on:
- pyrealsense2
- cupy
- ...
It is assumed that ROS is installed.
- Clone the repository into your catkin_ws:

  ```bash
  mkdir -p catkin_ws/src
  cd catkin_ws/src
  git clone https://github.com/BTFrontier/polygon_mapping.git
  ```
- Install the dependent packages (see the example after this list).
- Build the package:

  ```bash
  cd catkin_ws
  catkin build polygon_mapping
  ```
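For the dependency step above, a minimal sketch of installing the two named Python packages with pip is shown below. The exact CuPy wheel depends on your CUDA toolkit, so `cupy-cuda11x` is only an example, and any further dependencies hidden behind the "..." in the list above are not covered here.

```bash
pip install pyrealsense2      # RealSense Python bindings
pip install cupy-cuda11x      # pick the CuPy wheel matching your CUDA version
```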
You can download the test dataset here.
Once downloaded, extract the `dataset_39` and `dataset_71` folders into the following directory: `catkin_ws/src/polygon_mapping/data`.
Now, you can run the test case with the following command:
```bash
rosrun polygon_mapping read_dataset.py
```
You can modify the dataset directory in `read_dataset.py` to use the dataset of your choice. During the testing process, intermediate images will be saved in the `processed` subdirectory within the dataset directory, allowing you to review the results later.
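For example, pointing the script at the other dataset might look like the line below; the variable name is hypothetical, so check `read_dataset.py` for the actual one:

```python
# Hypothetical placeholder: select which downloaded dataset the test script reads.
dataset_dir = "catkin_ws/src/polygon_mapping/data/dataset_71"
```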
In addition to the datasets, the system can be run with your own depth camera. By default, the code retrieves depth images from a RealSense camera. If you're using the L515, you can configure additional LiDAR parameters; if you're using another depth camera, you may need to modify the depth image input accordingly. Before running the mapping program, you can configure various parameters in the `config/config_param.yaml` file.
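As an illustration of the default RealSense input path, the sketch below grabs depth frames with `pyrealsense2` and converts them to meters. It is a minimal example, not the package's actual acquisition code; the stream resolution, frame rate, and the commented-out serial-number line are assumptions you would adapt to your camera and to `config/config_param.yaml`.

```python
# Minimal sketch: read depth frames from a RealSense camera with pyrealsense2.
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
# config.enable_device("YOUR_SERIAL_NUMBER")  # select a specific camera when several are connected
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
profile = pipeline.start(config)
depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()

try:
    while True:
        frames = pipeline.wait_for_frames()
        depth_frame = frames.get_depth_frame()
        if not depth_frame:
            continue
        depth_m = np.asanyarray(depth_frame.get_data()) * depth_scale  # depth in meters
        # ... hand depth_m to the mapping pipeline or publish it as a ROS image ...
finally:
    pipeline.stop()
```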
First, ensure that the robot has an odometry system or another method of estimating its own state. Then, using the relative pose of the depth camera with respect to the odometry body frame, publish the depth camera's pose in the odometry coordinate frame via ROS tf in real time. The odometry frame and depth camera frame used by the tf listener can be configured in the `config/config_param.yaml` file.
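A minimal sketch of broadcasting such a transform from Python with `tf2_ros` is shown below. The frame names `odom` and `depth_camera` and the 50 Hz rate are assumptions for illustration; use the frame ids you set in `config/config_param.yaml` and fill the transform from your state estimator.

```python
#!/usr/bin/env python3
# Sketch: publish the depth camera pose in the odometry frame via ROS tf.
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped

rospy.init_node("camera_tf_broadcaster")
broadcaster = tf2_ros.TransformBroadcaster()
rate = rospy.Rate(50)  # publish at 50 Hz

while not rospy.is_shutdown():
    t = TransformStamped()
    t.header.stamp = rospy.Time.now()
    t.header.frame_id = "odom"          # odometry frame (assumed name)
    t.child_frame_id = "depth_camera"   # depth camera frame (assumed name)
    # Fill in the camera pose estimated by your odometry / state estimator:
    t.transform.translation.x = 0.0
    t.transform.translation.y = 0.0
    t.transform.translation.z = 0.5
    t.transform.rotation.x = 0.0
    t.transform.rotation.y = 0.0
    t.transform.rotation.z = 0.0
    t.transform.rotation.w = 1.0
    broadcaster.sendTransform(t)
    rate.sleep()
```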
The camera-related parameters include the serial number of the RealSense depth camera (important when using multiple cameras), the depth image resolution, the camera intrinsic parameters, etc. For the L515, adjusting several parameters can help improve the quality of the depth map.
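To show why the resolution and intrinsics matter, the snippet below back-projects a depth pixel into a 3D camera-frame point using the standard pinhole model; the numeric values are placeholders, not the values shipped in `config/config_param.yaml`.

```python
# Back-project a depth pixel (u, v) with depth z [m] using pinhole intrinsics.
import numpy as np

fx, fy, cx, cy = 460.0, 460.0, 320.0, 240.0  # example intrinsics for a 640x480 depth image

def deproject(u, v, z):
    """Return the 3D point in the camera frame corresponding to pixel (u, v) at depth z."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

print(deproject(400, 300, 1.5))
```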
The mapping-related parameters include image processing parameters and RANSAC parameters. These influence the quality and real-time performance of the output and can be adjusted based on your needs.
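For intuition about the RANSAC side, here is a self-contained plane-fitting sketch in NumPy: the iteration count and inlier distance threshold are the kind of knobs that trade output quality against runtime. Parameter names and defaults are illustrative, not taken from this package.

```python
import numpy as np

def ransac_plane(points, n_iters=200, dist_thresh=0.02, seed=0):
    """Fit a plane n·p + d ≈ 0 to an (N, 3) point array; return (n, d, inlier_mask)."""
    rng = np.random.default_rng(seed)
    best_model, best_inliers = None, None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:                      # skip degenerate (collinear) samples
            continue
        n /= norm
        d = -n.dot(sample[0])
        inliers = np.abs(points @ n + d) < dist_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_model, best_inliers = (n, d), inliers
    return best_model[0], best_model[1], best_inliers
```

Raising the iteration count makes it more likely that the dominant plane is found in noisy clouds at the cost of runtime, while a looser distance threshold accepts more inliers at the price of less precise planes.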
Once all parameters are configured, you can run the mapping program with the following command:
```bash
rosrun polygon_mapping main.py
```
If you find this code useful in your research, please consider citing our paper: Real-Time Polygonal Semantic Mapping for Humanoid Robot Stair Climbing
```bibtex
@misc{bin2024realtimepolygonalsemanticmapping,
  title={Real-Time Polygonal Semantic Mapping for Humanoid Robot Stair Climbing},
  author={Teng Bin and Jianming Yao and Tin Lun Lam and Tianwei Zhang},
  year={2024},
  eprint={2411.01919},
  archivePrefix={arXiv},
  primaryClass={cs.RO},
  url={https://arxiv.org/abs/2411.01919},
}
```