17/Aug/2023, Code released. I spent some time making the code more user-friendly 😤; feel free to contact me with any questions.
We believe this work introduces a novel approach to LiDAR SLAM, and we welcome everyone to explore opportunities in this direction. 👬 Please raise issues here so that I get notified immediately.
10/Mar/2023, A preprint of our paper can be found here: paper.
16/Jan/2023, The paper has been accepted for presentation at ICRA 2023.
15/Sep/2022, The paper has been submitted to ICRA 2023.
This work presents a Simultaneous Localization And Meshing system (SLAMesh). A mesh is a lightweight, dense 3-D model: it can represent complex structures and is suitable for rendering. We perform localization and meshing at the same time so that the two tasks benefit each other.
- Build, register, and update the mesh maps in real time with CPU resources. The experiments show that our SLAMesh can run at around 40 Hz.
- Provide accurate odometry. The localization and meshing accuracy also outperform state-of-the-art methods.
- Different from point-cloud-based (LOAM), NDT-based, and surfel-map-based SLAM, this work establishes a new approach to LiDAR SLAM.
- The key idea is to reconstruct the raw point cloud before registration. This strategy enables fast meshing and data association without a kd-tree.
Author: Jianyuan Ruan, Bo Li, Yibo Wang, Yuxiang Sun.
On public datasets:
(or watch it on bilibili)
On self-collected dataset:
(or watch it on bilibili)
If you find our research helpful to your work, please cite our paper:
[1] Jianyuan Ruan, Bo Li, Yibo Wang, and Yuxiang Sun, "SLAMesh: Real-time LiDAR Simultaneous Localization and Meshing," ICRA 2023 (pdf, IEEE).
@INPROCEEDINGS{10161425,
author={Ruan, Jianyuan and Li, Bo and Wang, Yibo and Sun, Yuxiang},
booktitle={2023 IEEE International Conference on Robotics and Automation (ICRA)},
title={SLAMesh: Real-time LiDAR Simultaneous Localization and Meshing},
year={2023},
volume={},
number={},
pages={3546-3552},
doi={10.1109/ICRA48891.2023.10161425}}
Other related papers:
[2] Jianyuan Ruan, Bo Li, Yinqiang Wang and Zhou Fang, "GP-SLAM+: real-time 3D lidar SLAM based on improved regionalized Gaussian process map reconstruction," IROS 2020. link.
[3] Bo Li, Jianyuan Ruan, Yu Zhang, et al., "3D SLAM method based on improved regionalized Gaussian process map construction," 2020 International Conference on Guidance, Navigation and Control (ICGNC), 2022. link
[4] Jianyuan Ruan, Zhou Fang, Bo Li, et al., "Evaluation of GP-SLAM in real-world environments," 2019 Chinese Automation Congress (CAC), 2019. link
[5] Bo Li, Yinqiang Wang, Yu Zhang, Wenjie Zhao, Jianyuan Ruan, and Pin Li, "GP-SLAM: laser-based SLAM approach based on regionalized Gaussian process map reconstruction," Auton Robot 2020. link
If you understand Chinese, you can also refer to my Master's thesis, an article on the WeChat platform: SLAMesh: 实时LiDAR定位与网格化模型构建 (SLAMesh: real-time LiDAR localization and mesh model construction), and talks on 自动驾驶之心 and 计算机视觉life.
We tested our code on Ubuntu 18.04 with ROS Melodic and Ubuntu 20.04 with ROS Noetic.
ROS
Install ROS following ROS Installation. We use the PCL and Eigen libraries shipped with ROS.
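As a quick sanity check (assuming a desktop-full ROS install; exact package names may vary by distribution), you can verify that ROS, Eigen, and PCL are visible:
rosversion -d                                  # should print melodic or noetic
dpkg -l | grep -E "libeigen3-dev|libpcl-dev"   # Eigen and PCL development packages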
Ceres
Follow Ceres Installation to install the Ceres Solver; tested versions: 2.0 and 2.1 (errors observed with 2.2).
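If your system package is not one of the tested versions, one option (a sketch, not the only way) is to build Ceres from source at a pinned tag; install the usual Ceres dependencies first:
sudo apt-get install libgoogle-glog-dev libgflags-dev libatlas-base-dev libsuitesparse-dev
git clone https://github.com/ceres-solver/ceres-solver.git
cd ceres-solver && git checkout 2.1.0
mkdir build && cd build
cmake .. && make -j4
sudo make install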
mesh_tools
We use mesh_tools to visualize the mesh map with the mesh_msgs::MeshGeometryStamped ROS message. mesh_tools also provides navigation functions on mesh maps: Mesh tool introduction.
Install mesh_tools by:
- Install lvr2:
sudo apt-get install build-essential \
cmake cmake-curses-gui libflann-dev \
libgsl-dev libeigen3-dev libopenmpi-dev \
openmpi-bin opencl-c-headers ocl-icd-opencl-dev \
libboost-all-dev \
freeglut3-dev libhdf5-dev qtbase5-dev \
qt5-default libqt5opengl5-dev liblz4-dev \
libopencv-dev libyaml-cpp-dev
In Ubuntu 18.04, use libvtk6, because libvtk7 conflicts with pcl-ros in Melodic:
sudo apt-get install libvtk6-dev libvtk6-qt-dev
In Ubuntu 20.04:
sudo apt-get install libvtk7-dev libvtk7-qt-dev
Then clone and build lvr2 in a directory outside your ROS workspace:
cd a_non_ros_dir
git clone https://github.com/uos/lvr2.git
cd lvr2
mkdir build && cd build
cmake .. && make
sudo make install
This may take some time.
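If you have enough RAM, you can speed the build up by running make in parallel (a general CMake/Make tip, not specific to lvr2):
cmake .. && make -j$(nproc)   # use all CPU cores; drop -j if memory runs out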
- Install mesh_tools (it is currently not available from the official ROS repositories, so build it from source):
mkdir -p ./slamesh_ws/src
cd slamesh_ws/src
git clone https://github.com/naturerobots/mesh_tools.git
cd ..
rosdep update
rosdep install --from-paths src --ignore-src -r -y
catkin_make
source devel/setup.bash
Clone this repository and build:
cd slamesh_ws/src
git clone https://github.com/RuanJY/SLAMesh.git
cd .. && catkin_make
mkdir slamesh_result
source ~/slamesh_ws/devel/setup.bash
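As a quick check that the workspace is built and sourced correctly, ask ROS to locate the package:
rospack find slamesh   # should print the path of the cloned package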
If you encounter trouble with the prerequisites, we advise you to use our docker image:
docker pull pleaserun/rjy_slam_work:slamesh_18.04
After pulling the image, remember to run it with the correct dataset path via the -v option, like:
docker run -it -p 5900:5900 -p 2222:22 -e RESOLUTION=1920x1080 \
-v path_in_your_PC:/root/dataset \
--name test_slamesh \
pleaserun/rjy_slam_work:slamesh_18.04
Then you can use VNC to access a graphical interface via port 5900, or use SSH to connect to the container via port 2222.
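For example (the container user and password depend on how the image is set up, so treat root as an assumption):
ssh -p 2222 root@localhost      # SSH into the container
# or point a VNC viewer at localhost:5900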
Move to the directory slamesh_ws/src and complete the steps after 2.1.
The dataset is available at KITTI dataset.
Set the parameter data_path in slamesh_kitti.launch to the path of your KITTI dataset folder.
The file tree should be like this:
file_loc_dataset
├── 00
| └──velodyne
| ├── 000000.bin
| └── ...
└──01
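If you already have the KITTI odometry data unpacked elsewhere, one way to reproduce this layout is to symlink the sequence folders (the source path below is a placeholder, adjust to your machine):
mkdir -p file_loc_dataset
ln -s /path/to/kitti_odometry/dataset/sequences/00 file_loc_dataset/00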
For example, if you want to run the 07 sequence:
roslaunch slamesh slamesh_kitti_meshing.launch seq:=/07
You should get:
If you cannot see the mesh, check that the Rviz plugin is sourced correctly. When mesh_visualization is disabled, only vertices are published as a point cloud.
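If Rviz still shows nothing, first confirm that mesh messages are actually being published; the exact topic names are not listed here, so grep for them:
rostopic list | grep -i mesh        # find the mesh topic(s)
rostopic hz <mesh_geometry_topic>   # placeholder; use the topic found above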
The dataset is available at Mai City Dataset. Sequence 01 can be fed into SLAM and sequence 02 can be accumulated into a dense ground truth point cloud map.
Similarly, set the parameter data_path in slamesh_maicity.launch to the path of your Mai City dataset folder.
roslaunch slamesh slamesh_maicity.launch seq:=/01
You should get:
roslaunch slamesh slamesh_online.launch
Then play your bag. In the launch file, remap the topic "/velodyne_points" to your LiDAR topic, e.g. "/os_cloud_node/points".
rosbag play your_bag.bag
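If you are unsure which topic your LiDAR publishes on, inspect the bag first:
rosbag info your_bag.bag   # lists topics and message types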
The number of LiDAR channels does not matter because our algorithm does not extract features.
You can use our sample data recorded with an Ouster OS1-32 LiDAR: SLAMesh dataset.
SLAMesh saves all its reports to the path result_path given in each launch file. If you see the ROS warning Can not open Report file, create the result_path folder first.
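For example, if result_path points to ~/slamesh_ws/slamesh_result (adjust to whatever your launch file uses):
mkdir -p ~/slamesh_ws/slamesh_result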
The file 0x_pred.txt is the estimated trajectory in KITTI format. I use the KITTI odometry evaluation tool for evaluation:
cd slamesh_ws/slamesh_result
git clone https://github.com/LeoQLi/KITTI_odometry_evaluation_tool
cd KITTI_odometry_evaluation_tool/
python evaluation.py --result_dir=.. --eva_seqs=07.pred
Run SLAMesh by:
roslaunch slamesh slamesh_kitti_odometry.launch seq:=/07
Currently, the result on the KITTI odometry benchmark is:
Sequence | 00 | 01 | 02 | 03 | 04 | 05 | 06 | 07 | 08 | 09 | 10 | Average |
---|---|---|---|---|---|---|---|---|---|---|---|---|
Translation (%) | 0.771 | 1.2519 | 0.7742 | 0.6366 | 0.5044 | 0.5182 | 0.5294 | 0.3607 | 0.8745 | 0.573 | 0.6455 | 0.6763 |
Rotation (deg/m) | 0.0035 | 0.003 | 0.003 | 0.0043 | 0.0013 | 0.003 | 0.0022 | 0.0023 | 0.0027 | 0.0025 | 0.0042 | 0.0029 |
Notice that to achieve better KITTI odometry performance, the parameters in slamesh_kitti_odometry.launch are set as follows:
full_cover: false # due to the discontinuity between cells, shrinking the test locations can improve accuracy.
num_margin_old_cell: 500 # marginalize old cells, because KITTI evaluates odometry accuracy rather than map consistency.
However, if you want a better meshing result, set them as follows (in slamesh_kitti_meshing.launch):
full_cover: true # so that there is no gap between cells.
num_margin_old_cell: -1 # do not marginalize old cells; the cell-based map then has an implicit loop-closure effect.
To save the mesh map, set the parameter save_mesh_map in the yaml file to true. A .ply file will be saved in slamesh_ws/slamesh_result.
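The saved mesh can be inspected with any ply viewer, for example MeshLab (just one option, not required by SLAMesh; the file name below is a placeholder):
sudo apt-get install meshlab
meshlab slamesh_ws/slamesh_result/your_mesh.ply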
I use the TanksAndTemples/evaluation tool to evaluate the mesh. I slightly modified it (removed the trajectory part). You can find it here: TanksAndTemples/evaluation_rjy
Then compare the mesh with the ground-truth point cloud map:
cd TanksAndTemples_direct_rjy/python_toolbox/evaluation/
python run.py \
--dataset-dir ./data/ground_truth_point_cloud.ply \
--traj-path ./ \
--ply-path ./data/your_mesh.ply
The file ***_report.txt records the time cost and other logs.
TODO
Author: Jianyuan Ruan, Bo Li, Yibo Wang, Yuxiang Sun.
Email: [email protected], [email protected], [email protected], [email protected]
Most of the code is built from scratch, but we also want to acknowledge the following open-source projects:
TanksAndTemples/evaluation: evaluation
A-LOAM: KITTI dataset loader and tic-toc timer
F-LOAM: helped me write the Ceres residuals
VGICP: multi-threading