WHU-USI3DV/LaneMapping


This is the official PyTorch implementation of the following publication:


A Benchmark Approach and Dataset for Large-scale Lane Mapping from MLS Point Clouds

Xiaoxin Mi, Zhen Dong, Zhipeng Cao, Bisheng Yang, Zhen Cao, Chao Zheng, Jantien Stoter, Liangliang Nan

International Journal of Applied Earth Observation and Geoinformation (JAG) 2024
Paper | Project-page | Video

🔭 Introduction

A Benchmark Approach and Dataset for Large-scale Lane Mapping from MLS Point Clouds

(Figure: overview of the network architecture)

Abstract: Accurate lane maps with semantics are crucial for various applications, such as high-definition maps (HD Maps), intelligent transportation systems (ITS), and digital twins. Manual annotation of lanes is labor-intensive and costly, prompting researchers to explore automatic lane extraction methods. This paper presents an end-to-end large-scale lane mapping method that considers both lane geometry and semantics. This study represents lane markings as polylines with uniformly sampled points and associated semantics, allowing for adaptation to varying lane shapes. Additionally, we propose an end-to-end network to extract lane polylines from mobile laser scanning (MLS) data, enabling the inference of vectorized lane instances without complex post-processing. The network consists of three components: a feature encoder, a column proposal generator, and a lane information decoder. The feature encoder encodes textural and structural information of lane markings to enhance the method's robustness to data imperfections, such as varying lane intensity, uneven point density, and occlusion-induced incomplete data. The column proposal generator generates regions of interest for the subsequent decoder. Leveraging the embedded multi-scale features from the feature encoder, the lane decoder effectively predicts lane polylines and their associated semantics without requiring step-by-step conditional inference. Comprehensive experiments conducted on three lane datasets have demonstrated the performance of the proposed method, even in the presence of incomplete data and complex lane topology.
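
As a minimal illustration of the polyline representation described above (an illustrative sketch, not the repository's actual data pipeline; the point count and 2D BEV coordinates are assumptions), a lane can be resampled to uniformly spaced points along its arc length:

import numpy as np

def resample_polyline(points: np.ndarray, num_samples: int = 72) -> np.ndarray:
    """Resample an (N, 2) polyline to num_samples uniformly spaced points."""
    # Cumulative arc length at each vertex of the polyline
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    dist = np.concatenate([[0.0], np.cumsum(seg)])
    # Uniformly spaced positions along the total arc length
    targets = np.linspace(0.0, dist[-1], num_samples)
    # Interpolate x and y independently over arc length
    x = np.interp(targets, dist, points[:, 0])
    y = np.interp(targets, dist, points[:, 1])
    return np.stack([x, y], axis=1)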

🆕 News

  • 2024-09-10: [LaneMapping] code and dataset are publicly accessible! 🎉
  • 2024-09-02: Our paper is accepted for publication in the International Journal of Applied Earth Observation and Geoinformation (JAG)! 🎉

💻 Requirements

The code has been tested on:

  • Ubuntu 20.04
  • CUDA 11.8 (other versions should also work)
  • Python 3.8
  • PyTorch 2.1.0
  • An NVIDIA GeForce RTX 4090 GPU

🔧 Installation

a. Create a conda virtual environment and activate it.

conda create -n lanemap python=3.8 -y  
conda activate lanemap

b. Install PyTorch and torchvision following the official instructions.

pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu118
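
To check that PyTorch was installed with working CUDA support, you can run:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"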

c. Install mmcv-full (optional: needed only to validate the LidarEncoder with SparseConv).

Follow the instructions here: https://mmdetection3d.readthedocs.io/en/latest/get_started.html

pip install -U openmim
mim install mmengine
mim install 'mmcv>=2.0.0rc4'
mim install 'mmdet>=3.0.0'
git clone https://github.com/open-mmlab/mmdetection3d.git -b dev-1.x
# "-b dev-1.x" means checkout to the `dev-1.x` branch.
cd mmdetection3d
pip install -v -e .
# "-v" means verbose, or more output
# "-e" means installing a project in edtiable mode,
# thus any local modifications made to the code will take effect without reinstallation.
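
A quick sanity check that the OpenMMLab packages import correctly (only relevant if you installed this optional step):

python -c "import mmcv, mmdet, mmdet3d; print(mmcv.__version__, mmdet.__version__, mmdet3d.__version__)"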

d. Install other third-party libraries.

pip install einops==0.8.0
pip install timm
pip install laspy  # to load LiDAR points
pip install pytorch_warmup
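
For reference, a minimal laspy snippet for reading point coordinates and intensity from a LAS file (the file path is a placeholder; reading compressed LAZ files additionally requires a backend such as lazrs):

import laspy
import numpy as np

las = laspy.read("/path/to/scan.las")      # placeholder path
xyz = np.vstack([las.x, las.y, las.z]).T   # (N, 3) point coordinates
intensity = np.asarray(las.intensity)      # per-point intensity values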

Train and Test

export PATH=$PATH:/path/to/your/lanemap/dir  # set the work path
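
If Python cannot locate the project's modules from PATH alone, adding the repository to PYTHONPATH is a common alternative (the path below is a placeholder):

export PYTHONPATH=$PYTHONPATH:/path/to/your/lanemap/dir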

💾 Datasets

We used WHU-Lane for training and three datasets for evaluation.

WHU-Lane

The WHU-Lane data structure is as follows:

mkdir data
cd data
ln -s /path/to/downloaded/WHU-Lane .
WHU-LaserLane
├── TrainValAll
│    ├── cropped_tiff
│    │    ├── 000000_0001.png
│    │    ├── 000000_0002.png
│    │    └── ...
│    ├── labels
│    │    ├── sparse_seq
│    │    ├── sparse_instance
│    │    ├── sparse_orient
│    │    ├── sparse_endp
│    │    └── sparse_semantic
│    └── ...
├── TestArea1
│    ├── cropped_tiff
│    │    ├── 000000_0001.png
│    │    ├── 000000_0002.png
│    │    └── ...
│    ├── labels
│    │    ├── sparse_seq
│    │    ├── sparse_instance
│    │    ├── sparse_orient
│    │    ├── sparse_endp
│    │    └── sparse_semantic
│    └── ...
└── TestArea2
     ├── cropped_tiff
     │    ├── 000000_0001.png
     │    ├── 000000_0002.png
     │    └── ...
     ├── labels
     │    ├── sparse_seq
     │    ├── sparse_instance
     │    ├── sparse_orient
     │    ├── sparse_endp
     │    └── sparse_semantic
     └── ...
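
As a rough sketch of how the tiles and labels pair up (the file naming and label layout here are inferred from the tree above, not taken from the repository's loader code):

from pathlib import Path

# Assumed location after the symlink step above; adjust to your setup.
root = Path("data/WHU-LaserLane/TrainValAll")
label_kinds = ["sparse_seq", "sparse_instance", "sparse_orient",
               "sparse_endp", "sparse_semantic"]

pairs = []
for img_path in sorted((root / "cropped_tiff").glob("*.png")):
    # Hypothetical pairing: each label folder holds one entry per BEV tile
    label_dirs = {kind: root / "labels" / kind for kind in label_kinds}
    pairs.append((img_path, label_dirs))
print(f"{len(pairs)} BEV tiles found")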

🚅 Pretrained model and WHU-LaserLane Dataset

You can download the pre-trained model from BaiduDisk or GoogleDrive, put it in the logs/ folder, and unzip it.

You can download only the generated BEV images and corresponding annotations of the WHU-LaserLane dataset from BaiduDisk or GoogleDrive, put it in the ./data/ folder, and unzip it.

Or download the whole WHU-LaserLane dataset from BaiduDisk.

⏳ Train

To train the LaneMapping network, first prepare the dataset and put it in ./data/. Then run the following command:

$ python train_gpu_0.py

✏️ Test

To evaluate LaneMapping on the other two test areas, use the following command, and do not forget to modify the corresponding data paths in the config file:

$ python test_gpu_0.py

✏️ Merge local lane map to the global map

First, convert the predicted lanes on the BEV images to the LiDAR coordinate system.

$ python ./baseline/utils/coor_img2pc.py

Then, merge the local lane maps into the global lane map.

$ python ./baseline/utils/merge_lines.py
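
Conceptually, the BEV-to-LiDAR conversion maps pixel indices back to metric coordinates using each tile's origin and ground-sampling resolution. Below is a hedged sketch only; the origin, resolution, and axis conventions are placeholders, and ./baseline/utils/coor_img2pc.py defines the actual implementation:

import numpy as np

def bev_pixels_to_lidar(pixels: np.ndarray, origin_xy: tuple,
                        resolution: float = 0.05) -> np.ndarray:
    """Map (row, col) BEV pixel indices to LiDAR-frame (x, y) in meters."""
    rows = pixels[:, 0].astype(np.float64)
    cols = pixels[:, 1].astype(np.float64)
    # Placeholder convention: origin_xy is the LiDAR-frame position of
    # pixel (0, 0); image rows typically grow opposite to the y axis.
    x = origin_xy[0] + cols * resolution
    y = origin_xy[1] - rows * resolution
    return np.stack([x, y], axis=1)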

💡 Citation

If you find this repo helpful, please give us a 😍 star 😍 and consider citing LaneMapping if this program benefits your project:

@article{MI2024104139,
  title   = {A benchmark approach and dataset for large-scale lane mapping from MLS point clouds},
  journal = {International Journal of Applied Earth Observation and Geoinformation},
  volume  = {133},
  pages   = {104139},
  year    = {2024},
  issn    = {1569-8432},
  doi     = {10.1016/j.jag.2024.104139},
  url     = {https://www.sciencedirect.com/science/article/pii/S156984322400493X},
  author  = {Xiaoxin Mi and Zhen Dong and Zhipeng Cao and Bisheng Yang and Zhen Cao and Chao Zheng and Jantien Stoter and Liangliang Nan}
}

🔗 Related Projects

We sincerely thank the excellent projects:
