Frustum ConvNet: Sliding Frustums to Aggregate Local Point-Wise Features for Amodal 3D Object Detection
This repository contains the code for Frustum ConvNet adapted to the aiMotive dataset.
If you use this work, please cite the original paper (IROS 2019, [arXiv], [IEEEXplore]):
```bibtex
@inproceedings{wang2019frustum,
  title={Frustum ConvNet: Sliding Frustums to Aggregate Local Point-Wise Features for Amodal 3D Object Detection},
  author={Wang, Zhixin and Jia, Kui},
  booktitle={2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  pages={1742--1749},
  year={2019},
  organization={IEEE}
}
```
You can create the environment with Docker.
- Environment
- Ubuntu 22.04
- CUDA 11.7
- pytorch==1.13.0+cu117
- RTX 3060
```bash
git clone https://github.com/tier4/frustum-convnet.git
cd frustum-convnet
make build_docker
make run_docker
```

Inside the container, compile the custom ops:

```bash
cd ops
bash clean.sh
bash make.sh
```
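At this point it can help to confirm that the container sees the GPU and the CUDA 11.7 build of PyTorch listed above. A minimal check (this snippet is not part of the repository):

```python
# Sanity check that the container exposes the GPU and the expected
# CUDA build of PyTorch. This is only a convenience check, not part
# of the repository.
import torch

print(torch.__version__)          # expected: 1.13.0+cu117
print(torch.version.cuda)         # expected: 11.7
print(torch.cuda.is_available())  # expected: True
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. an RTX 3060
```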
Download the aiMotive dataset from here and organize it as follows.
```
aimotive/data/train
├── highway
├── night
├── rain
└── urban
```
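A small pre-flight sketch (not part of the repository) to verify the layout before converting; the four folder names are taken from the tree above:

```python
# Hypothetical check that the aiMotive train split is organized as
# shown above. Not part of the repository.
from pathlib import Path

root = Path("aimotive/data/train")
expected = ["highway", "night", "rain", "urban"]
missing = [name for name in expected if not (root / name).is_dir()]
if missing:
    raise SystemExit(f"Missing dataset folders: {missing}")
print("aiMotive train split looks complete.")
```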
Convert the aiMotive dataset to the KITTI format:
```bash
cd frustum-convnet
python3 aimotive/adapter.py
python3 kitti/spliter.py
```
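For reference, a KITTI-format label file stores one object per line with 15 fields: class, truncation, occlusion, observation angle, 2D box, 3D dimensions, 3D location in camera coordinates, and rotation around the Y axis. The sketch below parses such a line; the field order is the standard KITTI convention, and whether `aimotive/adapter.py` emits exactly this layout is an assumption:

```python
# Sketch of parsing one line of a KITTI-style label file. The 15-field
# layout is the standard KITTI convention; that adapter.py emits exactly
# this layout is an assumption.
def parse_kitti_label(line: str) -> dict:
    f = line.split()
    return {
        "type": f[0],                                # e.g. "Car"
        "truncated": float(f[1]),
        "occluded": int(f[2]),
        "alpha": float(f[3]),                        # observation angle
        "bbox_2d": [float(x) for x in f[4:8]],       # left, top, right, bottom
        "dimensions": [float(x) for x in f[8:11]],   # height, width, length
        "location": [float(x) for x in f[11:14]],    # x, y, z (camera frame)
        "rotation_y": float(f[14]),
    }

example = "Car 0.00 0 -1.58 587.01 173.33 614.12 200.12 1.65 1.67 3.64 -0.65 1.71 46.70 -1.59"
print(parse_kitti_label(example))
```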
If DVC is not installed, run `pip install dvc[gdrive]` first, then pull the data:
```bash
cd frustum-convnet_aiMotive
dvc pull
```
Generate the frustum data:

```bash
python3 kitti/prepare_data.py --gen_train --gen_val --gen_val_rgb_detection --gen_vis_rgb_detection
```
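To spot-check the output, the script is expected to write pickled frustum samples; the path and file name below are assumptions, so adjust them to whatever the script actually produces:

```python
# Hedged sketch: peek at one of the pickled frustum files written by
# prepare_data.py. The path and file name are assumptions; adjust them
# to whatever the script actually writes.
import pickle

with open("kitti/data/pickle_data/frustum_train.pickle", "rb") as fp:
    data = pickle.load(fp)

print(type(data))
if isinstance(data, (list, tuple)):
    print(f"{len(data)} frustum samples")
```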
Train the model:

```bash
python3 train/train_net_det.py --cfg cfgs/det_aimotive.yaml OUTPUT_DIR output/aimotive_train
```

On a remote machine, run training in the background with `nohup`:

```bash
nohup python3 train/train_net_det.py --cfg cfgs/det_aimotive.yaml OUTPUT_DIR output/aimotive_train > nohup-train.out &
```
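The trailing `OUTPUT_DIR output/aimotive_train` pair overrides a config key from the command line, YACS-style. A minimal sketch of that mechanism, assuming the project uses a YACS-like config object (the `yacs` usage shown here is an assumption, not this project's actual config code):

```python
# Minimal sketch of YACS-style command-line overrides, the mechanism
# behind the trailing "OUTPUT_DIR output/aimotive_train" pair above.
# That this project uses yacs exactly as shown is an assumption.
from yacs.config import CfgNode as CN

cfg = CN()
cfg.OUTPUT_DIR = "output/default"

# Key/value pairs from argv are merged over the YAML-loaded defaults.
cfg.merge_from_list(["OUTPUT_DIR", "output/aimotive_train"])
print(cfg.OUTPUT_DIR)  # -> output/aimotive_train
```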
Visualize the results:

```bash
python3 visualization/make_dataset.py  # for full-length visualization
python3 visualization/visualization.py
```
The official Frustum ConvNet code is available here.
Part of the code was adapted from F-PointNets.
The official Frustum ConvNet code is released under the MIT license.