Code for "SecondPose: SE(3)-Consistent Dual-Stream Feature Fusion for Category-Level Pose Estimation", Preprint. [arXiv]
The code has been tested with:
- python 3.9
- pytorch 1.13.0
- CUDA 11.6

Other dependencies are listed in requirements.txt.
Set up the environment:

```shell
conda create -n secondpose python=3.9
conda activate secondpose
pip install torch==1.13.0+cu116 torchvision==0.14.0+cu116 torchaudio==0.13.0 --extra-index-url https://download.pytorch.org/whl/cu116
cd lib/pointnet2/
pip install .
cd ../sphericalmap_utils/
pip install .
cd ../../
pip install -r requirements.txt
pip install open3d
```
- Please refer to the work of Self-DPDN for data preparation.
- Then run `data_preprocess.py` to preprocess the data.
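As a rough illustration of what such preprocessing typically involves (the actual `data_preprocess.py` may differ), depth maps are commonly back-projected into point clouds using the camera intrinsics; the function and parameter names below are assumptions, not the repository's API:

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth map of shape (H, W), in meters, to (N, 3) points.

    Generic sketch of a common preprocessing step for category-level pose
    estimation, not the repository's actual code; fx, fy, cx, cy are assumed
    camera intrinsics.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # keep only valid (positive-depth) pixels

# Example: a flat 4x4 depth map one meter from the camera
depth = np.ones((4, 4), dtype=np.float32)
pts = depth_to_pointcloud(depth, fx=600.0, fy=600.0, cx=2.0, cy=2.0)
print(pts.shape)  # (16, 3)
```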
Train SecondPose for rotation estimation:

```shell
python train_geodino.py --gpus 0 --mod r
```
Train the PointNet++ network for translation and size estimation:

```shell
python train.py --gpus 0 --mod ts
```
To test the model, please run:

```shell
python test_geodino.py --gpus 0 --test_epoch [YOUR EPOCH]
```
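For reference, category-level pose benchmarks typically report rotation error as the geodesic distance between the predicted and ground-truth rotations. A minimal sketch of that metric (a generic example, not the repository's evaluation code, which may treat symmetric categories differently):

```python
import numpy as np

def rotation_error_deg(R_pred, R_gt):
    """Geodesic distance between two 3x3 rotation matrices, in degrees.

    Generic evaluation sketch; not taken from the repository's test code.
    """
    cos = (np.trace(R_pred @ R_gt.T) - 1.0) / 2.0
    cos = np.clip(cos, -1.0, 1.0)  # guard against numerical drift
    return np.degrees(np.arccos(cos))

# A 90-degree rotation about the z-axis, compared with the identity
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
print(rotation_error_deg(Rz, np.eye(3)))  # 90.0
```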
A trained model checkpoint is available here: https://drive.google.com/file/d/1IafKC84XstivhUkm-4rF15KsSHaeBbic/view?usp=sharing
Our implementation leverages the code from VI-Net.
Our code is released under MIT License (see LICENSE file for details).