Jiao Zhan, Chi Guo, Bohan Yang, Yarong Luo, Yejun Wu, Jingyi Deng and Jingnan Liu
2024-02-16: We've uploaded some code, and the full code will be released soon!
```bash
conda create -n MAS-Net python=3.8 -y
conda activate MAS-Net
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.3 -c pytorch
git clone https://github.com/jiaoZ7688/MAS-Net
cd MAS-Net/
pip install -r requirements.txt
python3 setup.py build develop
```
Note: Detectron2 must be installed successfully before proceeding.
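As a quick sanity check that the environment and Detectron2 built correctly, a snippet like the following should run without errors (a minimal sketch; it only verifies imports and CUDA visibility):

```python
# Environment sanity check (illustrative; not part of the repo).
import torch
import detectron2

print("torch:", torch.__version__)               # expect 1.12.1
print("detectron2:", detectron2.__version__)
print("CUDA available:", torch.cuda.is_available())
```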
- Download the images from the KITTI dataset.
- The amodal annotations can be found at the KINS dataset.
- The D2S Amodal dataset can be found at mvtec-d2sa.
- The COCOA dataset annotations are from here (referenced from github.com/YihongSun/Bayesian-Amodal). The images of the COCOA dataset are the train2014 and val2014 splits of the COCO dataset.
MAS-Net supports datasets in COCO format. The layout can be as follows (it need not match exactly, since it depends on the dataset registration code):
```
KINS/
|--train_imgs/
|--test_imgs/
|--annotations/
|----train.json
|----test.json
```
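For reference, `train.json` and `test.json` follow the standard COCO annotation schema; a sketch of the core fields is below (illustrative values only; the amodal datasets add their own mask fields on top of these):

```python
# Core COCO-format structure (illustrative sketch; KINS/D2SA/COCOA
# annotations carry additional amodal-specific fields per instance).
coco_format = {
    "images": [
        {"id": 1, "file_name": "000000.png", "height": 370, "width": 1224},
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "category_id": 1,
            "bbox": [100.0, 150.0, 80.0, 60.0],  # XYWH, in pixels
            "segmentation": [[100.0, 150.0, 180.0, 150.0,
                              180.0, 210.0, 100.0, 210.0]],  # polygon
            "area": 4800.0,
            "iscrowd": 0,
        },
    ],
    "categories": [
        {"id": 1, "name": "car"},
    ],
}
```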
Then, see here for more details on data registration.
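With Detectron2, registering a COCO-format dataset typically looks like the following (a minimal sketch; the dataset name and paths are placeholders, and the repo's own registration code may differ):

```python
# Minimal Detectron2 dataset registration (placeholder name and paths).
from detectron2.data.datasets import register_coco_instances

register_coco_instances(
    "kins_dataset_train",                    # name referenced in configs
    {},                                      # extra metadata (none here)
    "/path/to/KINS/annotations/train.json",  # COCO-format annotations
    "/path/to/KINS/train_imgs",              # image root directory
)
```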
After registering, run the preprocessing script to generate occluder mask annotations, for example:
```bash
python -m detectron2.data.datasets.process_data_amodal \
    /path/to/KINS/train.json \
    /path/to/KINS/train_imgs \
    kins_dataset_train
```
The expected new annotation layout is as follows:
```
KINS/
|--train_imgs/
|--test_imgs/
|--annotations/
|----train.json
|----train_amodal.json
|----test.json
```
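To confirm that the generated file is valid COCO-style JSON, it can be loaded with pycocotools (a quick check, assuming pycocotools is available in the environment; the path is a placeholder):

```python
# Quick validity check on the generated amodal annotations (sketch).
from pycocotools.coco import COCO

coco = COCO("/path/to/KINS/annotations/train_amodal.json")
print("images:", len(coco.getImgIds()))
print("annotations:", len(coco.getAnnIds()))
```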
Configuration files for training MAS-Net on each dataset are available here.
To train, test, and run the demo, see the example scripts in `scripts/`.
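Internally, a Detectron2-based training launch reduces to something like the following (a minimal sketch, assuming a hypothetical config path; the scripts in `scripts/` are the supported entry points):

```python
# Sketch of a Detectron2 training launch. The config path is hypothetical;
# real MAS-Net configs may add custom keys that require the repo's setup code.
from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer

cfg = get_cfg()
cfg.merge_from_file("configs/KINS/mas_net_R50.yaml")  # hypothetical path
cfg.DATASETS.TRAIN = ("kins_dataset_train",)

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```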
Pretrained models:
- MAS-Net R50 on KINS (here). Extraction code: vl54
- MAS-Net R50 on D2SA (TBA)
- MAS-Net R50 on COCOA-cls (TBA)
- This code uses AISFormer as its basis.
- It uses BCNet for dataset mapping with occluders, VRSP-Net for amodal evaluation, and Detectron2 as the overall pipeline with the Mask R-CNN meta-architecture.