
ResNeSt (Detectron2 Wrapper)

Code for detection and instance segmentation experiments in ResNeSt.

Training and Inference

Please follow INSTALL.md to install detectron2.

To train a model with 8 GPUs, please run

python train_net.py --num-gpus 8 --config-file your_config.yaml

For inference

python train_net.py \
    --config-file your_config.yaml \
    --eval-only MODEL.WEIGHTS /path/to/checkpoint_file

For the inference demo, please see GETTING_STARTED.md.
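
Beyond the command line, inference can also be scripted with detectron2's DefaultPredictor. The sketch below is a minimal illustration, not part of this repo: the config and checkpoint paths are placeholders, and it assumes the ResNeSt-specific config keys and backbone registration provided by this repo have already been imported.

import cv2
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
# Assumes the ResNeSt config additions/backbone from this repo are registered first.
cfg.merge_from_file("your_config.yaml")           # placeholder config path
cfg.MODEL.WEIGHTS = "/path/to/checkpoint_file"    # placeholder checkpoint path
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5       # confidence threshold for predictions

predictor = DefaultPredictor(cfg)
image = cv2.imread("input.jpg")                   # detectron2 expects a BGR image
outputs = predictor(image)
print(outputs["instances"].pred_boxes, outputs["instances"].scores)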

Pretrained Models

Object Detection

Method         Backbone                 mAP%   download
Faster R-CNN   ResNet-50                39.25  config | model | log
Faster R-CNN   ResNet-101               41.37  config | model | log
Faster R-CNN   ResNeSt-50 (ours)        42.33  config | model | log
Faster R-CNN   ResNeSt-50-DCNv2 (ours)  44.11  config | model | log
Faster R-CNN   ResNeSt-101 (ours)       44.72  config | model | log
Cascade R-CNN  ResNet-50                42.52  config | model | log
Cascade R-CNN  ResNet-101               44.03  config | model | log
Cascade R-CNN  ResNeSt-50 (ours)        45.41  config | model | log
Cascade R-CNN  ResNeSt-101 (ours)       47.50  config | model | log
Cascade R-CNN  ResNeSt-200 (ours)       49.03  config | model | log

We train all models with FPN, SyncBN, and image scale augmentation (the short side of an image is picked randomly from 640 to 800). The 1x learning rate schedule is used. All results are reported on the COCO-2017 validation set.
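
For reference, a rough sketch of the detectron2 config overrides corresponding to this recipe (not the repo's actual config files) might look like the following; the key names are standard detectron2 options.

from detectron2.config import get_cfg

cfg = get_cfg()
cfg.MODEL.RESNETS.NORM = "SyncBN"            # synchronized BatchNorm in the backbone
cfg.MODEL.FPN.NORM = "SyncBN"                # and in the FPN
cfg.INPUT.MIN_SIZE_TRAIN = (640, 800)        # shorter side sampled from this range
cfg.INPUT.MIN_SIZE_TRAIN_SAMPLING = "range"  # pick uniformly in [640, 800]
cfg.SOLVER.MAX_ITER = 90000                  # standard 1x schedule on COCO
cfg.SOLVER.STEPS = (60000, 80000)            # LR drops of the 1x schedule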

Instance Segmentation

Method         Backbone                          bbox    mask    download
Mask R-CNN     ResNet-50                         39.97   36.05   config | model | log
Mask R-CNN     ResNet-101                        41.78   37.51   config | model | log
Mask R-CNN     ResNeSt-50 (ours)                 42.81   38.14   config | model | log
Mask R-CNN     ResNeSt-101 (ours)                45.75   40.65   config | model | log
Cascade R-CNN  ResNet-50                         43.06   37.19   config | model | log
Cascade R-CNN  ResNet-101                        44.79   38.52   config | model | log
Cascade R-CNN  ResNeSt-50 (ours)                 46.19   39.55   config | model | log
Cascade R-CNN  ResNeSt-101 (ours)                48.30   41.56   config | model | log
Cascade R-CNN  ResNeSt-200-tricks-3x (ours)      50.54   44.21   config | model | log
Cascade R-CNN  ResNeSt-200-dcn-tricks-3x (ours)  50.91   44.50   config | model | log
Cascade R-CNN  ResNeSt-200-dcn-tricks-3x (ours)  53.30*  47.10*  (multi-scale, test-dev2019)

All models are trained with FPN and SyncBN. For data augmentation, the shorter side of input images is randomly scaled to one of (640, 672, 704, 736, 768, 800). The 1x learning rate schedule is used unless otherwise specified. All results are reported on the COCO-2017 validation set. Values marked with * are multi-scale testing results on test-dev2019.
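
Multi-scale testing (the * values) can be approximated with detectron2's built-in test-time augmentation wrapper. The sketch below is illustrative only and not necessarily the exact setup used for the numbers above; config and checkpoint paths are placeholders.

from detectron2.checkpoint import DetectionCheckpointer
from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer
from detectron2.modeling import GeneralizedRCNNWithTTA

cfg = get_cfg()
cfg.merge_from_file("your_config.yaml")          # placeholder config path
cfg.MODEL.WEIGHTS = "/path/to/checkpoint_file"   # placeholder checkpoint path
cfg.TEST.AUG.ENABLED = True                      # multi-scale + flip at test time
cfg.freeze()

model = DefaultTrainer.build_model(cfg)
DetectionCheckpointer(model).load(cfg.MODEL.WEIGHTS)
model = GeneralizedRCNNWithTTA(cfg, model)       # wraps the model for augmented inference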

Panoptic Segmentation

Backbone bbox mask PQ download
ResNeSt-200 51.00 43.68 47.90 config | model | log

Reference

ResNeSt: Split-Attention Networks [arXiv]

Hang Zhang, Chongruo Wu, Zhongyue Zhang, Yi Zhu, Zhi Zhang, Haibin Lin, Yue Sun, Tong He, Jonas Muller, R. Manmatha, Mu Li and Alex Smola

@article{zhang2020resnest,
  title={ResNeSt: Split-Attention Networks},
  author={Zhang, Hang and Wu, Chongruo and Zhang, Zhongyue and Zhu, Yi and Zhang, Zhi and Lin, Haibin and Sun, Yue and He, Tong and Muller, Jonas and Manmatha, R. and Li, Mu and Smola, Alexander},
  journal={arXiv preprint arXiv:2004.08955},
  year={2020}
}

Contributors

Chongruo Wu, Zhongyue Zhang, Hang Zhang