APSNet: Amodal Panoptic Segmentation

APSNet is a top-down approach for amodal panoptic segmentation, where the goal is to concurrently predict pixel-wise semantic segmentation labels for the visible regions of "stuff" classes (e.g., road, sky) and instance segmentation labels for both the visible and occluded regions of "thing" classes (e.g., car, truck).

Illustration of Amodal Panoptic Segmentation task
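To make the task definition concrete, here is a minimal sketch (not part of the APSNet codebase; all class and field names are hypothetical) of how a joint amodal panoptic prediction for a single image could be structured:

```python
# Illustrative sketch (not the APSNet API): one way to represent an amodal
# panoptic prediction for a single image. Names and shapes are hypothetical
# and only mirror the task definition above.
from dataclasses import dataclass, field
from typing import List

import numpy as np


@dataclass
class AmodalInstance:
    """One 'thing' instance with visible and amodal (visible + occluded) masks."""
    category: str                 # e.g., "car" or "truck"
    visible_mask: np.ndarray      # boolean HxW mask of the visible region
    amodal_mask: np.ndarray       # boolean HxW mask including occluded parts


@dataclass
class AmodalPanopticPrediction:
    """Joint output: per-pixel 'stuff' semantics plus per-instance amodal masks."""
    semantic_map: np.ndarray                  # HxW integer labels for visible 'stuff' regions
    instances: List[AmodalInstance] = field(default_factory=list)


# Toy example on a 4x4 image: pixels labelled 0 ("road") or 1 ("sky"),
# plus one "car" whose amodal mask extends beyond its visible mask.
h = w = 4
semantic_map = np.zeros((h, w), dtype=np.int64)
semantic_map[:2] = 1              # top half is "sky"

visible = np.zeros((h, w), dtype=bool)
visible[2:4, 1:3] = True          # visible part of the car
amodal = visible.copy()
amodal[2:4, 3] = True             # occluded column recovered by amodal prediction

prediction = AmodalPanopticPrediction(
    semantic_map=semantic_map,
    instances=[AmodalInstance("car", visible, amodal)],
)

# The occluded region is what the amodal mask adds on top of the visible mask.
occluded = prediction.instances[0].amodal_mask & ~prediction.instances[0].visible_mask
print("occluded pixels:", int(occluded.sum()))  # -> 2
```

In this representation, the per-pixel semantic map covers the visible "stuff" regions, while each "thing" instance carries both its visible mask and its amodal mask; their difference is the occluded region.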

This repository contains the PyTorch implementation of our CVPR'2022 paper Amodal Panoptic Segmentation. The repository builds on EfficientPS, mmdetection, gen-efficientnet-pytorch, and pycls codebases.

Additionally, this repository includes an implementation of EfficientPS compatible with PyTorch version 1.12.

If you find this code useful for your research, we kindly ask you to consider citing our papers:

For Amodal Panoptic Segmentation:

@inproceedings{mohan2022amodal,
  title={Amodal panoptic segmentation},
  author={Mohan, Rohit and Valada, Abhinav},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={21023--21032},
  year={2022}
}

For EfficientPS: Efficient Panoptic Segmentation:

@article{mohan2020efficientps,
  title={Efficientps: Efficient panoptic segmentation},
  author={Mohan, Rohit and Valada, Abhinav},
  journal={International Journal of Computer Vision (IJCV)},
  year={2021}
}

System Requirements

  • Linux
  • Python 3.9
  • PyTorch 1.12.1
  • CUDA 11
  • GCC 7 or 8

IMPORTANT NOTE: These requirements are not strictly mandatory; however, we have only tested the code with the above setup and cannot provide support for other configurations. A quick check of your environment is sketched below.
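The following minimal sanity check (a sketch, not part of the repository) prints the versions that this README lists as tested:

```python
# Optional environment check: compares the local setup against the versions
# listed above (Python 3.9, PyTorch 1.12.1, CUDA 11).
import platform
import sys

import torch

print("Python :", platform.python_version())   # tested with 3.9
print("PyTorch:", torch.__version__)           # tested with 1.12.1
print("CUDA   :", torch.version.cuda)          # tested with CUDA 11
print("GPU OK :", torch.cuda.is_available())

if sys.version_info[:2] != (3, 9) or not torch.__version__.startswith("1.12"):
    print("Warning: untested Python/PyTorch combination; see the note above.")
```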

Installation

Please refer to the installation documentation for detailed instructions.

Dataset Preparation

Please refer to the dataset documentation for detailed instructions.

Usage

For detailed instructions on training, evaluation, and inference processes, please refer to the usage documentation.

Pre-Trained Models

Pre-trained models can be found in the model zoo.

Acknowledgements

We have used utility functions from other open-source projects. We especially thank the authors of EfficientPS, mmdetection, gen-efficientnet-pytorch, and pycls.

Contacts

License

For academic usage, the code is released under the GPLv3 license. For any commercial purpose, please contact the authors.