
EvDistill: Asynchronous Events to End-task Learning via Bidirectional Reconstruction-guided Cross-modal Knowledge Distillation (CVPR'21)

We have updated our paper by adding more references and correcting some typos. For the latest version, please refer to the arXiv paper at https://arxiv.org/pdf/2111.12341.pdf.

Citation

If you find this resource helpful, please cite the paper as follows:

@inproceedings{wangevdistill,
  title={EvDistill: Asynchronous Events to End-task Learning via Bidirectional Reconstruction-guided Cross-modal Knowledge Distillation},
  author={Wang, Lin and Chae, Yujeong and Yoon, Sung-Hoon and Kim, Tae-Kyun and Yoon, Kuk-Jin},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2021}
}
  • Semantic segmentation on the DDD17 dataset

[figure: qualitative segmentation results on DDD17]

  • Semantic segmentation on the MVSEC dataset

[figure: qualitative segmentation results on MVSEC]

Setup

Download

git clone https://github.com/addisonwang2013/evdistill/

Make your own environment

conda create -n myenv python=3.7
conda activate myenv

Install the requirements

cd evdistill

pip install -r requirements.txt

Download the DDD17 test and MVSEC test datasets from these links: DDD17 test and MVSEC test

  • For the DDD17 dataset, please put the data into ./dataset/ddd17/
  • For the MVSEC dataset, please put the data into ./dataset/mvsec/ (the expected layout is sketched below)
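After downloading, the top-level layout should look roughly as follows (only the folders mentioned above are shown; the file names inside each dataset folder are omitted here):

evdistill/
└── dataset/
    ├── ddd17/
    └── mvsec/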

Download the pretrained models from this link: checkpoints

  • Put the checkpoints of the event and aps segmentation networks (e.g., for the DDD17 dataset) into ./res/

Modify configurations.py in the configs folder with the relevant paths to the test data and checkpoints.

  • For the test data, e.g. DDD17, please assign the path to ./test_dir/ddd17/
  • For the checkpoint of the aps network, please assign the path to ./res/ddd17/ddd17_aps_ckpt.pth
  • For the checkpoint of the event network, please assign the path to ./res/ddd17/ddd17_event_ckpt.pth (a sketch of these entries follows below)
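A minimal sketch of what these entries in configs/configurations.py might look like; the variable names below are illustrative assumptions, not necessarily the repository's actual identifiers:

# Hypothetical excerpt of configs/configurations.py -- variable names are assumptions.
test_data_dir   = './test_dir/ddd17/'                 # DDD17 test data
aps_ckpt_path   = './res/ddd17/ddd17_aps_ckpt.pth'    # aps segmentation checkpoint
event_ckpt_path = './res/ddd17/ddd17_event_ckpt.pth'  # event segmentation checkpoint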

Test the mIoU of the event and aps segmentation networks:

python test_evdistill.py

Visualize semantic segmentation results for both event and aps data:

python visualize.py

Note

In this work, for convenience, the event data are embedded and stored as multi-channel event images, which are then paired with the aps frames. It is also possible to feed the embedded raw event data directly to the student network together with the aps frames.
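As a rough illustration of such an embedding (a sketch only; the number of channels and the normalization used in the repository are assumptions here), raw events can be binned into a fixed number of temporal channels and stacked into an image-like tensor that the student network can consume alongside the paired aps frame:

import numpy as np

def events_to_multichannel_image(xs, ys, ts, ps, height, width, num_bins=5):
    """Bin raw events (x, y, timestamp, polarity) into a multi-channel event image.

    Illustrative sketch: the repository's actual embedding may differ in the
    number of channels and in how polarities/timestamps are normalized.
    """
    voxel = np.zeros((num_bins, height, width), dtype=np.float32)
    if len(ts) == 0:
        return voxel
    # Map each timestamp to a temporal bin in [0, num_bins).
    t_norm = (ts - ts[0]) / max(ts[-1] - ts[0], 1e-9) * (num_bins - 1e-6)
    bins = t_norm.astype(np.int64)
    # Accumulate signed polarities at each (bin, y, x) location.
    np.add.at(voxel, (bins, ys.astype(np.int64), xs.astype(np.int64)), ps.astype(np.float32))
    return voxel

# The resulting (num_bins, H, W) tensor plays the role of the stored
# multi-channel event image and is paired with the corresponding aps frame.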

Acknowledgement

The skeleton code is inspired by DeepLab-v3-Plus.

License

MIT
