
# CSRNet-pytorch

This is the PyTorch implementation of CSRNet: Dilated Convolutional Neural Networks for Understanding the Highly Congested Scenes (CVPR 2018), a simple, end-to-end architecture that achieved state-of-the-art results on crowd-counting benchmarks.
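For orientation, here is a minimal sketch of the architecture as described in the paper: the first ten convolutional layers of VGG-16 as the frontend, followed by a dilated convolutional backend (dilation rate 2, the paper's configuration B). The authoritative model definition lives in model.py and may differ in details.

```python
import torch.nn as nn
from torchvision import models

class CSRNet(nn.Module):
    """Minimal sketch: VGG-16 frontend + dilated backend (paper's config B)."""

    def __init__(self):
        super(CSRNet, self).__init__()
        # Frontend: the first ten conv layers of VGG-16 (through conv4_3 + ReLU).
        vgg = models.vgg16(pretrained=True)
        self.frontend = nn.Sequential(*list(vgg.features.children())[:23])

        # Backend: six 3x3 convolutions with dilation rate 2, then a 1x1 output.
        def block(cin, cout):
            return [nn.Conv2d(cin, cout, 3, padding=2, dilation=2),
                    nn.ReLU(inplace=True)]
        self.backend = nn.Sequential(
            *(block(512, 512) + block(512, 512) + block(512, 512)
              + block(512, 256) + block(256, 128) + block(128, 64)))
        self.output_layer = nn.Conv2d(64, 1, kernel_size=1)

    def forward(self, x):
        x = self.frontend(x)         # features at 1/8 of the input resolution
        x = self.backend(x)
        return self.output_layer(x)  # predicted density map
```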

## Datasets

ShanghaiTech Dataset: Google Drive

## Prerequisites

We strongly recommend Anaconda for managing the environment.

- Python: 2.7
- PyTorch: 0.4.0
- CUDA: 9.2

## Installation

```bash
conda create -y -n py27 python=2.7
conda activate py27
conda install pytorch=0.4.0 cuda92 -c pytorch
conda install torchvision==0.2.2 -c pytorch
conda install h5py
pip install opencv-python==3.4.2.16
conda install numba
```
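After installing, you can sanity-check the environment with a short snippet (nothing repo-specific; the expected versions are the ones pinned above):

```python
import torch, torchvision, cv2

print(torch.__version__)          # expect 0.4.0
print(torchvision.__version__)    # expect 0.2.2
print(cv2.__version__)            # expect 3.4.2
print(torch.cuda.is_available())  # True if the CUDA 9.2 build sees a GPU
```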

## Ground Truth

Please follow make_dataset.ipynb to generate the ground-truth density maps; this step takes some time. Note that you also need to create your own JSON files listing the training and validation images.
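For reference, a minimal sketch of what the notebook does. The folder layout and .mat keys below follow the ShanghaiTech distribution used by make_dataset.ipynb; adjust them if your copy differs, and note the notebook uses geometry-adaptive Gaussian kernels for Part A rather than the fixed kernel shown here:

```python
import glob
import json

import cv2
import h5py
import numpy as np
import scipy.io as io
from scipy.ndimage import gaussian_filter

def make_density(img_path):
    # Head annotations ship as .mat files alongside the images.
    mat_path = (img_path.replace('images', 'ground_truth')
                        .replace('IMG_', 'GT_IMG_')
                        .replace('.jpg', '.mat'))
    points = io.loadmat(mat_path)['image_info'][0, 0][0, 0][0]  # (N, 2) head coords
    h, w = cv2.imread(img_path).shape[:2]
    density = np.zeros((h, w), dtype=np.float32)
    for x, y in points:
        if 0 <= int(y) < h and 0 <= int(x) < w:
            density[int(y), int(x)] = 1.0
    # Fixed Gaussian kernel for brevity; Part A uses geometry-adaptive kernels.
    density = gaussian_filter(density, sigma=15)
    with h5py.File(img_path.replace('.jpg', '.h5'), 'w') as f:
        f['density'] = density

img_paths = sorted(glob.glob('part_A_final/train_data/images/*.jpg'))
for p in img_paths:
    make_density(p)

# train.json / val.json are plain JSON lists of image paths.
with open('train.json', 'w') as f:
    json.dump(img_paths, f)
```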

## Training Process

Run `python train.py train.json val.json 0 0` to start training; the last two arguments are the GPU id and a task id used to tag saved checkpoints.
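Here train.json and val.json are plain JSON lists of image paths, e.g. (the paths below are illustrative):

```json
[
  "part_A_final/train_data/images/IMG_1.jpg",
  "part_A_final/train_data/images/IMG_2.jpg"
]
```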

## Validation

Follow val.ipynb to run validation. You can modify the notebook to inspect the predicted density map for each image.
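As a script, the validation loop boils down to roughly the following. This is a sketch assuming the CSRNet class from this repo's model.py, ground truth in .h5 files as generated above, and a hypothetical checkpoint name model_best.pth.tar storing a 'state_dict' entry:

```python
import json

import h5py
import numpy as np
import torch
from PIL import Image
from torchvision import transforms

from model import CSRNet  # model definition from this repo

transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = CSRNet().cuda()
checkpoint = torch.load('model_best.pth.tar')  # hypothetical checkpoint name
model.load_state_dict(checkpoint['state_dict'])
model.eval()

with open('val.json') as f:
    val_list = json.load(f)

mae = 0.0
for img_path in val_list:
    img = transform(Image.open(img_path).convert('RGB')).cuda()
    with torch.no_grad():
        pred = model(img.unsqueeze(0))
    gt = np.asarray(h5py.File(img_path.replace('.jpg', '.h5'), 'r')['density'])
    # A crowd count is the integral (sum) of its density map.
    mae += abs(pred.sum().item() - gt.sum())
print('MAE:', mae / len(val_list))
```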

## Results

- ShanghaiTech Part A MAE: 66.4 (Google Drive)
- ShanghaiTech Part B MAE: 10.6 (Google Drive)

## References

If you find CSRNet useful, please cite our paper. Thank you!

```bibtex
@inproceedings{li2018csrnet,
  title={CSRNet: Dilated convolutional neural networks for understanding the highly congested scenes},
  author={Li, Yuhong and Zhang, Xiaofan and Chen, Deming},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={1091--1100},
  year={2018}
}
```

Please also cite the ShanghaiTech dataset paper and other related works if you use them.

```bibtex
@inproceedings{zhang2016single,
  title={Single-image crowd counting via multi-column convolutional neural network},
  author={Zhang, Yingying and Zhou, Desen and Chen, Siqin and Gao, Shenghua and Ma, Yi},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={589--597},
  year={2016}
}
```