Compress networks using PyTorch - Pruning and Quantization

This is a complete training example for Deep Convolutional Networks on ImageNet.

Currently, the compression methods are based on the techniques below:

  • Taylor Expansion (a good summary of this approach can be found here); see the ranking sketch after this list.
  • Attention Transfer, from the paper "Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer" (ICLR 2017)
  • Knowledge Distillation, from the paper "Distilling the Knowledge in a Neural Network" (NIPS 2014)
  • Quantization (some code is adapted from here)
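
The Taylor-expansion criterion ranks each channel by how much the loss would change if that channel were removed, approximated by the product of the channel's activation and the gradient of the loss with respect to that activation. Below is a minimal sketch of that ranking step written against plain PyTorch; it is not code from this repo, and the hook names and toy model are illustrative only.

```python
import torch
import torch.nn as nn

scores = {}  # layer name -> per-channel importance (higher = keep)

def taylor_hook(name):
    # Forward hook that also registers a gradient hook on the layer output,
    # so activation and gradient are available together after backward().
    def forward_hook(module, inputs, output):
        def grad_hook(grad):
            # Molchanov-style criterion: average activation*gradient over the
            # spatial dimensions, take the absolute value, average over the batch.
            per_sample = (output * grad).mean(dim=3).mean(dim=2)   # shape (N, C)
            scores[name] = scores.get(name, 0) + per_sample.abs().mean(dim=0).detach()
        output.register_hook(grad_hook)
    return forward_hook

# Toy network standing in for a ResNet; any Conv2d layer can be ranked this way.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
for name, m in model.named_modules():
    if isinstance(m, nn.Conv2d):
        m.register_forward_hook(taylor_hook(name))

x = torch.randn(8, 3, 32, 32)
model(x).mean().backward()      # any scalar loss works for this sketch
# Channels with the smallest scores are the first candidates for pruning.
print({k: v.argsort()[:4].tolist() for k, v in scores.items()})
```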

Dependencies

  • Python 3.6.3
  • PyTorch 1.0

To clone:

git clone https://github.com/Yifan122/network_compress

Example of pruning a ResNet network:

python resnet_prune.py --train_path /home/to/imagenet/training/dataset --val_path /home/to/imagenet/validation/dataset
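
Knowledge distillation, listed among the techniques above, is typically used when fine-tuning the pruned student network against the original model. A minimal sketch of the standard Hinton-style soft-target loss in plain PyTorch is shown below; it is not taken from this repo, and the temperature and weighting values are illustrative.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # Soft-target term: KL divergence between temperature-softened distributions,
    # scaled by T^2 as in Hinton et al. Hard-target term: ordinary cross-entropy.
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction='batchmean') * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Example: a pruned "student" mimicking the original "teacher" on one random batch.
student_logits = torch.randn(8, 1000, requires_grad=True)
teacher_logits = torch.randn(8, 1000)
labels = torch.randint(0, 1000, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```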