NISP: Pruning Networks Using Neuron Importance Score Propagation

@inproceedings{Yu2018NISPPN,
  title={NISP: Pruning Networks Using Neuron Importance Score Propagation},
  author={Ruichi Yu and Ang Li and C. Chen and Jui-Hsin Lai and Vlad I. Morariu and Xintong Han and Mingfei Gao and Ching-Yung Lin and L. Davis},
  booktitle={2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2018},
  pages={9194--9203}
}
To reduce the significant redundancy in deep Convolutional Neural Networks (CNNs), most existing methods prune neurons by considering only the statistics of an individual layer or of two consecutive layers (e.g., pruning one layer to minimize the reconstruction error of the next), ignoring how errors propagate through deep networks. In contrast, we argue that for a pruned network to retain its predictive power, it is essential to prune neurons across the entire network jointly, based…
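The core idea behind NISP is a backward propagation of importance: starting from importance scores assigned to the neurons of the final response layer (e.g., by a feature-ranking method), each earlier neuron's score is the sum of the scores of the neurons it feeds, weighted by the absolute values of the connecting weights. Below is a minimal NumPy sketch of that propagation rule under the simplifying assumption of fully connected layers; the function name `propagate_importance` and the toy layer sizes are illustrative, not from the paper.

```python
import numpy as np

def propagate_importance(weights, final_scores):
    """Propagate neuron importance scores from the final response layer
    back to every earlier layer (NISP-style propagation; a sketch).

    weights: list of weight matrices ordered input-to-output;
             weights[l] has shape (n_neurons_layer_l+1, n_neurons_layer_l).
    final_scores: importance of each neuron in the last layer, shape (n_last,).

    Returns one score vector per layer, ordered input-to-output.
    Rule: s_l = |W_{l+1}|^T s_{l+1}, i.e. a neuron's score is the
    absolute-weight-weighted sum of the scores of the neurons it feeds.
    """
    scores = [final_scores]
    for W in reversed(weights):
        scores.append(np.abs(W).T @ scores[-1])
    return scores[::-1]

# Toy example (sizes are illustrative): an MLP with layers 8 -> 6 -> 4.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((6, 8)), rng.standard_normal((4, 6))]
final_scores = rng.random(4)  # e.g., produced by a feature-ranking method

layer_scores = propagate_importance(weights, final_scores)

# Prune the half of the hidden layer with the lowest propagated scores.
hidden_scores = layer_scores[1]
keep = np.argsort(hidden_scores)[len(hidden_scores) // 2:]
print("kept neurons in hidden layer:", np.sort(keep))
```

After pruning the lowest-scoring neurons layer by layer, the paper fine-tunes the smaller network to recover accuracy; the sketch above only covers the score-propagation step.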
330 Citations
Gradual Channel Pruning While Training Using Feature Relevance Scores for Convolutional Neural Networks
  • 3 citations
Importance Estimation for Neural Network Pruning
  • 131 citations
Neuron Merging: Compensating for Pruned Neurons
  • Highly Influenced
Where to Prune: Using LSTM to Guide Data-Dependent Soft Pruning
  • 1 citation
Redundancy-Aware Pruning of Convolutional Neural Networks
  • Guotian Xie, Neural Computation, 2020
  • Highly Influenced
Asymptotic Soft Filter Pruning for Deep Convolutional Neural Networks
  • 19 citations
Discriminative Layer Pruning for Convolutional Neural Networks
  • 5 citations
  • Highly Influenced
