Toward Compact ConvNets via Structure-Sparsity Regularized Filter Pruning

@article{Lin2020TowardCC,
  title={Toward Compact ConvNets via Structure-Sparsity Regularized Filter Pruning},
  author={Shaohui Lin and R. Ji and Yuchao Li and Cheng Deng and X. Li},
  journal={IEEE Transactions on Neural Networks and Learning Systems},
  year={2020},
  volume={31},
  pages={574--588}
}
The success of convolutional neural networks (CNNs) in computer vision applications has been accompanied by a significant increase in computation and memory costs, which prohibits their use in resource-limited environments such as mobile systems or embedded devices. [...] Key Method: Concretely, the proposed scheme incorporates two different structured-sparsity regularizers into the original objective function of filter pruning, which fully coordinates the global output and local pruning operations to…
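The "Key Method" excerpt describes adding structured-sparsity regularizers to the filter-pruning objective. As a rough, non-authoritative illustration of that general idea (not the paper's exact formulation, which couples a global output term with local pruning operations), the PyTorch sketch below adds a group-lasso penalty with one group per convolutional output filter, so whole filters are driven toward zero and can later be removed; the penalty weight `lam`, the pruning threshold, and the toy model are placeholder assumptions.

```python
import torch
import torch.nn as nn

def structured_sparsity_penalty(model: nn.Module, lam: float = 1e-4) -> torch.Tensor:
    """Group-lasso penalty with one group per conv output filter.

    Penalizing the sum of per-filter L2 norms pushes entire filters toward
    zero, which is the structural granularity filter pruning needs. `lam`
    is a hypothetical strength, not a value taken from the paper.
    """
    terms = []
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            w = m.weight                                       # (out_ch, in_ch, kH, kW)
            terms.append(w.flatten(1).norm(p=2, dim=1).sum())  # one L2 norm per filter
    if not terms:
        return torch.tensor(0.0)
    return lam * torch.stack(terms).sum()

# Toy training step: task loss plus the structured-sparsity penalty.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1),
)
x, y = torch.randn(8, 3, 32, 32), torch.randn(8, 32, 32, 32)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

optimizer.zero_grad()
loss = criterion(model(x), y) + structured_sparsity_penalty(model, lam=1e-4)
loss.backward()
optimizer.step()

# After training, filters whose norm fell below a (hypothetical) threshold
# are candidates for pruning.
with torch.no_grad():
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            norms = m.weight.flatten(1).norm(p=2, dim=1)
            keep = norms > 1e-3
            print(f"{int(keep.sum())}/{norms.numel()} filters kept")
```

Note that actually removing a pruned filter also requires dropping the matching input channel of the following layer and any associated batch-norm parameters; the paper's scheme coordinates this globally, which the sketch above does not attempt.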
48 Citations
Towards Optimal Structured CNN Pruning via Generative Adversarial Learning
  • Shaohui Lin, R. Ji, +5 authors D. Doermann
  • 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019
  • Cited 124 times
Dependency Aware Filter Pruning
Global Sparse Momentum SGD for Pruning Very Deep Neural Networks
  • Cited 41 times
Centripetal SGD for Pruning Very Deep Convolutional Networks With Complicated Structure
  • Cited 56 times
EDP: An Efficient Decomposition and Pruning Scheme for Convolutional Neural Network Compression
  • Cited 1 time
  • Highly Influenced
Lossless CNN Channel Pruning via Gradient Resetting and Convolutional Re-parameterization
  • Cited 1 time
Pruning Blocks for CNN Compression and Acceleration via Online Ensemble Distillation
  • Cited 5 times
Learning sparse deep neural networks using efficient structured projections on convex constraints for green AI
  • M. Barlaud, F. Guyard
  • 2020 25th International Conference on Pattern Recognition (ICPR), 2021
Campfire: Compressible, Regularization-Free, Structured Sparse Training for Hardware Accelerators
  • Cited 4 times
Neural Network Compression Via Sparse Optimization
  • Cited 1 time
