Learning to Prune in Training via Dynamic Channel Propagation

@article{Shen2021LearningTP,
  title={Learning to Prune in Training via Dynamic Channel Propagation},
  author={Shibo Shen and Rongpeng Li and Zhifeng Zhao and Honggang Zhang and Yugeng Zhou},
  journal={2020 25th International Conference on Pattern Recognition (ICPR)},
  year={2021},
  pages={939-945}
}
  • Shibo Shen, Rongpeng Li, +2 authors Yugeng Zhou
  • Published 3 July 2020
  • Computer Science, Mathematics
  • 2020 25th International Conference on Pattern Recognition (ICPR)
In this paper, we propose a novel network training mechanism called "dynamic channel propagation" that prunes neural networks during training. In particular, we select a specific group of channels in each convolutional layer to participate in forward propagation at training time, according to each channel's significance level, which we define as its channel utility. The utility values of all selected channels are updated simultaneously with the error back-propagation…
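The selection step described above can be illustrated with a minimal PyTorch-style sketch (not the authors' code): channels compete via a per-channel utility score, only the top-k take part in the forward pass, and the utilities are refreshed from gradient signals during back-propagation. The `DynamicChannelSelect` module, the `keep_ratio` parameter, and the particular utility proxy used here are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn

class DynamicChannelSelect(nn.Module):
    """Illustrative sketch: keep only the k highest-utility channels in the
    forward pass; utilities are refreshed from gradient signals during training."""

    def __init__(self, num_channels, keep_ratio=0.5, momentum=0.9):
        super().__init__()
        self.k = max(1, int(num_channels * keep_ratio))
        self.momentum = momentum
        # Running "channel utility" scores (hypothetical definition).
        self.register_buffer("utility", torch.zeros(num_channels))

    def forward(self, x):                       # x: (N, C, H, W)
        # Select the k channels with the largest utility and mask the rest.
        topk = torch.topk(self.utility, self.k).indices
        mask = torch.zeros_like(self.utility)
        mask[topk] = 1.0
        return x * mask.view(1, -1, 1, 1)

    @torch.no_grad()
    def update_utility(self, grad, activation):
        # One plausible utility proxy: mean |gradient * activation| per channel.
        score = (grad * activation).abs().mean(dim=(0, 2, 3))
        self.utility.mul_(self.momentum).add_((1 - self.momentum) * score)
```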
STAMP: Simultaneous Training and Model Pruning for Low Data Regimes in Medical Image Segmentation
TLDR
The STAMP algorithm is developed to allow simultaneous training and pruning of a UNet architecture for medical image segmentation, using targeted channel-wise dropout to make the network robust to pruning.

References

SHOWING 1-10 OF 28 REFERENCES
Weighted Channel Dropout for Regularization of Deep Convolutional Neural Network
TLDR
Weighted Channel Dropout is entirely parameter-free and is deployed only in the training phase at very slight computational cost; when combined with existing networks, it requires no re-pretraining on ImageNet and is thus well suited to applications on small datasets.
Channel Pruning for Accelerating Very Deep Neural Networks
  • Yihui He, X. Zhang, Jian Sun
  • Computer Science
    2017 IEEE International Conference on Computer Vision (ICCV)
  • 2017
TLDR
This paper proposes an iterative two-step algorithm to effectively prune each layer, by LASSO-regression-based channel selection and least-squares reconstruction, and generalizes the algorithm to multi-layer and multi-branch cases.
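As a rough, hedged illustration of the two-step idea summarized above (not the paper's actual implementation, which operates on sampled convolutional patches layer by layer), the sketch below uses scikit-learn's Lasso for sparse channel selection followed by a least-squares re-fit of the surviving channels; the function name and the `keep` parameter are hypothetical.

```python
import numpy as np
from sklearn.linear_model import Lasso

def select_channels_lasso(X, Y, alpha=1e-3, keep=0.5):
    """Illustrative two-step sketch: (1) LASSO picks a sparse subset of input
    channels, (2) least squares re-fits the weights on the kept channels.
    X: (samples, channels) per-channel contributions; Y: (samples,) target output."""
    lasso = Lasso(alpha=alpha, fit_intercept=False)
    lasso.fit(X, Y)
    # Keep the channels with the largest LASSO coefficients.
    order = np.argsort(-np.abs(lasso.coef_))
    kept = np.sort(order[: max(1, int(keep * X.shape[1]))])
    # Least-squares reconstruction on the surviving channels.
    w, *_ = np.linalg.lstsq(X[:, kept], Y, rcond=None)
    return kept, w
```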
Dynamic Channel Pruning: Feature Boosting and Suppression
TLDR
This paper proposes feature boosting and suppression (FBS), a new method to predictively amplify salient convolutional channels and skip unimportant ones at run-time; FBS is compared to a range of existing channel pruning and dynamic execution schemes, demonstrating large improvements on ImageNet classification.
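A minimal sketch of the boosting-and-suppression idea, under the assumption that a small auxiliary predictor estimates per-channel saliency from a globally pooled input and a winner-take-all step keeps only the top-k channels; the `FBSGate` module name and `keep_ratio` parameter are illustrative, not the paper's API.

```python
import torch
import torch.nn as nn

class FBSGate(nn.Module):
    """Sketch: a tiny side network predicts per-channel saliency from the
    layer input; only the top-k output channels are amplified, the rest zeroed."""

    def __init__(self, in_channels, out_channels, keep_ratio=0.5):
        super().__init__()
        self.saliency = nn.Linear(in_channels, out_channels)  # auxiliary predictor
        self.k = max(1, int(out_channels * keep_ratio))

    def forward(self, x):                          # x: (N, C_in, H, W)
        pooled = x.mean(dim=(2, 3))                # global average pool -> (N, C_in)
        s = torch.relu(self.saliency(pooled))      # predicted saliency per output channel
        topk = torch.topk(s, self.k, dim=1)
        gate = torch.zeros_like(s).scatter_(1, topk.indices, topk.values)
        return gate                                # multiply the conv output by gate[:, :, None, None]
```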
Learning Efficient Convolutional Networks through Network Slimming
TLDR
The approach, called network slimming, takes wide and large networks as input models; during training, insignificant channels are automatically identified and pruned afterwards, yielding thin and compact models with comparable accuracy.
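Network slimming's core mechanism, an L1 sparsity penalty on the batch-normalization scaling factors followed by thresholding of the learned factors, can be sketched as follows; the helper names and the threshold value are assumptions for illustration.

```python
import torch
import torch.nn as nn

def bn_l1_penalty(model, lam=1e-4):
    """Sparsity term: an L1 penalty on the scaling factors (gamma) of every
    BatchNorm layer, added to the training loss."""
    penalty = 0.0
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            penalty = penalty + m.weight.abs().sum()
    return lam * penalty

def channels_to_prune(model, threshold=1e-2):
    """Channels whose learned gamma fell below the threshold are treated as insignificant."""
    plan = {}
    for name, m in model.named_modules():
        if isinstance(m, nn.BatchNorm2d):
            plan[name] = (m.weight.abs() < threshold).nonzero(as_tuple=True)[0].tolist()
    return plan
```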
ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression
TLDR
ThiNet is proposed as an efficient and unified framework that simultaneously accelerates and compresses CNN models in both the training and inference stages; it reveals that filters should be pruned based on statistics computed from the next layer rather than the current layer, which differentiates ThiNet from existing methods.
Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks
TLDR
The proposed Soft Filter Pruning (SFP) method enables the pruned filters to be updated when training the model after pruning, which has two advantages over previous works: larger model capacity and less dependence on the pretrained model.
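The "soft" aspect, zeroing low-norm filters after each training epoch while keeping them trainable so they can be revived by later gradient updates, can be sketched as below; the function name and pruning ratio are illustrative.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def soft_prune(conv: nn.Conv2d, prune_ratio=0.3):
    """Sketch of soft filter pruning: after each epoch, the filters with the
    smallest L2 norm are set to zero but remain trainable (not removed)."""
    w = conv.weight                                   # (out, in, kH, kW)
    norms = w.view(w.size(0), -1).norm(p=2, dim=1)    # per-filter L2 norm
    n_prune = int(prune_ratio * w.size(0))
    if n_prune > 0:
        idx = torch.topk(norms, n_prune, largest=False).indices
        w[idx] = 0.0                                  # zeroed, yet still updated next epoch
```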
Runtime Neural Pruning
TLDR
A Runtime Neural Pruning (RNP) framework is proposed that prunes the deep neural network dynamically at runtime, preserving the full ability of the original network while conducting pruning adaptively according to the input image and current feature maps.
COP: Customized Deep Model Compression via Regularized Correlation-Based Filter-Level Pruning
TLDR
A novel algorithm named COP (correlation-based pruning) is developed, which detects redundant filters efficiently, enables cross-layer filter comparison through global normalization, and adds parameter-quantity and computational-cost regularization terms to the importance score.
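One plausible way to illustrate correlation-based redundancy detection (a simplified stand-in for COP's full criterion, which also includes global normalization and the cost regularizers mentioned above) is to flatten each filter's feature maps, compute pairwise correlations, and flag highly correlated filters; the function name and threshold are assumptions.

```python
import torch

@torch.no_grad()
def redundant_filters(features, corr_threshold=0.9):
    """Sketch: flatten each channel's feature maps over a batch, compute the
    channel-by-channel correlation matrix, and flag the later filter of any
    highly correlated pair as redundant."""
    N, C = features.shape[:2]                                 # features: (N, C, H, W)
    flat = features.permute(1, 0, 2, 3).reshape(C, -1)        # (C, N*H*W)
    flat = (flat - flat.mean(dim=1, keepdim=True)) / (flat.std(dim=1, keepdim=True) + 1e-8)
    corr = flat @ flat.t() / flat.size(1)                     # (C, C) correlation matrix
    redundant = set()
    for i in range(C):
        for j in range(i + 1, C):
            if corr[i, j].abs() > corr_threshold:
                redundant.add(j)
    return sorted(redundant)
```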
Discrimination-aware Channel Pruning for Deep Neural Networks
TLDR
This work investigates a simple-yet-effective method, called discrimination-aware channel pruning, to choose those channels that really contribute to discriminative power and proposes a greedy algorithm to conduct channel selection and parameter optimization in an iterative way.
Pruning Convolutional Neural Networks for Resource Efficient Inference
TLDR
It is shown that pruning can lead to more than 10x theoretical (5x practical) reduction in adapted 3D-convolutional filters with a small drop in accuracy in a recurrent gesture classifier.