Joint Multi-Dimension Pruning via Numerical Gradient Update

@article{Liu2021JointMP,
  title={Joint Multi-Dimension Pruning via Numerical Gradient Update},
  author={Zechun Liu and X. Zhang and Zhiqiang Shen and Yichen Wei and Kwang-Ting Cheng and Jian Sun},
  journal={IEEE Transactions on Image Processing},
  year={2021},
  volume={30},
  pages={8034-8045}
}
We present joint multi-dimension pruning (abbreviated as JointPruning), an effective method of pruning a network simultaneously along three crucial dimensions: spatial, depth, and channel. To tackle these three naturally different dimensions, we propose a general framework that defines pruning as seeking the best pruning vector (i.e., the numerical values of layer-wise channel number, spatial size, and depth) and constructs a unique mapping from the pruning vector to the pruned network structure. Then we…
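Because the mapping from a pruning vector to a concrete pruned architecture is non-differentiable, the title's "numerical gradient update" suggests a finite-difference estimate of the gradient with respect to the pruning vector. Below is a minimal sketch of that idea, not the authors' exact procedure; `evaluate_loss` is a hypothetical helper that decodes a pruning vector into a pruned network and returns its validation loss.

```python
import numpy as np

def numerical_gradient(evaluate_loss, v, eps=1e-2):
    # Central finite differences, one coordinate of the pruning vector at a time.
    grad = np.zeros_like(v)
    for i in range(len(v)):
        v_plus, v_minus = v.copy(), v.copy()
        v_plus[i] += eps
        v_minus[i] -= eps
        grad[i] = (evaluate_loss(v_plus) - evaluate_loss(v_minus)) / (2 * eps)
    return grad

def prune_search(evaluate_loss, v_init, lr=0.1, steps=50):
    # v encodes layer-wise channel ratios, input spatial size, and depth as
    # continuous values; evaluate_loss (hypothetical) maps v to a pruned
    # network's validation loss.
    v = np.asarray(v_init, dtype=float)
    for _ in range(steps):
        v = np.clip(v - lr * numerical_gradient(evaluate_loss, v), 0.1, 1.0)
    return v
```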
Carrying out CNN Channel Pruning in a White Box
TLDR
This article conducts channel pruning in a white box: for the first time, CNN interpretability theory is used to guide channel pruning, choosing to preserve the channels that contribute to the most categories.
Vision Transformer Slimming: Multi-Dimension Searching in Continuous Optimization Space
TLDR
A pure vision transformer slimming (ViT-Slim) framework is introduced that can search a sub-structure from the original model end-to-end across multiple dimensions, including the input tokens, MHSA, and MLP modules, achieving state-of-the-art performance.
Network Amplification with Efficient MACs Allocation
TLDR
This paper proposes to enlarge the capacity of CNN models by fine-grained MACs allocation for the width, depth, and resolution at the stage level, in a dynamic programming manner, and achieves state-of-the-art accuracies.
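As a rough illustration of stage-level resource allocation in a dynamic programming manner, here is a generic knapsack-style sketch; the per-stage option lists and gain estimates are hypothetical placeholders, not the paper's actual formulation.

```python
def allocate_macs(stage_options, budget):
    # stage_options[s]: list of (macs_cost, predicted_gain) choices for stage s,
    # with integer costs in budget units; exactly one choice is taken per stage.
    NEG = float("-inf")
    dp = [0.0] + [NEG] * budget  # dp[b] = best gain at exact total cost b
    for options in stage_options:
        new_dp = [NEG] * (budget + 1)
        for b, best in enumerate(dp):
            if best == NEG:
                continue
            for cost, gain in options:
                if b + cost <= budget:
                    new_dp[b + cost] = max(new_dp[b + cost], best + gain)
        dp = new_dp
    return max(dp)  # best predicted gain within the MACs budget

# Example: three stages, each choosing one width/depth/resolution option.
stages = [[(2, 0.3), (4, 0.5)], [(1, 0.2), (3, 0.6)], [(2, 0.4), (5, 0.9)]]
print(allocate_macs(stages, budget=8))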

References

Rethinking the Value of Network Pruning
TLDR
It is found that, with an optimal learning rate, the "winning ticket" initialization used in Frankle & Carbin (2019) does not bring improvement over random initialization, suggesting the need for more careful baseline evaluations in future research on structured pruning methods.
Importance Estimation for Neural Network Pruning
TLDR
A novel method is described that estimates the contribution of a neuron (filter) to the final loss and iteratively removes those with smaller scores, along with two variations that use first- and second-order Taylor expansions to approximate a filter's contribution.
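For illustration, a minimal PyTorch sketch of the first-order variant, scoring each Conv2d filter by |Σ gradient · weight| after a backward pass; this is a common reading of the first-order Taylor criterion, not the authors' exact code.

```python
import torch

def taylor_importance(model, loss):
    # First-order Taylor score per output filter: |sum(grad * weight)| over
    # each filter's (in_channels, kH, kW) entries; call after a forward pass.
    loss.backward()
    scores = {}
    for name, module in model.named_modules():
        if isinstance(module, torch.nn.Conv2d) and module.weight.grad is not None:
            w, g = module.weight, module.weight.grad
            scores[name] = (w * g).sum(dim=(1, 2, 3)).abs()
    return scores
```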
Discrimination-aware Channel Pruning for Deep Neural Networks
TLDR
This work investigates a simple-yet-effective method, called discrimination-aware channel pruning, to choose those channels that really contribute to discriminative power and proposes a greedy algorithm to conduct channel selection and parameter optimization in an iterative way.
Accelerate CNN via Recursive Bayesian Pruning
TLDR
A new dropout-based measurement of redundancy, which facilitates the computation of the posterior assuming inter-layer dependency, is introduced, and a sparsity-inducing Dirac-like prior is derived that regularizes the distribution of the designed noise to automatically approximate the posterior.
MetaPruning: Meta Learning for Automatic Neural Network Channel Pruning
  • Zechun Liu, Haoyuan Mu, Jian Sun
  • Computer Science
    2019 IEEE/CVF International Conference on Computer Vision (ICCV)
  • 2019
TLDR
A novel meta learning approach for automatic channel pruning of very deep neural networks: a PruningNet, a kind of meta network, is trained to generate weight parameters for any pruned structure given the target network.
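A hedged sketch of the PruningNet idea follows: a hypernetwork that maps an encoding of the sampled pruned structure to the weights of one conv layer. The two-number encoding, hidden size, and layer shapes here are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PruningNetLayer(nn.Module):
    # Hypernetwork for one conv layer: maps an encoding of the sampled pruned
    # structure (output/input channel counts) to that layer's conv weights.
    def __init__(self, max_out, max_in, k):
        super().__init__()
        self.max_out, self.max_in, self.k = max_out, max_in, k
        self.fc = nn.Sequential(
            nn.Linear(2, 64), nn.ReLU(),
            nn.Linear(64, max_out * max_in * k * k),
        )

    def forward(self, x, out_c, in_c):
        code = torch.tensor([out_c / self.max_out, in_c / self.max_in],
                            dtype=x.dtype, device=x.device)
        w = self.fc(code).view(self.max_out, self.max_in, self.k, self.k)
        w = w[:out_c, :in_c]  # crop generated weights to the pruned shape
        return F.conv2d(x, w, padding=self.k // 2)
```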
Accelerating Convolutional Networks via Global & Dynamic Filter Pruning
TLDR
This paper proposes a novel global & dynamic pruning (GDP) scheme to prune redundant filters for CNN acceleration, achieving superior performance in accelerating several cutting-edge CNNs on the ILSVRC 2012 benchmark.
AutoPruner: An End-to-End Trainable Filter Pruning Method for Efficient Deep Model Inference
Training Quantized Neural Networks With a Full-Precision Auxiliary Module
TLDR
The proposed method achieves near-lossless performance relative to the full-precision model when using a 4-bit detector, which is of great practical value; it is evaluated on image classification and object detection over various quantization approaches and shows consistent performance increases.
Towards Compact CNNs via Collaborative Compression
TLDR
A Collaborative Compression scheme, which combines channel pruning and tensor decomposition to compress CNN models by simultaneously learning the model's sparsity and low-rankness, and proposes multi-step heuristic compression to remove redundant compression units step by step.
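The low-rank half of such a scheme can be illustrated with a truncated SVD that splits a 1x1 convolution into two thinner ones. This is a generic decomposition sketch under that simplifying assumption, not the paper's collaborative formulation.

```python
import torch
import torch.nn as nn

def decompose_1x1(conv: nn.Conv2d, rank: int) -> nn.Sequential:
    # Truncated SVD of the (out, in) weight matrix of a 1x1 conv, split into
    # two thinner 1x1 convs: in -> rank -> out.
    W = conv.weight.detach().squeeze(-1).squeeze(-1)   # (out, in)
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    s = S[:rank].sqrt()
    first = nn.Conv2d(conv.in_channels, rank, 1, bias=False)
    second = nn.Conv2d(rank, conv.out_channels, 1, bias=conv.bias is not None)
    first.weight.data = (s[:, None] * Vh[:rank]).reshape(rank, -1, 1, 1)
    second.weight.data = (U[:, :rank] * s).reshape(-1, rank, 1, 1)
    if conv.bias is not None:
        second.bias.data = conv.bias.data.clone()
    return nn.Sequential(first, second)
```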
Variational Convolutional Neural Network Pruning
TLDR
A variational technique is introduced to estimate the distribution of a newly proposed parameter, called channel saliency, based on which redundant channels can be removed from the model via a simple criterion, resulting in significant size reduction and computation savings.
...