MetaPruning: Meta Learning for Automatic Neural Network Channel Pruning

@article{Liu2019MetaPruningML,
  title={MetaPruning: Meta Learning for Automatic Neural Network Channel Pruning},
  author={Zechun Liu and Haoyuan Mu and Xiangyu Zhang and Zichao Guo and Xin Yang and K. Cheng and Jian Sun},
  journal={2019 IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2019},
  pages={3295-3304}
}
In this paper, we propose a novel meta learning approach for automatic channel pruning of very deep neural networks. [...] Key Method: We use a simple stochastic structure sampling method for training the PruningNet. Then, we apply an evolutionary procedure to search for good-performing pruned networks. The search is highly efficient because the weights are directly generated by the trained PruningNet and we do not need any finetuning.
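To make the stochastic structure sampling idea concrete, the following is a minimal sketch of how a meta network could generate weights for a randomly sampled channel configuration and be trained through the resulting pruned layer. The class and variable names (PruningNet, max_out, max_in) and the tiny two-layer generator are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of stochastic structure sampling for training a weight-generating
# meta network; illustrative only, not the paper's code.
import random
import torch
import torch.nn as nn

class PruningNet(nn.Module):
    """Meta network: maps an encoding of sampled channel counts to the weights
    of one convolution in the pruned network."""
    def __init__(self, max_out, max_in, k=3):
        super().__init__()
        self.max_out, self.max_in, self.k = max_out, max_in, k
        # A small fully connected generator produces a full-size weight tensor,
        # which is then cropped to the sampled channel numbers.
        self.fc = nn.Sequential(
            nn.Linear(2, 64), nn.ReLU(),
            nn.Linear(64, max_out * max_in * k * k),
        )

    def forward(self, out_ch, in_ch):
        code = torch.tensor([[out_ch / self.max_out, in_ch / self.max_in]])
        w = self.fc(code).view(self.max_out, self.max_in, self.k, self.k)
        return w[:out_ch, :in_ch].contiguous()   # crop to the sampled structure

meta = PruningNet(max_out=64, max_in=32)

# Stochastic structure sampling: at each step, draw random channel counts,
# generate the weights, and backpropagate the task loss into the PruningNet.
for step in range(3):
    out_ch = random.choice(range(8, 65, 8))
    in_ch = random.choice(range(4, 33, 4))
    w = meta(out_ch, in_ch)
    x = torch.randn(2, in_ch, 16, 16)
    y = nn.functional.conv2d(x, w, padding=1)    # pruned conv uses generated weights
    loss = y.pow(2).mean()                       # stand-in for the real task loss
    loss.backward()                              # gradients flow into the PruningNet
```

After such training, candidate channel configurations can be ranked by plugging them into the meta network and evaluating the generated weights directly, which is what makes the evolutionary search step cheap.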
MetaSelection: Metaheuristic Sub-Structure Selection for Neural Network Pruning Using Evolutionary Algorithm
TLDR
An effective metaheuristic sub-structure selection (MetaSelection) method for neural network pruning is proposed, which automatically determines the pruning rate and the channel selection at the same time, rather than relying on hand-crafted criteria applied in a cascaded way.
DMCP: Differentiable Markov Channel Pruning for Neural Networks
TLDR
A novel differentiable method for channel pruning, named Differentiable Markov Channel Pruning (DMCP), efficiently searches for the optimal sub-structure of unpruned networks and achieves consistent improvements over state-of-the-art pruning methods across various FLOPs settings.
AdaPruner: Adaptive Channel Pruning and Effective Weights Inheritance
  • Xiangcheng Liu, Jian Cao, Hongyi Yao, Wenyu Sun, Yuan Zhang
  • Computer Science
  • ArXiv
  • 2021
TLDR
A pruning framework that adaptively determines the number of channels in each layer as well as the weight-inheritance criterion for the sub-network; AdaPruner obtains the pruned network quickly, accurately, and efficiently, taking into account both the structure and the initialization weights.
Beyond Network Pruning: a Joint Search-and-Training Approach
TLDR
It is possible to expand the search space of network pruning by associating each filter with a learnable weight, and joint search-and-training can be conducted iteratively to maximize learning efficiency.
SuperPruner: Automatic Neural Network Pruning via Super Network
  • Yu Liu, Yong Wang, Haojin Qi, Xiaoming Ju
  • Computer Science
  • Sci. Program.
  • 2021
TLDR
This paper proposes an effective SuperPruner algorithm, which searches for an optimal pruned structure rather than removing individually unimportant channels, and achieves a higher pruning ratio with less accuracy cost.
EagleEye: Fast Sub-net Evaluation for Efficient Neural Network Pruning
TLDR
A pruning method called EagleEye is presented, in which a simple yet efficient evaluation component based on adaptive batch normalization is applied to unveil a strong correlation between different pruned DNN structures and their final settled accuracy.
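A rough sketch of what adaptive-BN-based sub-net evaluation could look like in PyTorch is shown below: batch-norm statistics are re-estimated on a handful of training batches before scoring each pruned candidate. The function name and loader arguments are assumptions for illustration, not the authors' API.

```python
# Hedged sketch: recalibrate BatchNorm statistics, then score a pruned candidate.
import torch

@torch.no_grad()
def adaptive_bn_eval(pruned_model, train_loader, val_loader, num_bn_batches=50):
    # Reset running statistics of every BatchNorm layer.
    for m in pruned_model.modules():
        if isinstance(m, torch.nn.modules.batchnorm._BatchNorm):
            m.reset_running_stats()

    # Re-estimate BN statistics with a few forward passes (no weight updates).
    pruned_model.train()
    for i, (images, _) in enumerate(train_loader):
        if i >= num_bn_batches:
            break
        pruned_model(images)

    # Evaluate the candidate; this score is used to rank pruned structures.
    pruned_model.eval()
    correct = total = 0
    for images, labels in val_loader:
        preds = pruned_model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total
```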
Rapid Structural Pruning of Neural Networks with Set-based Task-Adaptive Meta-Pruning
As deep neural networks are growing in size and being increasingly deployed to more resource-limited devices, there has been a recent surge of interest in network pruning methods, which aim to remove [...]
Pruning with Compensation: Efficient Channel Pruning for Deep Convolutional Neural Networks
  • Zhouyang Xie, Yan Fu, Shengzhao Tian, Junlin Zhou, Duanbing Chen
  • Computer Science
  • ArXiv
  • 2021
TLDR
A highly efficient pruning method is proposed that significantly reduces the cost of pruning DCNNs; it shows competitive pruning performance among state-of-the-art retraining-based pruning methods and, more importantly, reduces processing time and data usage.
Soft Taylor Pruning for Accelerating Deep Convolutional Neural Networks
TLDR
A novel gradient-based method, Soft Taylor Pruning (STP), is proposed to reduce network complexity in a dynamic way, allowing simultaneous pruning of multiple layers by controlling the opening and closing of multiple mask layers.
Evolving Transferable Pruning Functions
TLDR
This work proposes an end-to-end framework to automatically discover strong pruning metrics; it crafts a novel design space for expressing pruning functions and leverages an evolution strategy, genetic programming, to evolve high-quality and transferable pruning functions.

References

SHOWING 1-10 OF 69 REFERENCES
Pruning Convolutional Neural Networks for Resource Efficient Inference
TLDR
It is shown that pruning can lead to more than 10x theoretical (5x practical) reduction in adapted 3D-convolutional filters with a small drop in accuracy in a recurrent gesture classifier.
Rethinking the Value of Network Pruning
TLDR
It is found that with an optimal learning rate, the "winning ticket" initialization as used in Frankle & Carbin (2019) does not bring improvement over random initialization, suggesting the need for more careful baseline evaluations in future research on structured pruning methods.
Pruning Convolutional Neural Networks for Resource Efficient Transfer Learning
TLDR
A new criterion based on an efficient first-order Taylor expansion to approximate the absolute change in training cost induced by pruning a network component is proposed, demonstrating superior performance compared to other criteria, such as the norm of kernel weights or average feature map activation.
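The criterion amounts to scoring each channel by the magnitude of activation times gradient, since a first-order Taylor expansion of the loss around a zeroed feature map gives |∂L/∂z · z|. A minimal sketch follows; the function name, the averaging order, and the per-layer normalization are illustrative assumptions.

```python
# Hedged sketch of a first-order Taylor importance score for channels.
import torch

def taylor_channel_importance(activation: torch.Tensor,
                              grad: torch.Tensor) -> torch.Tensor:
    """activation, grad: tensors of shape (N, C, H, W) -- a feature map and the
    gradient of the loss with respect to it. Returns one score per channel.

    The first-order Taylor expansion of the loss around z = 0 gives |dL/dz * z|,
    here averaged over the batch and spatial dimensions."""
    score = (activation * grad).mean(dim=(0, 2, 3)).abs()
    # Per-layer L2 normalization before ranking channels across layers.
    return score / (score.norm() + 1e-8)
```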
Efficient Neural Architecture Search via Parameter Sharing
TLDR
Efficient Neural Architecture Search is a fast and inexpensive approach to automatic model design that establishes a new state of the art among methods without post-training processing and delivers strong empirical performance using far fewer GPU-hours.
SMASH: One-Shot Model Architecture Search through HyperNetworks
TLDR
A technique to accelerate architecture selection by learning an auxiliary HyperNet that generates the weights of a main model conditioned on that model's architecture is proposed, achieving competitive performance with similarly-sized hand-designed networks.
Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks
TLDR
The proposed Soft Filter Pruning method enables the pruned filters to be updated when training the model after pruning, which has two advantages over previous works: larger model capacity and less dependence on the pre-trained model.
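A minimal sketch of the soft pruning idea is given below: the smallest-norm filters are zeroed at the end of each epoch but remain trainable, so they can recover later; pruning only becomes permanent at the end of training. The function name and the L2-norm ranking are assumptions for illustration.

```python
# Hedged sketch of soft filter pruning for one convolution layer.
import torch

@torch.no_grad()
def soft_prune_conv(conv: torch.nn.Conv2d, prune_ratio: float = 0.3):
    weight = conv.weight.data                      # (out_ch, in_ch, k, k)
    norms = weight.flatten(1).norm(dim=1)          # L2 norm per filter
    num_prune = int(prune_ratio * weight.size(0))
    if num_prune == 0:
        return
    idx = torch.argsort(norms)[:num_prune]         # smallest-norm filters
    weight[idx] = 0.0                              # zero them, keep them trainable

# Typical use: call soft_prune_conv on every Conv2d at the end of each epoch;
# zeroed filters may regrow in later epochs if they turn out to be important.
```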
Network Trimming: A Data-Driven Neuron Pruning Approach towards Efficient Deep Architectures
TLDR
This paper introduces network trimming, which iteratively optimizes the network by pruning unimportant neurons based on analysis of their outputs on a large dataset, inspired by the observation that the outputs of a significant portion of neurons in a large network are mostly zero.
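The "mostly zero" observation can be turned into a simple per-neuron statistic, often called the Average Percentage of Zeros (APoZ), computed over post-ReLU activations on a dataset; a tiny hedged sketch is shown below, with the function name as an assumption.

```python
# Hedged sketch of an APoZ-style statistic for ranking neurons to trim.
import torch

def average_percentage_of_zeros(activations: torch.Tensor) -> torch.Tensor:
    """activations: (num_examples, num_neurons) post-ReLU outputs collected
    over a dataset. Returns, per neuron, the fraction of inputs on which it
    outputs zero; neurons with the highest values are trimming candidates."""
    return (activations == 0).float().mean(dim=0)
```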
Neural Architecture Search with Reinforcement Learning
TLDR
This paper uses a recurrent network to generate the model descriptions of neural networks and trains this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. Expand
Channel Pruning for Accelerating Very Deep Neural Networks
  • Yihui He, X. Zhang, Jian Sun
  • Computer Science
  • 2017 IEEE International Conference on Computer Vision (ICCV)
  • 2017
TLDR
This paper proposes an iterative two-step algorithm to effectively prune each layer, by LASSO-regression-based channel selection and least-squares reconstruction, and generalizes this algorithm to multi-layer and multi-branch cases.
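Below is a deliberately simplified sketch of the two steps for a single output unit, operating on pre-extracted feature samples: a LASSO fit selects a sparse set of input channels, then ordinary least squares re-fits the surviving weights to reconstruct the original responses. The helper name, the data layout, and the use of scikit-learn are assumptions; the actual algorithm works on convolutional response maps and iterates over layers.

```python
# Hedged, simplified sketch of LASSO channel selection + least-squares reconstruction.
import numpy as np
from sklearn.linear_model import Lasso

def select_and_reconstruct(X, Y, keep_ratio=0.5, alpha=1e-3):
    """X: (num_samples, num_channels) per-channel contributions to one output unit;
       Y: (num_samples,) original responses of that unit."""
    # Step 1: LASSO picks a sparse set of input channels.
    lasso = Lasso(alpha=alpha, fit_intercept=False)
    lasso.fit(X, Y)
    order = np.argsort(-np.abs(lasso.coef_))
    keep = np.sort(order[:max(1, int(keep_ratio * X.shape[1]))])

    # Step 2: least squares re-fits the remaining weights to reconstruct Y.
    w_new, *_ = np.linalg.lstsq(X[:, keep], Y, rcond=None)
    return keep, w_new
```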
Learning both Weights and Connections for Efficient Neural Network
TLDR
A method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy, by learning only the important connections; redundant connections are pruned using a three-step method.
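The three steps are: train densely, prune small-magnitude connections, then retrain the remaining sparse network (optionally repeating the cycle). A minimal sketch of the middle step with a per-parameter mask is given below; the function name and the dimension-based skipping of biases are illustrative assumptions.

```python
# Hedged sketch of magnitude-based connection pruning with masks for retraining.
import torch

def magnitude_prune(model: torch.nn.Module, sparsity: float = 0.9):
    """Zero the smallest-magnitude weights and return per-parameter masks so the
    retraining step can keep pruned connections at zero."""
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() < 2:                 # skip biases / norm parameters
            continue
        k = int(sparsity * p.numel())
        if k == 0:
            continue
        threshold = p.detach().abs().flatten().kthvalue(k).values
        mask = (p.detach().abs() > threshold).float()
        p.data.mul_(mask)
        masks[name] = mask
    return masks

# Three-step method: 1) train densely, 2) magnitude_prune(model), 3) retrain
# while multiplying each gradient (or weight) by its mask so removed connections
# stay at zero; the prune/retrain cycle can be repeated for higher sparsity.
```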