Manifold Regularized Dynamic Network Pruning

@article{Tang2021ManifoldRD,
  title={Manifold Regularized Dynamic Network Pruning},
  author={Yehui Tang and Yunhe Wang and Yixing Xu and Yiping Deng and Chao Xu and Dacheng Tao and Chang Xu},
  journal={2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2021},
  pages={5016--5026}
}
  • Published 10 March 2021
  • Computer Science
Neural network pruning is an essential approach for reducing the computational complexity of deep models so that they can be deployed on resource-limited devices. Compared with conventional methods, recently developed dynamic pruning methods determine, for each input instance, which filters are redundant, achieving higher acceleration. Most existing methods discover effective subnetworks for each instance independently and do not exploit the relationship between different inputs…
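The instance-wise idea in the abstract can be sketched in a few lines of NumPy: gate channels per input by a saliency score, so that different inputs activate different subnetworks. This is a toy illustration under assumed shapes and an assumed saliency measure, not the paper's actual ManiDP method (which additionally exploits the manifold relationship between inputs):

```python
import numpy as np

def dynamic_channel_mask(features, keep_ratio=0.5):
    """Toy instance-wise gate: keep the channels with the largest mean
    absolute activation for THIS input. Illustrative only, not the
    paper's ManiDP formulation."""
    # features: (channels, height, width) activations for one instance
    saliency = np.abs(features).mean(axis=(1, 2))    # per-channel score
    k = max(1, int(keep_ratio * features.shape[0]))  # channels to keep
    keep = np.argsort(saliency)[-k:]                 # top-k channels
    mask = np.zeros(features.shape[0], dtype=bool)
    mask[keep] = True
    return mask

# Two different inputs generally yield two different subnetworks.
x1 = np.random.default_rng(0).normal(size=(8, 4, 4))
x2 = np.random.default_rng(1).normal(size=(8, 4, 4))
m1, m2 = dynamic_channel_mask(x1), dynamic_channel_mask(x2)
```

With `keep_ratio=0.5`, exactly half the channels survive for each instance, but which half depends on the input.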
AIP: Adversarial Iterative Pruning Based on Knowledge Transfer for Convolutional Neural Networks
TLDR: A novel adversarial iterative pruning method for CNNs based on knowledge transfer that is superior to several state-of-the-art pruning schemes in terms of compression rate and accuracy, and generalizes well to the object detection task on PASCAL VOC.
Dynamic Resolution Network
TLDR: Proposes a novel dynamic-resolution network (DRNet) in which the input resolution is determined dynamically for each input sample, learning the smallest resolution that retains, and can even exceed, the original recognition accuracy for each image.
Dynamical Conventional Neural Network Channel Pruning by Genetic Wavelet Channel Search for Image Classification
  • Lin Chen, Saijun Gong, Xiaoyu Shi, Mingsheng Shang
  • Medicine, Computer Science
  • Frontiers in Computational Neuroscience
  • 2021
TLDR: A genetic wavelet channel search (GWCS) based pruning framework in which the pruning process is modeled as a multi-stage genetic optimization procedure; experiments demonstrate that GWCS outperforms state-of-the-art pruning algorithms in both accuracy and compression rate.
CDP: Towards Optimal Filter Pruning via Class-wise Discriminative Power
TLDR: The first work to prune neural networks through class-wise discriminative power, which it measures by applying the widely used Term Frequency-Inverse Document Frequency (TF-IDF) to feature representations across classes.
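As a rough illustration of the idea (not CDP's actual formulation), one can treat channels as "terms" and classes as "documents" and compute a TF-IDF-style score over per-class mean activations; a channel that fires strongly for only one class scores as discriminative. All names and thresholds below are illustrative assumptions:

```python
import numpy as np

def channel_tfidf(class_means, eps=1e-8):
    """Toy TF-IDF over per-class mean channel activations.
    class_means: (num_classes, num_channels) nonnegative matrix.
    Channels play the role of 'terms', classes of 'documents'.
    A sketch of the idea, not CDP's exact metric."""
    # 'term frequency': how much of a class's activation mass a channel gets
    tf = class_means / (class_means.sum(axis=1, keepdims=True) + eps)
    # 'document frequency': in how many classes the channel fires strongly
    df = (class_means > class_means.mean()).sum(axis=0)
    idf = np.log(class_means.shape[0] / (df + 1.0)) + 1.0
    return tf * idf  # (num_classes, num_channels) discriminative power

# Channel 0 is discriminative for class 0, channel 1 for class 1,
# channel 2 fires weakly everywhere (a pruning candidate).
scores = channel_tfidf(np.array([[5.0, 0.1, 0.1],
                                 [0.1, 5.0, 0.1]]))
```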
Pruning with Compensation: Efficient Channel Pruning for Deep Convolutional Neural Networks
  • Zhouyang Xie, Yan Fu, Shengzhao Tian, Junlin Zhou, Duanbing Chen
  • Computer Science
  • ArXiv
  • 2021
TLDR: A highly efficient pruning method that significantly reduces the cost of pruning DCNNs, showing competitive performance among state-of-the-art retraining-based pruning methods while, more importantly, reducing processing time and data usage.
CHIP: CHannel Independence-based Pruning for Compact Neural Networks
TLDR: Proposes efficient filter pruning using channel independence, a metric that measures the correlations among different feature maps: a less independent feature map is interpreted as containing less useful information, so its corresponding filter can be pruned without affecting model capacity.
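A minimal sketch of the channel-independence intuition (not CHIP's exact metric): score each channel by how weakly it correlates with every other channel, so that a channel duplicating another one scores near zero and is pruned first. The example data below are made up:

```python
import numpy as np

def channel_independence(feature_maps):
    """Score each channel by how little it correlates with the others;
    a low score means redundant, so prune first. A sketch of the
    channel-independence idea, not CHIP's exact formulation."""
    c = feature_maps.shape[0]
    flat = feature_maps.reshape(c, -1)
    corr = np.abs(np.corrcoef(flat))   # (c, c) pairwise |correlation|
    np.fill_diagonal(corr, 0.0)        # ignore self-correlation
    return 1.0 - corr.max(axis=1)      # independent channels score high

fm = np.stack([np.arange(16.0).reshape(4, 4),      # channel 0: ramp
               2 * np.arange(16.0).reshape(4, 4),  # channel 1: copy of ch. 0
               np.random.default_rng(0).normal(size=(4, 4))])  # channel 2
scores = channel_independence(fm)
```

Channels 0 and 1 are perfectly correlated, so both score (near) zero; the independent channel 2 scores higher.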
RED++ : Data-Free Pruning of Deep Neural Networks via Input Splitting and Output Merging
TLDR: A novel data-free technique for pruning DNN layers that removes input-wise redundant operations, offering a novel perspective on DNN pruning by shifting the burden from large computation to efficient memory access and allocation.
Distilling Object Detectors via Decoupled Features
  • Jianyuan Guo, Kai Han, +4 authors Chang Xu
  • Computer Science
  • 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2021
TLDR: Presents a novel distillation algorithm via decoupled features (DeFeat) for learning a better student detector, which surpasses state-of-the-art distillation methods for object detection.
Learning Frequency Domain Approximation for Binary Neural Networks
TLDR: Proposes to estimate the gradient of the sign function in the Fourier frequency domain using a combination of sine functions for training BNNs, namely frequency domain approximation (FDA), achieving state-of-the-art accuracy.
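The combination of sines the summary refers to can be made concrete with the classical Fourier series of the sign function on (-π, π): sign(x) ≈ (4/π) Σ_{k odd} sin(kx)/k, a smooth surrogate whose gradient exists everywhere, unlike the zero-almost-everywhere gradient of sign itself. This is the standard series, shown here only as an illustration of the idea, not FDA's exact training procedure:

```python
import numpy as np

def sine_series_sign(x, n_terms=50):
    """Truncated Fourier (sine) series of sign(x) on (-pi, pi):
    sign(x) ~ (4/pi) * sum over odd k of sin(k*x)/k.
    Smooth in x, so it admits a usable gradient everywhere."""
    k = np.arange(1, 2 * n_terms, 2, dtype=float)  # odd harmonics 1,3,5,...
    return (4.0 / np.pi) * np.sum(np.sin(np.outer(x, k)) / k, axis=1)

xs = np.array([-1.0, 1.0, 2.0])
approx = sine_series_sign(xs)  # close to sign(xs) away from 0 and +/-pi
```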
Transformer in Transformer
TLDR: Points out that the attention inside local patches is also essential for building high-performance visual transformers, and explores a new architecture, Transformer iN Transformer (TNT).

References

Showing 1-10 of 65 references
Learning Instance-wise Sparsity for Accelerating Deep Models
TLDR: Expects the intermediate feature maps of each instance in deep neural networks to be sparse while preserving overall network performance, and takes the coefficient of variation as a measure to select the layers that are appropriate for acceleration.
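The coefficient of variation used as the selection measure is simply std/mean; a layer whose response magnitude varies strongly across inputs is a good candidate for instance-wise sparsity. A small sketch under assumed shapes (the statistic is standard; the usage here is illustrative, not the authors' code):

```python
import numpy as np

def layer_cv(activations):
    """Coefficient of variation (std / mean) of a layer's per-instance
    activation magnitudes. High CV = the layer responds very differently
    across inputs, so instance-wise sparsity can pay off there."""
    # activations: (num_instances, ...) feature tensor for one layer
    mags = np.abs(activations).mean(axis=tuple(range(1, activations.ndim)))
    return float(mags.std() / (mags.mean() + 1e-8))

rng = np.random.default_rng(0)
uniform_layer = np.ones((8, 4, 4))                  # identical response everywhere
varied_layer = rng.gamma(1.0, 1.0, size=(8, 4, 4))  # input-dependent response
```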
HRank: Filter Pruning Using High-Rank Feature Map
TLDR: Proposes a novel filter pruning method that explores the High Rank of feature maps (HRank), inspired by the discovery that the average rank of the multiple feature maps generated by a single filter is always the same, regardless of the number of image batches CNNs receive.
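The quantity HRank observes, the average matrix rank of one filter's feature maps across batches, is cheap to sketch: filters whose maps are consistently low-rank carry less information and are pruned first. An illustrative NumPy version with made-up data:

```python
import numpy as np

def average_feature_rank(batches_of_maps):
    """Average matrix rank of one filter's 2-D feature maps over several
    batches. Illustrative sketch, not the HRank authors' code."""
    return float(np.mean([np.linalg.matrix_rank(m) for m in batches_of_maps]))

rng = np.random.default_rng(0)
# A filter producing rank-1 maps (outer products) vs. full-rank maps.
low = [np.outer(rng.normal(size=6), rng.normal(size=6)) for _ in range(4)]
high = [rng.normal(size=(6, 6)) for _ in range(4)]
```

Note how stable the statistic is across batches: every map in `low` has rank 1, every map in `high` has rank 6.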
Importance Estimation for Neural Network Pruning
TLDR: A novel method that estimates the contribution of a neuron (filter) to the final loss and iteratively removes those with smaller scores; two variants of the method, using first- and second-order Taylor expansions to approximate a filter's contribution, are described.
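The first-order variant of that criterion reduces to a dot product: the estimated loss change from removing a filter is |Σ gᵢwᵢ| over the filter's parameters and their gradients. A minimal sketch with made-up weights and gradients (the numbers are illustrative, not from the paper):

```python
import numpy as np

def taylor_importance(weights, grads):
    """First-order Taylor estimate of the loss change from removing each
    filter: |sum_i g_i * w_i| over the filter's parameters.
    weights, grads: (num_filters, params_per_filter)."""
    return np.abs((weights * grads).reshape(weights.shape[0], -1).sum(axis=1))

w = np.array([[0.5, -0.2], [0.01, 0.02], [1.0, 1.0]])  # 3 filters, 2 params each
g = np.array([[0.3, 0.1], [0.01, -0.01], [0.2, -0.2]])
scores = taylor_importance(w, g)  # filters with the smallest score go first
```

Note that a filter with large weights can still score near zero when its gradient contributions cancel, which is exactly what distinguishes this criterion from magnitude pruning.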
Accelerating Convolutional Networks via Global & Dynamic Filter Pruning
TLDR: Proposes a novel global & dynamic pruning (GDP) scheme that prunes redundant filters for CNN acceleration, achieving superior performance in accelerating several cutting-edge CNNs on the ILSVRC 2012 benchmark.
Provable Filter Pruning for Efficient Neural Networks
TLDR: Presents a provable, sampling-based approach for generating compact convolutional neural networks by identifying and removing redundant filters from an over-parameterized network, constructing an importance sampling distribution in which filters that strongly affect the output are sampled with correspondingly high probability.
Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration
TLDR: Unlike previous methods, FPGM compresses CNN models by pruning filters with redundancy rather than those with "relatively less" importance; applied to two image classification benchmarks, the method validates its usefulness and strengths.
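FPGM's notion of redundancy can be sketched via the geometric median: a filter close to the geometric median of all filters is well represented by the others and can be removed. As in the paper's practical variant, the sketch below scores each filter by its total distance to the rest (the example numbers are made up):

```python
import numpy as np

def fpgm_scores(filters):
    """Sum of Euclidean distances from each filter to all other filters.
    The filter with the SMALLEST sum lies nearest the geometric median,
    so it is the most redundant. A sketch of the FPGM criterion."""
    flat = filters.reshape(filters.shape[0], -1)
    dists = np.linalg.norm(flat[:, None, :] - flat[None, :, :], axis=2)
    return dists.sum(axis=1)

f = np.array([[0.0, 0.0],   # near-duplicate pair ...
              [0.1, 0.0],
              [5.0, 5.0]])  # ... plus one distinctive outlier
scores = fpgm_scores(f)
prune_first = int(scores.argmin())  # one of the two near-duplicates
```

Note the contrast with norm-based criteria: the outlier filter has the largest norm-independent score here, so it is kept, while one of the near-duplicates is pruned.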
Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks
TLDR: The proposed Soft Filter Pruning (SFP) method enables the pruned filters to be updated when training the model after pruning, which has two advantages over previous works: larger model capacity and less dependence on the pretrained model.
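The "soft" in SFP means the pruned filters are zeroed but kept in the model, so subsequent gradient updates can revive them. A minimal sketch of one pruning step under an assumed L2-norm criterion (illustrative, not the authors' code):

```python
import numpy as np

def soft_prune(filters, prune_ratio=0.5):
    """Soft pruning step: zero out the filters with the smallest L2 norm
    but keep their slots in the model, so later gradient updates can
    restore them. Sketch of SFP's mechanism."""
    norms = np.linalg.norm(filters.reshape(filters.shape[0], -1), axis=1)
    k = int(prune_ratio * filters.shape[0])
    pruned = filters.copy()
    if k > 0:
        pruned[np.argsort(norms)[:k]] = 0.0  # zeroed, not removed
    return pruned

f = np.array([[3.0, 4.0], [0.1, 0.1], [1.0, 0.0], [0.0, 0.2]])
out = soft_prune(f, 0.5)  # the two smallest-norm filters are zeroed
```

In training, this step alternates with normal weight updates, so the set of zeroed filters can change from epoch to epoch, unlike hard pruning where removed filters are gone for good.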
Discrete Model Compression With Resource Constraint for Deep Neural Networks
TLDR: An efficient discrete optimization method that directly optimizes a channel-wise differentiable discrete gate under a resource constraint while freezing all other model parameters, and that is globally discrimination-aware due to the discrete setting.
Discrimination-aware Channel Pruning for Deep Neural Networks
TLDR: Investigates a simple yet effective method, discrimination-aware channel pruning, to choose the channels that really contribute to discriminative power, and proposes a greedy algorithm to conduct channel selection and parameter optimization iteratively.
Towards Optimal Structured CNN Pruning via Generative Adversarial Learning
  • Shaohui Lin, R. Ji, +5 authors D. Doermann
  • Computer Science
  • 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2019
TLDR: Proposes an effective structured pruning approach that jointly prunes filters and other structures in an end-to-end manner, solving the optimization problem by generative adversarial learning (GAL), which learns a sparse soft mask in a label-free, end-to-end fashion.