Filter Distillation for Network Compression

@article{Suau2020FilterDF,
  title={Filter Distillation for Network Compression},
  author={Xavier Suau and L. Zappella and N. Apostoloff},
  journal={2020 IEEE Winter Conference on Applications of Computer Vision (WACV)},
  year={2020},
  pages={3129-3138}
}
In this paper we introduce Principal Filter Analysis (PFA), an easy-to-use and effective method for neural network compression. PFA exploits the correlation between filter responses within network layers to recommend a smaller network that maintains as much of the accuracy of the full model as possible. We propose two algorithms: the first allows users to target compression to a specific network property, such as the number of trainable variables (footprint), and produces a compressed model that satisfies…
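
As a concrete illustration of the recommendation step described in the abstract, the sketch below runs PCA on a single layer's filter responses and keeps only as many filters as are needed to retain a chosen fraction of the spectral energy. This is a minimal reading of the PFA idea, not the authors' implementation; the function name recommend_layer_width, the energy_threshold parameter, and the synthetic responses are illustrative assumptions.

```python
import numpy as np

def recommend_layer_width(responses: np.ndarray, energy_threshold: float = 0.95) -> int:
    """Recommend how many filters to keep in one layer.

    responses: (num_samples, num_filters) matrix; each column holds one
    filter's (e.g. spatially pooled) activations over a dataset pass.
    energy_threshold: fraction of spectral energy to preserve; this knob is
    an illustrative stand-in for the user-facing target in the paper.
    """
    centered = responses - responses.mean(axis=0, keepdims=True)
    cov = centered.T @ centered / max(len(centered) - 1, 1)
    # Eigenvalues of the response covariance, sorted descending and clipped
    # to guard against tiny negative values from round-off.
    eigvals = np.clip(np.linalg.eigvalsh(cov), 0.0, None)[::-1]
    energy = np.cumsum(eigvals) / eigvals.sum()  # cumulative energy ratio
    # Correlated (redundant) filters concentrate energy in few components,
    # so fewer filters are needed to reach the threshold.
    return int(np.searchsorted(energy, energy_threshold) + 1)

# Toy usage: 64 filters where half are near-duplicates of the other half,
# so roughly half the filters should suffice.
rng = np.random.default_rng(0)
base = rng.normal(size=(1000, 32))
responses = np.concatenate([base, base + 0.01 * rng.normal(size=base.shape)], axis=1)
print(recommend_layer_width(responses))  # recommends keeping roughly 31 of 64 filters
```

In the toy run, the 32 near-duplicate columns add almost no new spectral energy, so about half of the layer's filters are recommended, which is the kind of redundancy-driven recommendation the abstract describes.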
9 Citations

• Domain Adaptation Regularization for Spectral Pruning
• Scaling Up Exact Neural Network Compression by ReLU Stability
• Layer-Wise Data-Free CNN Compression
• HALO: Learning to Prune Neural Networks with Shrinkage
• Now that I can see, I can improve: Enabling data-driven finetuning of CNNs on the edge. A. Rajagopal, C. Bouganis. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). (1 citation)
• T-Basis: a Compact Representation for Neural Networks (1 citation)
• Hierarchical Adaptive Lasso: Learning Sparse Neural Networks with Shrinkage via Single Stage Training

References (showing 1-10 of 58)

• Domain-Adaptive Deep Network Compression (35 citations)
• Compression-aware Training of Deep Networks (92 citations)
• Compressing Neural Networks using the Variational Information Bottleneck (74 citations; highly influential)
• Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding (4,208 citations)
• CLIP-Q: Deep Network Compression Learning by In-parallel Pruning-Quantization. F. Tung, G. Mori. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. (81 citations)
• Compressing deep neural networks using a rank-constrained topology (62 citations)
• ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression (756 citations)
• On Compressing Deep Models by Low Rank and Sparse Decomposition (159 citations)
• Extreme Network Compression via Filter Group Approximation (31 citations; highly influential)
• Speeding up Convolutional Neural Networks with Low Rank Expansions (926 citations)