Play and Prune: Adaptive Filter Pruning for Deep Model Compression

@article{Singh2019PlayAP,
  title={Play and Prune: Adaptive Filter Pruning for Deep Model Compression},
  author={Pravendra Singh and V. Verma and P. Rai and Vinay P. Namboodiri},
  journal={ArXiv},
  year={2019},
  volume={abs/1905.04446}
}
While convolutional neural networks (CNNs) have achieved impressive performance on various classification and recognition tasks, they typically consist of a massive number of parameters. This results in significant memory requirements as well as computational overhead. Consequently, there is a growing need for filter-level pruning approaches for compressing CNN-based models that not only reduce the total number of parameters but also reduce the overall computation. We present a new min-max…
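The abstract above argues for filter-level (as opposed to weight-level) pruning: removing whole convolutional filters shrinks both parameter count and compute. The sketch below is not the paper's adaptive min-max method; it is a minimal, generic magnitude-based filter-pruning example (ranking filters by L1 norm, a common baseline), using NumPy and a hypothetical function name for illustration:

```python
import numpy as np

def prune_filters_l1(weights: np.ndarray, keep_ratio: float) -> np.ndarray:
    """Return sorted indices of the filters to keep, ranked by L1 norm.

    weights: conv weight tensor of shape (out_channels, in_channels, kH, kW).
    keep_ratio: fraction of filters to retain (the rest are pruned away).
    """
    # One L1 norm per output filter.
    norms = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    n_keep = max(1, int(round(keep_ratio * weights.shape[0])))
    # Indices of the n_keep largest-norm filters, in ascending index order.
    return np.sort(np.argsort(norms)[::-1][:n_keep])

# Example: a layer with 8 filters of shape 3x3x3; keep the strongest 50%.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 3, 3, 3))
kept = prune_filters_l1(w, keep_ratio=0.5)
pruned_w = w[kept]  # compacted layer weights: shape (4, 3, 3, 3)
```

Because an entire filter is removed, the corresponding input channel of the *next* layer must be dropped as well; adaptive schemes like the one in this paper decide the per-layer pruning amounts jointly rather than with a fixed ratio.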
    Citations

    Leveraging Filter Correlations for Deep Model Compression
    A "Network Pruning Network" Approach to Deep Model Compression
    Channel Pruning via Automatic Structure Search
    Filter Sketch for Network Pruning
    A Survey of Pruning Methods for Efficient Person Re-identification Across Domains
    Localization-aware Channel Pruning for Object Detection
    EDCompress: Energy-Aware Model Compression with Dataflow

    References

    Publications referenced by this paper (showing 10 of 32 references).
    An Entropy-based Pruning Method for CNN Compression
    Runtime Neural Pruning
    NISP: Pruning Networks Using Neuron Importance Score Propagation
    Less Is More: Towards Compact CNNs
    Channel Pruning for Accelerating Very Deep Neural Networks