Automated Pruning for Deep Neural Network Compression

@inproceedings{Manessi2017AutomatedPF,
  title={Automated Pruning for Deep Neural Network Compression},
  author={Franco Manessi and Alessandro Rozza and Simone Bianco and Paolo Napoletano and Raimondo Schettini},
  booktitle={2018 24th International Conference on Pattern Recognition (ICPR)},
  year={2018},
  pages={657-664}
}
In this work we present a method to improve the pruning step of the current state-of-the-art methodology for compressing neural networks. The novelty of the proposed pruning technique is its differentiability, which allows pruning to be performed during the backpropagation phase of network training. This enables end-to-end learning and strongly reduces the training time. The technique is based on a family of differentiable pruning functions and a new regularizer specifically designed to…
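The abstract's central idea is that pruning becomes differentiable when hard magnitude thresholding is replaced by a smooth gate, so gradients can flow to both the weights and the pruning thresholds during backpropagation. A minimal sketch of that idea follows; the function name `soft_prune`, the sigmoid gate, and the steepness constant `k` are illustrative assumptions, not the authors' exact family of pruning functions:

```python
import numpy as np

def soft_prune(w, t, k=50.0):
    """Differentiable magnitude pruning (illustrative sketch, not the
    paper's exact formulation).

    Each weight is multiplied by a steep sigmoid gate of (|w| - t):
    weights with magnitude below the threshold t are smoothly driven
    toward zero, while larger weights pass through nearly unchanged.
    As k -> infinity this approaches hard magnitude pruning, but for
    finite k the expression is differentiable in both w and t, so a
    threshold t can be learned jointly with the weights by gradient
    descent, as the paper's joint optimization requires.
    """
    gate = 1.0 / (1.0 + np.exp(-k * (np.abs(w) - t)))
    return w * gate

# Example: small weights are suppressed, large weights survive.
w = np.array([0.01, -0.02, 0.5, -0.8])
pruned = soft_prune(w, t=0.1)
```

With these values, the two small weights are gated to near zero while the two large ones are left essentially intact, mimicking hard thresholding at `t = 0.1` while remaining differentiable.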

Key Quantitative Results

  • The experimental results show that jointly optimizing both the thresholds and the network weights reaches a higher compression rate, reducing the number of weights of the pruned network by a further 14% to 33% compared to the current state-of-the-art.
