How Compact?: Assessing Compactness of Representations through Layer-Wise Pruning

@article{Jung2019HowCA,
  title={How Compact?: Assessing Compactness of Representations through Layer-Wise Pruning},
  author={Hyun-Joo Jung and Jaedeok Kim and Yoonsuck Choe},
  journal={ArXiv},
  year={2019},
  volume={abs/1901.02757}
}
Various forms of representations may arise in the many layers embedded in deep neural networks (DNNs). Of these, where can we find the most compact representation? We propose to use a pruning framework to answer this question: How compact can each layer be compressed, without losing performance? Most of the existing DNN compression methods do not consider the relative compressibility of the individual layers. They uniformly apply a single target sparsity to all layers or adapt layer sparsity…
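The abstract contrasts a single uniform target sparsity with per-layer compressibility. A minimal sketch of the underlying primitive, magnitude-based pruning of one layer to a chosen sparsity, is shown below; the function names and the numpy-only setup are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def prune_layer(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights (magnitude pruning).
    `sparsity` is the target fraction of zeros in [0, 1)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to remove
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

def layer_sparsity(weights):
    """Fraction of exactly-zero weights in a layer."""
    return float(np.mean(weights == 0.0))

# Sweep sparsity per layer; in the paper's setting one would re-evaluate
# accuracy after each level to find how compact the layer can be.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))  # stand-in for one layer's weight matrix
for s in (0.5, 0.9):
    print(s, round(layer_sparsity(prune_layer(w, s)), 2))
```

Applying this sweep independently per layer, and checking validation accuracy at each level, gives the per-layer compressibility profile that a single uniform target sparsity ignores.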

Key Quantitative Results

  • In case of VGG-16 model with weight pruning on the ImageNet dataset, we achieved up to 75% (17.5% on average) better top-5 accuracy than the baseline under the same total target sparsity.
  • Numerically, the proposed method achieved up to 58.9% better classification accuracy than the baseline using the same total target sparsity.
