Selecting optimal layer reduction factors for model reduction of deep neural networks

Abstract

Deep neural networks (DNNs) achieve very good performance in many machine learning tasks, but are computationally very demanding. Hence, there is growing interest in model reduction methods for DNNs. Model reduction makes it possible to reduce the number of computations needed to evaluate a trained DNN without significant performance degradation. In this paper, we…
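The abstract is truncated, so the paper's exact procedure is not shown here; as a hedged illustration only, a common form of per-layer model reduction is low-rank (truncated-SVD) factorization of a dense layer's weight matrix, where a layer reduction factor controls the fraction of the rank that is kept. All sizes and names below are hypothetical, not taken from the paper:

```python
import numpy as np

# Illustrative sketch (NOT the paper's algorithm): reduce a dense layer
# y = W @ x by factorizing W into two thin matrices via truncated SVD.
rng = np.random.default_rng(0)
n, m = 256, 512                      # hypothetical output/input sizes
W = rng.standard_normal((n, m))

reduction_factor = 0.25              # keep 25% of the full rank
r = max(1, int(reduction_factor * min(n, m)))

U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * s[:r]                 # shape (n, r)
B = Vt[:r, :]                        # shape (r, m)

x = rng.standard_normal(m)
y_full = W @ x                       # costs n*m multiply-adds
y_low = A @ (B @ x)                  # costs r*(n + m) multiply-adds

rel_err = np.linalg.norm(y_full - y_low) / np.linalg.norm(y_full)
```

Evaluating `A @ (B @ x)` needs `r*(n + m)` multiply-adds instead of `n*m`, so smaller reduction factors trade accuracy (larger `rel_err`) for fewer computations; choosing these factors per layer is the kind of problem the title refers to.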
DOI: 10.1109/ICASSP.2017.7952549

3 Figures and Tables

Cite this paper

@article{Mauch2017SelectingOL,
  title   = {Selecting optimal layer reduction factors for model reduction of deep neural networks},
  author  = {Lukas Mauch and Bin Yang},
  journal = {2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year    = {2017},
  pages   = {2212-2216}
}