Explicit Loss-Error-Aware Quantization for Low-Bit Deep Neural Networks

@article{Zhou2018ExplicitLQ,
  title={Explicit Loss-Error-Aware Quantization for Low-Bit Deep Neural Networks},
  author={Aojun Zhou and Anbang Yao and Kuan Wang and Yurong Chen},
  journal={2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2018},
  pages={9426-9435}
}
Benefiting from tens of millions of hierarchically stacked learnable parameters, Deep Neural Networks (DNNs) have demonstrated overwhelming accuracy on a variety of artificial intelligence tasks. Conversely, however, the large size of DNN models places a heavy burden on storage, computation, and power consumption, which prohibits their deployment on embedded and mobile systems. In this paper, we propose Explicit Loss-error-aware Quantization (ELQ), a new method that can train DNN models with…
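To make the idea of loss-error-aware low-bit quantization concrete, below is a minimal sketch (not the authors' exact ELQ formulation) of quantizing full-precision weights to ternary values and scoring the result by an objective that combines the change in the task loss with the weight-approximation error. The helper names (`quantize_ternary`, `quantization_objective`, `loss_fn`, `lambda_w`) and the threshold heuristic are illustrative assumptions.

```python
# Hedged sketch of low-bit, loss-error-aware quantization; numpy only.
import numpy as np

def quantize_ternary(w, threshold_ratio=0.7):
    """Map full-precision weights to {-alpha, 0, +alpha} (illustrative heuristic)."""
    delta = threshold_ratio * np.mean(np.abs(w))            # sparsity threshold (assumed)
    mask = (np.abs(w) > delta).astype(w.dtype)               # which weights stay nonzero
    alpha = np.sum(np.abs(w) * mask) / max(mask.sum(), 1)    # layer-wise scaling factor
    return alpha * np.sign(w) * mask

def quantization_objective(w, w_q, loss_fn, lambda_w=1e-3):
    """Loss perturbation plus weighted weight-approximation error (assumed form)."""
    loss_gap = abs(loss_fn(w_q) - loss_fn(w))                 # explicit loss-error term
    approx_err = np.linalg.norm(w - w_q) ** 2                 # weight-approximation error
    return loss_gap + lambda_w * approx_err

# Toy usage: a quadratic stand-in "loss" over a random weight vector.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=256)
loss_fn = lambda v: float(np.sum((v - 0.05) ** 2))
w_q = quantize_ternary(w)
print("objective:", quantization_objective(w, w_q, loss_fn))
```

In this sketch the loss-gap term plays the role of the "explicit loss-error" awareness, while the squared-distance term penalizes deviation from the full-precision weights; how ELQ actually balances and optimizes these quantities is detailed in the paper itself.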