SYQ: Learning Symmetric Quantization for Efficient Deep Neural Networks

@inproceedings{Faraone2018SYQLS,
  title={SYQ: Learning Symmetric Quantization for Efficient Deep Neural Networks},
  author={Julian Faraone and Nicholas J. Fraser and Michaela Blott and Philip Heng Wai Leong},
  booktitle={CVPR},
  year={2018}
}
Inference for state-of-the-art deep neural networks is computationally expensive, making them difficult to deploy on constrained hardware environments. An efficient way to reduce this complexity is to quantize the weight parameters and/or activations during training by approximating their distributions with a limited entry codebook. For very low precisions, such as binary or ternary networks with 1-8-bit activations, the information loss from quantization leads to significant accuracy degradation.
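
To make the codebook idea concrete, below is a minimal NumPy sketch of symmetric ternary quantization with per-row subgroup scaling, in the spirit of the paper. The helper name syq_forward, the fixed threshold rule, and the mean-based initialization of the scaling coefficients are illustrative assumptions; the paper itself learns the scaling coefficients by gradient descent (with a straight-through estimator through the quantizer), and its subgroups may be pixel-wise rather than row-wise.

import numpy as np

def syq_forward(W, alpha, t=0.05):
    # Symmetric ternary quantization of a 2-D weight matrix W.
    # Each row i is treated as a subgroup with its own scaling
    # coefficient alpha[i]; entries within +/- t*max|W| snap to 0,
    # the rest to +/- alpha[i]. (Hypothetical sketch, not the
    # paper's exact procedure.)
    thresh = t * np.abs(W).max()
    mask = (np.abs(W) > thresh).astype(W.dtype)  # 0 for pruned weights
    codes = np.sign(W) * mask                    # ternary codes {-1, 0, +1}
    return alpha[:, None] * codes                # symmetric: +alpha vs. -alpha

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 8))
alpha = np.abs(W).mean(axis=1)                   # initialize scales from row means
Wq = syq_forward(W, alpha)                       # quantized weights for the forward pass

Because each subgroup shares a single symmetric pair {+alpha, -alpha}, the quantized matrix stays hardware-friendly: multiplications reduce to sign flips plus one scale per subgroup.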