Loss-aware Weight Quantization of Deep Networks

@article{Hou2018LossawareWQ,
  title={Loss-aware Weight Quantization of Deep Networks},
  author={Lu Hou and James T. Kwok},
  journal={CoRR},
  year={2018},
  volume={abs/1802.08635}
}
The huge size of deep networks hinders their use in small computing devices. In this paper, we consider compressing the network by weight quantization. We extend a recently proposed loss-aware weight binarization scheme to ternarization, with possibly different scaling parameters for the positive and negative weights, and to m-bit (where m > 2) quantization. Experiments on feedforward and recurrent neural networks show that the proposed scheme outperforms state-of-the-art weight quantization…
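To make the ternarization idea concrete, the sketch below quantizes a weight vector to {-alpha_n, 0, +alpha_p}, using separate scaling parameters for the positive and negative weights as the abstract describes. It is only a simple magnitude-based heuristic for illustration; the paper's actual loss-aware scheme chooses the quantized values by (approximately) minimizing the training loss, which the abstract does not spell out. The threshold factor 0.7 and the function name are assumptions.

```python
import numpy as np

def ternarize(w, delta_factor=0.7):
    """Magnitude-based ternarization with separate positive/negative scales.

    A heuristic sketch, not the paper's loss-aware algorithm.
    w            : 1-D array of full-precision weights
    delta_factor : assumed threshold multiplier controlling sparsity
    Returns weights quantized to {-alpha_n, 0, +alpha_p}.
    """
    delta = delta_factor * np.mean(np.abs(w))   # zero out small weights
    pos = w > delta
    neg = w < -delta
    # Separate scaling parameters for positive and negative weights.
    alpha_p = w[pos].mean() if pos.any() else 0.0
    alpha_n = np.abs(w[neg]).mean() if neg.any() else 0.0
    q = np.zeros_like(w)
    q[pos] = alpha_p
    q[neg] = -alpha_n
    return q

# Usage: quantize a small weight vector.
w = np.array([0.9, -0.4, 0.05, -0.8, 0.3])
print(ternarize(w))
```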