Training Quantized Nets: A Deeper Understanding

Currently, deep neural networks are deployed on low-power embedded devices by first training a full-precision model using powerful computing hardware, and then deriving a corresponding low-precision model for efficient inference on such systems. However, training models directly with coarsely quantized weights is a key step towards learning on embedded…
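To make the idea of training directly with coarsely quantized weights concrete, here is a minimal sketch of one common scheme (a BinaryConnect-style approach, not necessarily the exact method of this paper): the forward pass uses quantized weights, while gradient updates accumulate in a full-precision copy. The toy linear-regression problem, learning rate, and `quantize` helper below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def quantize(w):
    # Deterministic sign-based binarization; map 0 -> +1 so
    # every quantized weight lies in {-1, +1}.
    return np.sign(w) + (w == 0)

# Toy linear-regression task with binary ground-truth weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4))
true_w = np.array([1.0, -1.0, 1.0, -1.0])
y = X @ true_w

w = rng.normal(scale=0.1, size=4)  # full-precision "shadow" weights
lr = 0.01
for _ in range(200):
    wq = quantize(w)            # coarse weights used in the forward pass
    err = X @ wq - y
    grad = X.T @ err / len(X)   # gradient evaluated at the quantized point
    w -= lr * grad              # update applied to the full-precision copy

print(quantize(w))
```

Keeping the full-precision accumulator is what lets many small gradient steps eventually flip a quantized weight's sign; updating the quantized weights directly would discard any step smaller than the quantization gap.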

7 Figures & Tables


