Corpus ID: 207870482

Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers

@article{Zhang2019AdaptivePT,
  title={Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers},
  author={Xishan Zhang and Shaoli Liu and R. Zhang and C. Liu and Di Huang and Shi-yi Zhou and Jiaming Guo and Yu Kang and Q. Guo and Zidong Du and Yunji Chen},
  journal={ArXiv},
  year={2019},
  volume={abs/1911.00361}
}
Recently emerged quantization techniques have been applied to the inference of deep neural networks for fast and efficient execution. However, directly applying quantization during training can cause significant accuracy loss, so quantized training remains an open challenge.
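To make the technique the abstract refers to concrete, below is a minimal sketch of fixed-point quantization of a float tensor: values are mapped to low bit-width signed integers with a shared power-of-two scale, and the round-trip error illustrates why naive quantization of back-propagated gradients is fragile. The function names, the symmetric per-tensor scaling, and the power-of-two scale choice are illustrative assumptions, not the paper's specific method.

```python
# A minimal sketch of fixed-point quantization (assumptions: symmetric,
# per-tensor, power-of-two scale; NOT the paper's adaptive scheme).
import numpy as np

def quantize_fixed_point(x: np.ndarray, bits: int = 8):
    """Quantize a float array to signed fixed-point with `bits` total bits."""
    qmax = 2 ** (bits - 1) - 1              # e.g. 127 for 8-bit
    max_abs = float(np.max(np.abs(x))) or 1.0  # guard against an all-zero tensor
    # Power-of-two scale so rescaling reduces to a bit shift in hardware.
    exponent = int(np.ceil(np.log2(max_abs / qmax)))
    scale = 2.0 ** exponent
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int32)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float array from the fixed-point representation."""
    return q.astype(np.float32) * scale

# Round-trip example: the residual x - dequantize(q, scale) is the
# quantization error that accumulates when gradients are quantized.
x = np.random.randn(4).astype(np.float32)
q, scale = quantize_fixed_point(x, bits=8)
print(x)
print(dequantize(q, scale))
```

Applying this round-trip to weights and activations at inference time is the well-studied case; the open challenge the abstract names is doing the same to gradients during back propagation without destabilizing training.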
3 Citations
Adaptive Precision Training for Resource Constrained Devices
