Corpus ID: 16349374

Training deep neural networks with low precision multiplications

@article{Courbariaux2014TrainingDN,
  title={Training deep neural networks with low precision multiplications},
  author={Matthieu Courbariaux and Yoshua Bengio and J. David},
  journal={arXiv: Learning},
  year={2014}
}
Multipliers are the most space- and power-hungry arithmetic operators in digital implementations of deep neural networks. [...] Key result: for example, it is possible to train Maxout networks with 10-bit multiplications.
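The idea suggested by the abstract, feeding the multipliers low-precision fixed-point operands while leaving the rest of the training arithmetic in higher precision, can be sketched with a small simulation. This is a minimal illustrative sketch, not the authors' implementation: the function names and the choice of 7 fractional bits out of 10 are assumptions made here, and the paper also studies floating-point and dynamic fixed-point formats.

```python
import numpy as np

def quantize_fixed_point(x, total_bits=10, frac_bits=7):
    """Round x onto a signed fixed-point grid with `total_bits` bits,
    `frac_bits` of them fractional (round-to-nearest, saturating).
    The 10/7 split is an assumption for this example, not the paper's setting."""
    scale = 2.0 ** frac_bits
    q_max = 2.0 ** (total_bits - 1) - 1    # largest integer code, e.g. 511 for 10 bits
    q_min = -(2.0 ** (total_bits - 1))     # smallest integer code, e.g. -512
    return np.clip(np.round(x * scale), q_min, q_max) / scale

def low_precision_matmul(a, b, total_bits=10, frac_bits=7):
    """Simulate a low-precision multiplier array: quantize both operands
    before the products; accumulation stays in ordinary float32."""
    qa = quantize_fixed_point(a, total_bits, frac_bits)
    qb = quantize_fixed_point(b, total_bits, frac_bits)
    return qa @ qb

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.standard_normal((4, 8)).astype(np.float32)
    b = rng.standard_normal((8, 3)).astype(np.float32)
    err = np.max(np.abs(low_precision_matmul(a, b) - a @ b))
    print(f"max abs deviation from float32 matmul: {err:.4f}")
```

In a dynamic fixed-point scheme, `frac_bits` would not be a fixed constant; the position of the radix point would be adjusted per layer during training according to the observed range of values.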
398 Citations
• Deep Neural Network Training without Multiplications
• Low-Precision Floating-Point Schemes for Neural Network Training (16 citations)
• Hardware-software codesign of accurate, multiplier-free Deep Neural Networks (49 citations)
• Deep Learning with Limited Numerical Precision (1,235 citations)
• Low-Precision Batch-Normalized Activations (8 citations)
• Tai-Yu Cheng, J. Yu, M. Hashimoto. Minimizing Power for Neural Network Training with Logarithm-Approximate Floating-Point Multiplier. 2019 29th International Symposium on Power and Timing Modeling, Optimization and Simulation (PATMOS), 2019. (1 citation)
• Quantization of Constrained Processor Data Paths Applied to Convolutional Neural Networks (3 citations)

References

Showing 1-10 of 36 references
• Deep Learning with Limited Numerical Precision (1,235 citations)
• Improving the speed of neural networks on CPUs (603 citations)
• The Impact of Arithmetic Representation on Implementing MLP-BP on FPGAs: A Study (147 citations)
• Backpropagation without Multiplication (25 citations)
• ImageNet classification with deep convolutional neural networks (62,461 citations)
• Yunji Chen, Tao Luo, +8 authors, O. Temam. DaDianNao: A Machine-Learning Supercomputer. 2014 47th Annual IEEE/ACM International Symposium on Microarchitecture, 2014. (921 citations)
• A highly scalable Restricted Boltzmann Machine FPGA implementation (68 citations)
• DianNao: a small-footprint high-throughput accelerator for ubiquitous machine-learning (1,047 citations)
• A fixed point implementation of the backpropagation learning algorithm (8 citations)
• Stochastic Pooling for Regularization of Deep Convolutional Neural Networks (731 citations)