Corpus ID: 26450018

Low precision arithmetic for deep learning

@article{Courbariaux2015LowPA,
  title={Low precision arithmetic for deep learning},
  author={Matthieu Courbariaux and Yoshua Bengio and Jean-Pierre David},
  journal={CoRR},
  year={2015},
  volume={abs/1412.7024}
}
We simulate the training of a set of state-of-the-art neural networks, the Maxout networks (Goodfellow et al., 2013a), on three benchmark datasets: MNIST, CIFAR-10 and SVHN, with three distinct arithmetics: floating point, fixed point and dynamic fixed point. For each of those datasets and for each of those arithmetics, we assess the impact of the precision of the computations on the final error of the training. We find that very low precision computation is sufficient not just for running…
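As an informal illustration of the fixed point and dynamic fixed point formats compared in the abstract (a NumPy sketch of my own, not the authors' code; function names and bit-widths are illustrative), rounding to the two formats could be simulated along these lines:

    import numpy as np

    def to_fixed_point(x, integer_bits, fractional_bits):
        """Plain fixed point: a fixed split between integer and fractional bits."""
        scale = 2.0 ** fractional_bits
        max_val = 2.0 ** integer_bits - 1.0 / scale
        return np.clip(np.round(x * scale) / scale, -max_val, max_val)

    def to_dynamic_fixed_point(x, total_bits):
        """Dynamic fixed point: the tensor shares one exponent chosen from its data,
        so the same narrow bit-width can cover weights, activations and gradients
        whose magnitudes differ by orders of magnitude."""
        max_abs = np.max(np.abs(x)) + 1e-12
        exponent = int(np.ceil(np.log2(max_abs)))
        fractional_bits = total_bits - 1 - exponent   # one bit reserved for the sign
        scale = 2.0 ** fractional_bits
        q = np.clip(np.round(x * scale), -2 ** (total_bits - 1), 2 ** (total_bits - 1) - 1)
        return q / scale

    # Gradients are typically much smaller than weights, so the shared dynamic
    # exponent preserves them where a plain fixed-point format rounds them to zero.
    grads = np.random.randn(4) * 1e-3
    print(to_fixed_point(grads, integer_bits=4, fractional_bits=3))   # mostly zeros
    print(to_dynamic_fixed_point(grads, total_bits=8))                # close to the originals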
Quantization Error as a Metric for Dynamic Precision Scaling in Neural Net Training
TLDR: A novel dynamic precision scaling (DPS) scheme is presented that achieves 98.8% test accuracy on the MNIST dataset using an average bit-width of just 16 bits for weights and 14 bits for activations, compared to the standard 32-bit floating-point values used in deep learning frameworks.
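A hypothetical reading of how quantization error could drive such a dynamic precision scaling loop (the threshold, update rule and names below are my own assumptions, not taken from the paper):

    import numpy as np

    def quantize(x, bits):
        """Uniform symmetric quantization of a tensor to the given bit-width."""
        scale = np.max(np.abs(x)) / (2 ** (bits - 1) - 1) + 1e-12
        return np.round(x / scale) * scale

    def relative_quantization_error(x, bits):
        return np.linalg.norm(x - quantize(x, bits)) / (np.linalg.norm(x) + 1e-12)

    def adjust_bit_width(x, bits, target_error=0.01, min_bits=4, max_bits=32):
        """Grow the bit-width when the error exceeds the target, shrink it when
        a narrower format would still meet the target."""
        if relative_quantization_error(x, bits) > target_error:
            return min(bits + 1, max_bits)
        if bits > min_bits and relative_quantization_error(x, bits - 1) <= target_error:
            return bits - 1
        return bits

    weights = np.random.randn(1000)
    bits = 16
    for epoch in range(10):            # e.g. re-evaluated once per epoch
        bits = adjust_bit_width(weights, bits)
    print("chosen weight bit-width:", bits)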
Deep Learning with Limited Numerical Precision
TLDR: The results show that deep networks can be trained using only a 16-bit wide fixed-point number representation when stochastic rounding is used, and incur little to no degradation in classification accuracy.
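Stochastic rounding is usually defined as rounding up with probability equal to the fractional remainder, which makes the rounding unbiased in expectation; a minimal sketch of my own, under that standard definition:

    import numpy as np

    def stochastic_round_fixed(x, fractional_bits, rng=np.random.default_rng()):
        """Round to a fixed-point grid; round up with probability equal to the
        distance already covered towards the next grid point."""
        scale = 2.0 ** fractional_bits
        scaled = x * scale
        floor = np.floor(scaled)
        prob_up = scaled - floor
        return (floor + (rng.random(np.shape(x)) < prob_up)) / scale

    # Tiny gradient updates survive on average instead of always rounding to zero.
    updates = np.full(100_000, 0.001)
    print(stochastic_round_fixed(updates, fractional_bits=8).mean())  # close to 0.001
    print(np.round(updates * 2 ** 8).mean() / 2 ** 8)                 # nearest rounding gives 0.0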
Investigating the Effects of Dynamic Precision Scaling on Neural Network Training
TLDR: A dynamic precision scaling scheme is developed that combines stochastic fixed-point rounding, quantization-error based scaling, and dynamic bit-widths during training, achieving 98.8% test accuracy on the MNIST dataset.
Reduced-Precision Strategies for Bounded Memory in Deep Neural Nets
TLDR: This work investigates how using reduced-precision data in convolutional neural networks affects network accuracy during classification and proposes a method for finding a low-precision configuration for a network while maintaining high accuracy.
Overcoming Challenges in Fixed Point Training of Deep Convolutional Networks
TLDR: This work attempts to draw a theoretical connection between low numerical precision and training algorithm stability, and proposes and verifies through experiments methods that improve the training performance of deep convolutional networks in fixed point.
Rethinking Numerical Representations for Deep Neural Networks
TLDR: This work explores unconventional narrow-precision floating-point representations as they relate to inference accuracy and efficiency, to steer the improved design of future DNN platforms, and presents a novel technique that drastically reduces the time required to derive the optimal precision configuration.
BinaryConnect: Training Deep Neural Networks with binary weights during propagations
TLDR: BinaryConnect is introduced, a method that trains a DNN with binary weights during the forward and backward propagations while retaining the precision of the stored weights in which gradients are accumulated; near state-of-the-art results with BinaryConnect are obtained on permutation-invariant MNIST, CIFAR-10 and SVHN.
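The mechanism summarized above can be sketched in a few lines: binarize the stored real-valued weights for the forward and backward pass, but apply the gradient update to the real-valued copy. This is an illustrative NumPy toy (one linear layer, mean-squared error), not the authors' implementation:

    import numpy as np

    rng = np.random.default_rng(0)
    W_real = rng.normal(scale=0.1, size=(4, 3))       # full-precision stored weights
    x = rng.normal(size=(8, 4))
    target = rng.normal(size=(8, 3))

    def binarize(w):
        return np.where(w >= 0, 1.0, -1.0)            # deterministic sign binarization

    for step in range(100):
        W_bin = binarize(W_real)                      # binary weights used in both propagations
        y = x @ W_bin                                 # forward pass with binary weights
        grad_y = 2.0 * (y - target) / len(x)          # gradient of the MSE loss
        grad_W = x.T @ grad_y                         # weight gradient; in deeper nets the error
                                                      # is also backpropagated through W_bin
        W_real -= 0.01 * grad_W                       # gradients accumulate in the real-valued weights
        W_real = np.clip(W_real, -1.0, 1.0)           # keep the stored weights bounded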
Reduced-Precision Memory Value Approximation for Deep Learning
Neural networks (NNs) and deep learning currently provide the best solutions to many problems in image recognition, speech recognition, and natural language processing. As the complexity of the NNs…
FPGA-based training of convolutional neural networks with a reduced precision floating-point library
TLDR: An FPGA-based CNN training engine, FCTE, implemented using High-Level Synthesis (HLS) and targeting the Xilinx Kintex UltraScale XCKU115 device, is discussed; it is demonstrated that an exponent width of 6 and a mantissa width of 5 achieve accuracy comparable to single-precision floating point for the MNIST and CIFAR-10 datasets.
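One plausible way to emulate such a narrow floating-point format in software, for experimentation, is to round the mantissa and clamp the exponent range of an ordinary double. The sketch below is my own approximation (subnormals and exact IEEE biasing are ignored), not the paper's HLS library:

    import numpy as np

    def quantize_float(x, exp_bits=6, man_bits=5):
        """Round x as if it were stored with exp_bits of exponent and man_bits of mantissa."""
        x = np.asarray(x, dtype=np.float64)
        mantissa, exponent = np.frexp(x)              # x = mantissa * 2**exponent, |mantissa| in [0.5, 1)
        mantissa = np.round(mantissa * 2.0 ** (man_bits + 1)) / 2.0 ** (man_bits + 1)
        max_exp = 2 ** (exp_bits - 1)                 # rough representable exponent range
        min_exp = -(2 ** (exp_bits - 1)) + 2
        y = np.ldexp(mantissa, exponent)
        largest = (1.0 - 2.0 ** -(man_bits + 1)) * 2.0 ** max_exp
        y = np.where(exponent > max_exp, np.sign(x) * largest, y)   # saturate on overflow
        y = np.where(exponent < min_exp, 0.0, y)                    # flush underflow to zero
        return y

    print(quantize_float([0.1, 3.14159, 1e-12, 1e12]))
    # small relative error on ordinary values, zero/saturation at the extremes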
Accuracy to Throughput Trade-Offs for Reduced Precision Neural Networks on Reconfigurable Logic
TLDR: This work proposes a quantization training strategy that allows reduced-precision NN inference with a lower memory footprint and competitive model accuracy, and quantitatively formulates the relationship between data representation and hardware efficiency.