Corpus ID: 219176980

Improved stochastic rounding

@article{Xia2020ImprovedSR,
  title={Improved stochastic rounding},
  author={Lu Xia and Martijn Anthonissen and Michiel E. Hochstenbach and Barry Koren},
  journal={ArXiv},
  year={2020},
  volume={abs/2006.00489}
}
Due to the limited number of bits in floating-point or fixed-point arithmetic, rounding is a necessary step in many computations. Although rounding methods can be tailored for different applications, round-off errors are generally unavoidable. When a sequence of computations is implemented, round-off errors may be magnified or accumulated. The magnification of round-off errors may cause serious failures. Stochastic rounding (SR) was introduced as an unbiased rounding method, which is widely… 
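As a rough illustration of the basic idea (a minimal sketch, not the improved schemes proposed in the paper; the function name and the uniform grid spacing eps are ad hoc simplifications), stochastic rounding maps a value to one of its two neighbouring representable numbers with probabilities chosen so that the expected result equals the exact value:

import math
import random

def stochastic_round(x, eps):
    """Round x to the grid {k * eps} stochastically.

    x is rounded up with probability equal to its relative distance from the
    lower grid point, so E[stochastic_round(x)] = x: the rounding is unbiased.
    """
    lower = math.floor(x / eps) * eps        # grid point just below (or equal to) x
    p_up = (x - lower) / eps                 # probability of rounding up
    return lower + eps if random.random() < p_up else lower

# Empirical check of unbiasedness: the mean of many rounded samples
# approaches the exact value, even though 0.3 is not on the grid.
x, eps = 0.3, 0.25
samples = [stochastic_round(x, eps) for _ in range(100_000)]
print(sum(samples) / len(samples))   # close to 0.3

This unbiasedness is what keeps the expected error from drifting in one direction over long sequences of operations.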

References (showing 1-10 of 20)

Stochastic rounding and reduced-precision fixed-point arithmetic for solving neural ODEs.

The Izhikevich neuron model is used to demonstrate that rounding plays an important role in producing accurate spike timings from explicit ODE solution algorithms, and that stochastic rounding consistently results in smaller errors than single-precision floating-point and fixed-point arithmetic with round-to-nearest across a range of neuron behaviours and ODE solvers.
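To make the role of rounding in an explicit ODE solver concrete, here is a toy sketch (a scalar decay equation with a hypothetical 12-fractional-bit fixed-point format; not the Izhikevich model or the solver setup of the referenced paper): once the per-step increment falls below half the grid spacing, round-to-nearest stalls, whereas stochastic rounding still makes progress on average.

import math
import random

FRAC_BITS = 12                      # hypothetical fixed-point format: 12 fractional bits
SCALE = 1 << FRAC_BITS              # grid spacing is 1 / SCALE ≈ 0.000244

def to_fixed_sr(x):
    """Quantize a real value to the fixed-point grid with stochastic rounding."""
    scaled = x * SCALE
    lower = math.floor(scaled)
    return lower + (1 if random.random() < scaled - lower else 0)

def euler_decay(y0=1.0, lam=1.0, h=0.01, steps=500):
    """Explicit Euler for y' = -lam * y, keeping the state y in fixed point."""
    y = to_fixed_sr(y0)                      # integer fixed-point state
    for _ in range(steps):
        dy = -lam * (y / SCALE) * h          # increment computed in real arithmetic
        # With round-to-nearest the increment rounds to zero once |dy| < 1/(2*SCALE)
        # (here once y drops below about 0.012) and the decay stalls; stochastic
        # rounding is unbiased, so the solution keeps decaying on average.
        y += to_fixed_sr(dy)
    return y / SCALE

print(euler_decay())   # roughly exp(-5) ≈ 0.0067 despite the coarse grid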

On-chip training of recurrent neural networks with limited numerical precision

It is shown that batch normalization of input sequences can help speed up training at low precision as well as at high precision, and that a piecewise-linear activation function with stochastic rounding can achieve training results comparable to floating-point precision.

Low-Precision Floating-Point Schemes for Neural Network Training

A simplified model is introduced in which both the outputs and the gradients of the neural networks are constrained to power-of-two values, using just 7 bits for their representation, significantly reducing the training time as well as the energy consumption and memory requirements during the training and inference phases.
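A hedged sketch of what such a power-of-two constraint can look like (an illustration only; the assumed exponent range and the nearest-in-log rounding rule are not taken from the paper):

import math

EXP_MIN, EXP_MAX = -31, 31          # hypothetical exponent range for a 7-bit code

def quantize_pow2(x):
    """Map x to a signed power of two, sign * 2**e, with e clipped to a fixed range."""
    if x == 0.0:
        return 0.0
    e = round(math.log2(abs(x)))             # nearest exponent in log scale
    e = max(EXP_MIN, min(EXP_MAX, e))        # saturate to the representable range
    return math.copysign(2.0 ** e, x)

print(quantize_pow2(0.3))    # 0.25 = 2**-2
print(quantize_pow2(-3.1))   # -4.0 = -(2**2)

Multiplications by such values reduce to bit shifts, which is where the savings in time, energy and memory come from.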

Deep Learning with Limited Numerical Precision

The results show that deep networks can be trained using only a 16-bit-wide fixed-point number representation when stochastic rounding is used, and incur little to no degradation in classification accuracy.

Training Deep Neural Networks with 8-bit Floating Point Numbers

This work demonstrates, for the first time, the successful training of deep neural networks using 8-bit floating-point numbers while fully maintaining the accuracy on a spectrum of deep learning models and datasets.

Rounding algorithms for IEEE multipliers

A new, fast and efficient technique is presented for computing the sticky bit directly from the carry-save form without incurring the expense of a carry-propagate addition.
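For context, the sticky bit is the OR of all bits that are about to be discarded; the sketch below shows only this textbook definition, not the paper's technique for obtaining it directly from a carry-save (sum, carry) pair without a carry-propagate addition:

def sticky_bit(mantissa, discarded_bits):
    """Sticky bit used in IEEE rounding: the OR of all bits about to be dropped."""
    mask = (1 << discarded_bits) - 1
    return 1 if (mantissa & mask) != 0 else 0

# Example: dropping the low 8 bits of a product; the sticky bit records whether
# anything non-zero was lost, which is what breaks ties in round-to-nearest-even.
product = 0b1011_0110_0001_0000
print(sticky_bit(product, 8))   # 1, because one of the dropped bits is set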

Stochastic truncation for the (2+1)D Ising model

The method of 'symmetrized' stochastic truncation is applied to the (2+1)-dimensional Ising model, to calculate the ground-state energy, mass gap, magnetization and susceptibility on lattices of …

Simulating Low Precision Floating-Point Arithmetic

The half-precision (fp16) floating-point format, defined in the 2008 revision of the IEEE standard for floating-point arithmetic, and a more recently proposed half-precision format bfloat16, are in...
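A minimal sketch of how such a format can be simulated in software, here rounding a float32 bit pattern to bfloat16 with round-to-nearest-even (an illustration only; overflow to infinity and NaN payloads, which a full simulator must handle, are ignored):

import struct

def to_bfloat16(x):
    """Round a Python float to the nearest bfloat16 value, returned as an ordinary float."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]   # float32 bit pattern
    lsb = (bits >> 16) & 1                                # last bit that survives
    bits = (bits + 0x7FFF + lsb) & 0xFFFF0000             # round to nearest, ties to even
    return struct.unpack("<f", struct.pack("<I", bits))[0]

print(to_bfloat16(1.0 / 3.0))   # 0.333984375, the closest bfloat16 to 1/3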

Pareto optimality in multiobjective problems

In this study, the optimization theory of Dubovitskii and Milyutin is extended to multiobjective optimization problems, producing new necessary conditions for local Pareto optima. Cones of directions …