# Error estimation of floating-point summation and dot product

@article{Rump2012ErrorEO, title={Error estimation of floating-point summation and dot product}, author={Siegfried M. Rump}, journal={BIT Numerical Mathematics}, year={2012}, volume={52}, pages={201-220} }

We improve the well-known Wilkinson-type estimates for the error of standard floating-point recursive summation and dot product by up to a factor 2. The bounds are valid when computed in rounding to nearest, no higher order terms are necessary, and they are best possible. For summation there is no restriction on the number of summands. The proofs are short by using a new tool for the estimation of errors in floating-point computations which cures drawbacks of the “unit in the last place (ulp…
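The abstract's setting is easy to reproduce numerically: standard recursive summation in binary64, with the observed error checked against the first-order Wilkinson-type bound $(n-1)\,\mathbf{u}\sum|x_i|$ (assumed here, following the abstract, as the form that holds without higher-order terms). A minimal sketch:

```python
import math
import random

def recursive_sum(xs):
    """Standard left-to-right recursive floating-point summation."""
    s = 0.0
    for x in xs:
        s = s + x
    return s

u = 2.0 ** -53  # unit roundoff for IEEE 754 binary64, round to nearest

random.seed(0)
xs = [random.uniform(-1.0, 1.0) for _ in range(1000)]

s_hat = recursive_sum(xs)
s_exact = math.fsum(xs)  # correctly rounded reference sum
err = abs(s_hat - s_exact)

n = len(xs)
# First-order bound with no higher-order term and no restriction on n
bound = (n - 1) * u * sum(abs(x) for x in xs)
print(err <= bound)
```

The large gap between the observed error and the bound is expected: the bound covers the worst case, while random data cancels most rounding errors.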


## 39 Citations

Improvement of the error bound for the dot product using the unit in the first place

- Mathematics
- 2016

This paper is concerned with rounding error estimation for the dot product for numerical computations. Recently, Rump proposed a new type of error bounds for summation and the dot product using the…

Enhanced Floating-Point Sums, Dot Products, and Polynomial Values

- Computer Science
- 2010

In this chapter, we focus on the computation of sums and dot products, and on the evaluation of polynomials in IEEE 754 floating-point arithmetic. Such calculations arise in many fields of numerical…

Exploiting Structure in Floating-Point Arithmetic

- Mathematics, Computer Science
- MACIS
- 2015

This paper reviews some recent improvements of several classical, Wilkinson-style error bounds from linear algebra and complex arithmetic that all rely on low-level structure properties and how to exploit them in rounding error analysis.

Extension of floating-point filters to absolute and relative errors for numerical computation

- Computer Science
- Journal of Physics: Conference Series
- 2019

This paper extends floating-point filters to guarantee absolute and relative errors in order to verify the accuracy of approximate solutions in the computational geometry field.

Fast interval matrix multiplication

- Computer Science, Mathematics
- Numerical Algorithms
- 2011

Several methods for the multiplication of point and/or interval matrices with interval result, based on new a priori estimates of the error of floating-point matrix products, are discussed, one of which is proved to be optimal.

Error estimates for the summation of real numbers with application to floating-point summation

- Mathematics
- 2017

Standard Wilkinson-type error estimates of floating-point algorithms involve a factor $\gamma_k := k\mathbf{u}/(1-k\mathbf{u})$ for $\mathbf{u}$ denoting the relative rounding…
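For reference, $\gamma_k$ exceeds the plain first-order factor $k\mathbf{u}$ only at second order, which is what makes removing it attractive. A quick check in plain Python ($\mathbf{u} = 2^{-53}$ assumed for binary64):

```python
# Classical Wilkinson factor gamma_k = k*u / (1 - k*u), valid only while k*u < 1,
# versus the plain first-order factor k*u that the improved bounds achieve.
u = 2.0 ** -53

def gamma(k, u=u):
    assert k * u < 1, "classical bound requires k*u < 1"
    return k * u / (1 - k * u)

k = 10 ** 6
print(gamma(k) > k * u)                      # gamma_k is strictly larger,
print(abs(gamma(k) - k * u) / (k * u) < 1e-9)  # but only at second order in k*u
```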

On the maximum relative error when computing integer powers by iterated multiplications in floating-point arithmetic

- Computer Science, Mathematics
- Numerical Algorithms
- 2015

We improve the usual relative error bound for the computation of $x^n$ through iterated multiplications by $x$ in binary floating-point arithmetic. The obtained error bound is only slightly better than…

Error Bounds for Computer Arithmetics

- Mathematics, Computer Science
- 2019 IEEE 26th Symposium on Computer Arithmetic (ARITH)
- 2019

This note summarizes recent progress in error bounds for compound operations performed in some computer arithmetic by identifying three types A, B, and C of weak sufficient assumptions implying new results and sharper error estimates.

Simple floating-point filters for the two-dimensional orientation problem

- Mathematics
- 2016

This paper is concerned with floating-point filters for a two dimensional orientation problem which is a basic problem in the field of computational geometry. If this problem is only approximately…

On the maximum relative error when computing x^n in floating-point arithmetic

- Mathematics, Computer Science
- arXiv
- 2014

This paper improves the usual relative error bound for the computation of x^n through iterated multiplications by x in binary floating-point arithmetic and discusses the more general problem of computing the product of n terms.
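The setting these two papers analyze is easy to reproduce: compute $x^n$ by $n-1$ iterated multiplications and measure the relative error exactly via rationals. The bound $(n-1)u$ below is the standard first-order estimate, not the sharper constants from the papers:

```python
from fractions import Fraction

u = 2.0 ** -53  # unit roundoff for binary64

def iter_pow(x, n):
    """Compute x**n by n-1 iterated floating-point multiplications."""
    p = x
    for _ in range(n - 1):
        p = p * x
    return p

x, n = 1.0000001, 50
p_hat = iter_pow(x, n)
exact = Fraction(x) ** n                      # exact rational value of x^n
rel_err = abs(Fraction(p_hat) - exact) / exact
print(rel_err < (n - 1) * u)                  # standard first-order bound holds
```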

## References

Showing 1–10 of 12 references.

Accurate Floating-Point Summation Part I: Faithful Rounding

- Mathematics, Computer Science
- SIAM J. Sci. Comput.
- 2008

This paper presents an algorithm for calculating a faithful rounding of a vector of floating-point numbers, which adapts to the condition number of the sum, and proves certain constants used in the algorithm to be optimal.
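Faithful-rounding summation algorithms in this line of work are built from error-free transformations. A minimal sketch of Knuth's TwoSum, a standard building block (not the paper's full algorithm):

```python
from fractions import Fraction

def two_sum(a, b):
    """Knuth's TwoSum: returns s = fl(a+b) and the exact rounding error e,
    so that a + b = s + e holds exactly in binary floating point (6 flops,
    no branches)."""
    s = a + b
    ap = s - b
    bp = s - ap
    e = (a - ap) + (b - bp)
    return s, e

a, b = 1.0, 2.0 ** -60
s, e = two_sum(a, b)
print(s == 1.0 and e == 2.0 ** -60)  # the discarded low-order part is recovered
print(Fraction(s) + Fraction(e) == Fraction(a) + Fraction(b))  # exact identity
```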

Ultimately Fast Accurate Summation

- Computer Science, Mathematics
- SIAM J. Sci. Comput.
- 2009

Two new algorithms are presented: one computes a faithful rounding of the sum of floating-point numbers, the other a result "as if" computed in $K$-fold precision; both are the fastest known in terms of flops.

Accuracy and stability of numerical algorithms

- Mathematics, Computer Science
- 1991

This book gives a thorough, up-to-date treatment of the behavior of numerical algorithms in finite precision arithmetic by combining algorithmic derivations, perturbation theory, and rounding error analysis.

Handbook of Floating-Point Arithmetic

- Computer Science
- 2009

The Handbook of Floating-point Arithmetic is designed for programmers of numerical applications, compiler designers, programmers of floating-point algorithms, designers of arithmetic operators, and more generally, students and researchers in numerical analysis who wish to better understand a tool used in their daily work and research.

Average-case stability of Gaussian elimination

- Mathematics
- 1990

Gaussian elimination with partial pivoting is unstable in the worst case: the “growth factor” can be as large as $2^{n - 1} $, where n is the matrix dimension, resulting in a loss of $n - 1$ bits of…
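The $2^{n-1}$ worst case is attained by a classical example: 1 on the diagonal, $-1$ strictly below it, and 1 in the last column, on which partial pivoting performs no row swaps. A small sketch verifying the growth factor:

```python
# Worst-case matrix for Gaussian elimination with partial pivoting:
# 1 on the diagonal, -1 strictly below it, 1 in the last column.
# The last column doubles at every elimination step, so the growth
# factor reaches 2**(n-1).
n = 8
A = [[0.0] * n for _ in range(n)]
for i in range(n):
    A[i][i] = 1.0
    A[i][n - 1] = 1.0
    for j in range(i):
        A[i][j] = -1.0

max0 = max(abs(A[i][j]) for i in range(n) for j in range(n))

for k in range(n - 1):           # elimination; pivots are already column-maximal
    for i in range(k + 1, n):
        m = A[i][k] / A[k][k]
        for j in range(k, n):
            A[i][j] -= m * A[k][j]

max_el = max(abs(A[i][j]) for i in range(n) for j in range(n))
print(max_el / max0)  # growth factor: 2**(n-1) = 128.0
```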

On Floating Point Errors in Cholesky

- Mathematics
- 1989

Let $H$ be a symmetric positive definite matrix. Consider solving the linear system $Hx = b$ using Cholesky, forward and back substitution in the standard way, yielding a computed solution $\hat{x}$. The usual…

Handling floating-point exceptions in numeric programs

- Computer Science
- TOPL
- 1996

It is argued that the cheapest short-term solution would be to give full support to most of the required (as opposed to recommended) special features of the IEC/IEEE Standard for Binary Floating-Point Arithmetic.

Fast Inclusion of Interval Matrix Multiplication

- Mathematics, Computer Science
- Reliab. Comput.
- 2005

Numerical results are presented to illustrate that the new algorithms, which calculate an inclusion of the product of interval matrices using rounding-mode-controlled computation, are much faster than the conventional algorithms, while the guaranteed accuracies obtained are comparable.

The vector floating-point unit in a synergistic processor element of a CELL processor

- Computer Science
- 17th IEEE Symposium on Computer Arithmetic (ARITH'05)
- 2005

The floating-point unit in the synergistic processor element of the 1st-generation multi-core CELL processor is described, optimizing the performance-critical single-precision FMA operations, which are executed with a 6-cycle latency at an 11 FO4 cycle time.