Accurate Floating-Point Summation Part I: Faithful Rounding

@article{Rump2008AccurateFS,
  title={Accurate Floating-Point Summation Part I: Faithful Rounding},
  author={Siegfried M. Rump and Takeshi Ogita and Shin'ichi Oishi},
  journal={SIAM J. Sci. Comput.},
  year={2008},
  volume={31},
  pages={189--224}
}
Given a vector of floating-point numbers with exact sum $s$, we present an algorithm for calculating a faithful rounding of $s$, i.e., the result is one of the immediate floating-point neighbors of $s$. If the sum $s$ is a floating-point number, we prove that this is the result of our algorithm. The algorithm adapts to the condition number of the sum, i.e., it is fast for mildly conditioned sums with slowly increasing computing time proportional to the logarithm of the condition number. All… 
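The building block of such summation algorithms is the error-free transformation of a single addition. A minimal Python sketch of Knuth's TwoSum (a primitive this line of work relies on, not the paper's full algorithm) looks like this:

```python
def two_sum(a, b):
    """Knuth's TwoSum: s = fl(a + b) and s + e == a + b exactly,
    for any two IEEE doubles a, b (barring overflow)."""
    s = a + b
    bb = s - a
    e = (a - (s - bb)) + (b - bb)
    return s, e

# The rounding error of one addition is recovered exactly:
s, e = two_sum(1.0, 2.0**-60)   # s == 1.0; the tiny addend survives in e
```

Chaining this transformation over a vector is what lets the computing time adapt to the condition number: the errors form a new vector with the same exact sum.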
Ultimately Fast Accurate Summation
  • S. Rump
  • Computer Science
    SIAM J. Sci. Comput.
  • 2009
TLDR
Two new algorithms are presented: one computes a faithful rounding of the sum of floating-point numbers, the other a result “as if” computed in $K$-fold precision; both are the fastest known in terms of flops.
Accurate Floating-Point Summation Part II: Sign, K-Fold Faithful and Rounding to Nearest
TLDR
An algorithm for calculating the rounded-to-nearest result of $s:=\sum p_i$ for a given vector of floating-point numbers $p_i$, as well as algorithms for directed rounding, working for huge dimensions.
On the Computation of Correctly Rounded Sums
TLDR
In radix-2 floating-point arithmetic, it is proved that under reasonable conditions, an algorithm performing only round-to-nearest additions/subtractions cannot compute the round-to-nearest sum of at least three floating-point numbers.
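A small Python illustration of the underlying difficulty (not an example from the paper): a fixed left-to-right chain of round-to-nearest additions can miss the correctly rounded sum of three doubles, here checked against `math.fsum`:

```python
import math

p = [1.0, 2.0**-53, 2.0**-105]

naive = 0.0
for x in p:           # plain round-to-nearest additions, left to right
    naive += x        # 1.0 + 2**-53 ties to even, so both tiny terms vanish

exact = math.fsum(p)  # correctly rounded sum of the whole vector
```

Here the exact sum lies just above the midpoint between `1.0` and `1.0 + 2**-52`, so the correctly rounded result is `1.0 + 2**-52`, while the naive chain returns `1.0`.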
Faithfully Rounded Floating-point Computations
We present a pair arithmetic for the four basic operations and square root. It can be regarded as a simplified, more-efficient double-double arithmetic. The central assumption on the underlying…
Correct Rounding and a Hybrid Approach to Exact Floating-Point Summation
TLDR
iFastSum improves on the earlier FastSum by requiring no additional space beyond the original array (which is destroyed); HybridSum combines three summation ideas, splitting the mantissa, radix sorting, and iFastSum itself, and its running time is almost constant, independent of the condition number.
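The mantissa-splitting step can be illustrated with Veltkamp's splitting for IEEE doubles (a generic sketch, not HybridSum's actual code; valid as long as `factor * a` does not overflow):

```python
def split(a, factor=2.0**27 + 1):
    """Veltkamp splitting for IEEE doubles: a == hi + lo exactly,
    with hi and lo each fitting in at most 26 significand bits."""
    c = factor * a
    hi = c - (c - a)
    lo = a - hi
    return hi, lo

hi, lo = split(0.1)   # hi + lo == 0.1 exactly; each half has a short significand
```

Splitting every summand into short-significand halves is what makes subsequent exponent-bucketed (radix-sorted) accumulation error-free.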
Faithfully Rounded Floating-point Computations
  • M. Lange
  • Computer Science, Mathematics
  • 2017
TLDR
A pair arithmetic for the four basic operations and square root is presented, which can be regarded as a simplified, more efficient double-double arithmetic and is proved to be faithfully rounded for up to $1/\sqrt{\beta u} - 2$ operations.
Fast Reproducible Floating-Point Summation
TLDR
This work proposes a technique for floating-point summation that is reproducible independent of the order of summation; it uses Rump's algorithm for error-free vector transformation and is much more efficient than using (possibly very) high-precision arithmetic.
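The non-associativity that motivates reproducible summation is easy to demonstrate (a toy Python example, not the paper's algorithm):

```python
import math

# Floating-point addition is not associative, so a naive sum
# depends on summation order -- the problem reproducible summation solves.
p = [2.0**53, 1.0, 1.0, -2.0**53]
left_to_right = 0.0
for x in p:
    left_to_right += x   # both 1.0s are absorbed by ties-to-even

q = [1.0, 1.0, 2.0**53, -2.0**53]   # same multiset, different order
reordered = 0.0
for x in q:
    reordered += x       # the small terms combine first and survive

exact = math.fsum(p)     # the exact sum is 2.0
```

A parallel reduction effectively reorders the summands between runs, so without an error-free or pre-rounding scheme the same data can yield different results.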
Algorithm 908
TLDR
A novel, online algorithm for exact summation of a stream of floating-point numbers that is the fastest, most accurate, and most memory efficient among known algorithms.
Fast high precision summation
TLDR
Given a vector $p_i$ of floating-point numbers with exact sum $s$, a new algorithm is presented that is fast in terms of measured computing time because it allows good instruction-level parallelism, and that is more accurate and faster than competitors such as XBLAS.
Using Floating-Point Intervals for Non-Modular Computations in Residue Number System
TLDR
This work computes an interval evaluation of the fractional representation of an RNS number in floating-point arithmetic of limited precision, proposes new algorithms for magnitude comparison and general division in RNS, and implements them for GPUs using the CUDA platform.

References

SHOWING 1-10 OF 57 REFERENCES
Accurate Floating-Point Summation Part II: Sign, K-Fold Faithful and Rounding to Nearest
TLDR
An algorithm for calculating the rounded-to-nearest result of $s:=\sum p_i$ for a given vector of floating-point numbers $p_i$, as well as algorithms for directed rounding, working for huge dimensions.
Fast and Accurate Floating Point Summation with Application to Computational Geometry
TLDR
The results show that in the absence of massive cancellation (the most common case) the cost of guaranteed accuracy is about 30–40% more than the straightforward summation, and the accurate summation algorithm improves the existing algorithm by a factor of two on a nearly coplanar set of points.
Accurate and Efficient Floating Point Summation
TLDR
Several simple algorithms for accurately computing the sum of n floating point numbers using a wider accumulator are presented and how the cost of sorting can be reduced or eliminated while retaining accuracy is investigated.
A New Distillation Algorithm for Floating-Point Summation
TLDR
This work presents a new distillation algorithm for floating-point summation which is stable, efficient, and accurate and does not rely on the choice of radix or any other specific assumption.
On properties of floating point arithmetics: numerical stability and the cost of accurate computations
TLDR
It is concluded that programmers and theorists alike must be willing to adopt a more sophisticated view of floating point arithmetic, even if only to consider that more accurate and reliable computations than those presently in common use might be possible based on stronger hypotheses than are customarily assumed.
Accurate Sum and Dot Product
Algorithms for summation and dot product of floating-point numbers are presented which are fast in terms of measured computing time. We show that the computed results are as accurate as if computed…
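The compensated summation behind such results can be sketched as a generic Sum2-style loop (an illustration in Python, not the authors' tuned implementation):

```python
def two_sum(a, b):
    """Error-free transformation: s + e == a + b exactly."""
    s = a + b
    bb = s - a
    e = (a - (s - bb)) + (b - bb)
    return s, e

def sum2(p):
    """Compensated summation: recover the exact rounding error of every
    addition and add the accumulated errors back once at the end.
    The result is as accurate as if computed in roughly twice the
    working precision, then rounded."""
    s, sigma = 0.0, 0.0
    for x in p:
        s, e = two_sum(s, x)
        sigma += e
    return s + sigma

vals = [2.0**53, 1.0, 1.0, -2.0**53]
# naive sum(vals) loses both 1.0s; sum2(vals) recovers 2.0
```

The loop costs only a handful of extra flops per element and needs no branches, which is why such algorithms are fast in measured computing time despite the higher flop count.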
A Distillation Algorithm for Floating-Point Summation
TLDR
This paper describes an efficient "distillation" style algorithm which produces a precise sum by exploiting the natural accuracy of compensated cancellation, applicable to all sets of data but particularly appropriate for ill-conditioned data.
Solving Triangular Systems More Accurately and Efficiently
TLDR
An algorithm that solves linear triangular systems accurately and efficiently and that its implementation should run faster than the corresponding XBLAS routine with the same output accuracy is presented.
Algorithms for arbitrary precision floating point arithmetic
  • Douglas M. Priest
  • Computer Science
    [1991] Proceedings 10th IEEE Symposium on Computer Arithmetic
  • 1991
TLDR
The author presents techniques for performing computations of very high accuracy using only straightforward floating-point arithmetic operations of limited precision, and an algorithm is presented which computes the intersection of a line and a line segment.
Adaptive Precision Floating-Point Arithmetic and Fast Robust Geometric Predicates
TLDR
This article offers fast software-level algorithms for exact addition and multiplication of arbitrary precision floating-point values and proposes a technique for adaptive precision arithmetic that can often speed these algorithms when they are used to perform multiprecision calculations that do not always require exact arithmetic, but must satisfy some error bound.
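The exact-addition primitive behind such expansion arithmetic can be sketched as follows (a simplified, zero-eliminating grow-expansion in Python; `grow_expansion` is a hypothetical standalone helper, and exactness needs nothing beyond IEEE round-to-nearest without overflow):

```python
from fractions import Fraction

def two_sum(a, b):
    """Error-free transformation: s + e == a + b exactly."""
    s = a + b
    bb = s - a
    e = (a - (s - bb)) + (b - bb)
    return s, e

def grow_expansion(exp, b):
    """Add one float b to an expansion (a list of floats whose exact sum
    is the represented value). Every two_sum step preserves the exact sum,
    so the returned expansion is again exact; zero components are dropped."""
    q = b
    out = []
    for x in exp:
        q, err = two_sum(q, x)
        if err != 0.0:
            out.append(err)
    out.append(q)
    return out

exp = []
for v in [0.1, 0.2, 0.3, 1e16, -1e16]:
    exp = grow_expansion(exp, v)

# the expansion carries the running sum exactly; check with rational arithmetic
exact = sum(map(Fraction, [0.1, 0.2, 0.3, 1e16, -1e16]))
```

Adaptivity then comes from evaluating only as many leading components as the required error bound demands, falling back to the full exact expansion only in near-degenerate cases.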