# Accurate Floating-Point Summation Part I: Faithful Rounding

@article{Rump2008AccurateFS,
  title   = {Accurate Floating-Point Summation Part I: Faithful Rounding},
  author  = {Siegfried M. Rump and Takeshi Ogita and Shin'ichi Oishi},
  journal = {SIAM J. Sci. Comput.},
  year    = {2008},
  volume  = {31},
  pages   = {189--224}
}
• Published 1 October 2008
• Computer Science
• SIAM J. Sci. Comput.
Given a vector of floating-point numbers with exact sum $s$, we present an algorithm for calculating a faithful rounding of $s$, i.e., the result is one of the immediate floating-point neighbors of $s$. If the sum $s$ is a floating-point number, we prove that this is the result of our algorithm. The algorithm adapts to the condition number of the sum, i.e., it is fast for mildly conditioned sums with slowly increasing computing time proportional to the logarithm of the condition number. All…
175 Citations
Ultimately Fast Accurate Summation
• S. Rump
• Computer Science
SIAM J. Sci. Comput.
• 2009
Two new algorithms are presented: one computes a faithful rounding of the sum of floating-point numbers, the other a result “as if” computed in $K$-fold precision; both are the fastest known in terms of flops.
Accurate Floating-Point Summation Part II: Sign, K-Fold Faithful and Rounding to Nearest
• Computer Science
SIAM J. Sci. Comput.
• 2008
An algorithm for calculating the rounded-to-nearest result of $s:=\sum p_i$ for a given vector of floating-point numbers $p_i$, as well as algorithms for directed rounding, working for huge dimensions.
On the Computation of Correctly Rounded Sums
• Computer Science
IEEE Transactions on Computers
• 2009
In radix-2 floating-point arithmetic, it is proved that under reasonable conditions, an algorithm performing only round-to-nearest additions/subtractions cannot compute the round-to-nearest sum of at least three floating-point numbers.
Faithfully Rounded Floating-point Computations
• Mathematics, Computer Science
ACM Trans. Math. Softw.
• 2020
We present a pair arithmetic for the four basic operations and square root. It can be regarded as a simplified, more-efficient double-double arithmetic. The central assumption on the underlying
Correct Rounding and a Hybrid Approach to Exact Floating-Point Summation
• Computer Science
SIAM J. Sci. Comput.
• 2009
iFastSum improves upon the earlier FastSum by requiring no additional space beyond the original array (which is destroyed). HybridSum combines three summation ideas: splitting the mantissa, radix sorting, and using iFastSum; its running time is almost constant, independent of the condition number.
Faithfully Rounded Floating-point Computations
• M. Lange
• Computer Science, Mathematics
• 2017
A pair arithmetic for the four basic operations and square root is presented; it can be regarded as a simplified, more efficient double-double arithmetic and is proved to be faithfully rounded for up to $1/\sqrt{\beta u} - 2$ operations.
Fast Reproducible Floating-Point Summation
• Computer Science
2013 IEEE 21st Symposium on Computer Arithmetic
• 2013
This work proposes a technique for floating-point summation that is reproducible independently of the order of summation; it uses Rump's algorithm for error-free vector transformation and is much more efficient than using (possibly very) high-precision arithmetic.
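The error-free vector transformation referred to in the snippet above is built from Rump's ExtractScalar step, which splits a summand against a power of two so that the high-order parts of all summands can be added without rounding error. A minimal Python sketch (the name `extract_scalar` and the values used are illustrative, assuming round-to-nearest binary64 arithmetic):

```python
def extract_scalar(sigma, p):
    """Rump's ExtractScalar: split p into (q, r) with p = q + r exactly,
    where q holds the high-order bits of p relative to the power of two
    sigma (sigma must dominate |p|); round-to-nearest binary arithmetic."""
    q = (sigma + p) - sigma   # p rounded to the grid ulp(sigma)
    return q, p - q           # the remainder p - q is exact

# Applied across a whole vector with a common sigma, the extracted parts q
# sum without rounding error; the remainders are processed recursively.
```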
Algorithm 908
• Computer Science
ACM Trans. Math. Softw.
• 2010
A novel, online algorithm for exact summation of a stream of floating-point numbers that is the fastest, most accurate, and most memory efficient among known algorithms.
Fast high precision summation
• Computer Science
• 2010
Given a vector $p_i$ of floating-point numbers with exact sum $s$, a new algorithm is presented that is fast in terms of measured computing time because it allows good instruction-level parallelism, and that is more accurate and faster than competitors such as XBLAS.
Using Floating-Point Intervals for Non-Modular Computations in Residue Number System
This work proposes computing an interval evaluation of the fractional representation of an RNS number in limited-precision floating-point arithmetic, derives new algorithms for magnitude comparison and general division in RNS, and implements them for GPUs using the CUDA platform.