Fast high precision summation

@article{Rump2010FastHP,
  title={Fast high precision summation},
  author={Siegfried M. Rump and Takeshi Ogita and Shin'ichi Oishi},
  journal={Nonlinear Theory and Its Applications, IEICE},
  year={2010},
  volume={1},
  pages={2-24}
}
Given a vector p_i of floating-point numbers with exact sum s, we present a new algorithm with the following property: Either the result is a faithful rounding of s, or otherwise the result has a relative error not larger than eps^K cond(∑ p_i) for K to be specified. The statements are also true in the presence of underflow, the computing time does not depend on the exponent range, and no extra memory is required. Our algorithm is fast in terms of measured computing time because it allows good… 
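
The building block behind this family of summation algorithms is an error-free transformation of a pair of floating-point numbers. The sketch below is illustrative only and is not the paper's own algorithm: Knuth's TwoSum splits a rounded addition into the computed sum and its exact rounding error, and one pass over a vector leaves behind an error vector whose exact sum is exactly what ordinary recursive summation lost.

```python
# Illustrative sketch only, not the paper's algorithm.

def two_sum(a: float, b: float) -> tuple[float, float]:
    """Return (s, e) with s = fl(a + b) and a + b = s + e exactly."""
    s = a + b
    t = s - a
    e = (a - (s - t)) + (b - t)
    return s, e

def error_free_pass(p: list[float]) -> tuple[float, list[float]]:
    """Ordinary recursive summation of p plus the vector of rounding errors;
    the exact sum of p equals s plus the exact sum of the error vector."""
    s = 0.0
    errors = []
    for x in p:
        s, e = two_sum(s, x)
        errors.append(e)
    return s, errors
```

Repeating such a pass on the error vector ("distillation") is one way a result can be driven toward a faithful rounding of the exact sum.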

Citations

Ultimately Fast Accurate Summation
  • S. Rump
  • Computer Science
    SIAM J. Sci. Comput.
  • 2009
TLDR
Two new algorithms are presented, one computing a faithful rounding of the sum of floating-point numbers and the other a result "as if" computed in K-fold precision; both are the fastest known in terms of flops.
Fast Reproducible Floating-Point Summation
TLDR
This work proposes a technique for floating-point summation that is reproducible independent of the order of summation, uses Rump's algorithm for error-free vector transformation, and is much more efficient than using (possibly very) high precision arithmetic.
Parallel Reproducible Summation
TLDR
This work proposes a technique for floating point summation that is reproducible independent of the order of summation, and uses Rump's algorithm for error-free vector transformation, which is much more efficient than using (possibly very) high precision arithmetic.
Parallel Accurate and Reproducible Summation
TLDR
This article proposes two efficient parallel algorithms for summing n floating-point numbers that improve the reproducibility of the sums compared to the naive algorithm, regardless of the number of processors used for the computations.
An Efficient Summation Algorithm for the Accuracy, Convergence and Reproducibility of Parallel Numerical Methods
TLDR
A new parallel algorithm for summing a sequence of floating-point numbers is presented; it scales up easily with the number of processors and adds numbers with the same exponent first.
Parallel Online Exact Summation of Floating-point Numbers by Applying MapReduce of Java8
TLDR
This study develops a summation program that can be applied to a stream with MapReduce and can compute at high speed while keeping the result correctly rounded.
Conjugate Gradient Solvers with High Accuracy and Bit-wise Reproducibility between CPU and GPU using Ozaki scheme
TLDR
This study presents an accurate and reproducible implementation of the unpreconditioned CG method on x86 CPUs and NVIDIA GPUs using the Ozaki scheme, and shows examples where the standard FP64 implementation of CG yields non-identical results across different CPUs and GPUs.
A note on Dekker’s FastTwoSum algorithm
TLDR
The original assumptions for an error-free transformation via the FastTwoSum algorithm are recalled, the conditions are generalized to arbitrary bases, and a possible modification of the algorithm is discussed to extend its applicability even further.
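
For reference, a minimal sketch of the FastTwoSum transformation discussed in this note, under the classical assumption that the first operand is at least as large in magnitude as the second (binary arithmetic, round to nearest):

```python
def fast_two_sum(a: float, b: float) -> tuple[float, float]:
    """Dekker's FastTwoSum: return (s, e) with s = fl(a + b) and
    a + b = s + e exactly, assuming |a| >= |b|."""
    s = a + b
    e = b - (s - a)
    return s, e

# Example: fast_two_sum(1.0, 2.0**-60) == (1.0, 2.0**-60); the rounded sum
# discards the small addend, and e recovers it exactly.
```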
Parallel online exact sum for Java8
  • Naoshi Sakamoto
  • Computer Science
    2016 IEEE/ACIS 15th International Conference on Computer and Information Science (ICIS)
  • 2016
TLDR
This study develops a summation program that can be applied to a stream with MapReduce and can compute at high speed while keeping the result correctly rounded.

References

SHOWING 1-10 OF 49 REFERENCES
Accurate Floating-Point Summation Part I: Faithful Rounding
TLDR
This paper presents an algorithm for calculating a faithful rounding of a vector of floating-point numbers, which adapts to the condition number of the sum, and proves certain constants used in the algorithm to be optimal.
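
The adaptivity mentioned above rests on an error-free extraction of high-order parts. The following is only a rough sketch of that idea, assuming sigma is a suitably chosen power of two with |p| <= sigma; the paper's algorithm also specifies how sigma is chosen, iterated, and when to stop.

```python
def extract_scalar(sigma: float, p: float) -> tuple[float, float]:
    """Split p into a high part q aligned to sigma and a low part with
    p == q + low exactly (sigma a power of two, |p| <= sigma assumed)."""
    q = (sigma + p) - sigma
    low = p - q
    return q, low

def extract_vector(sigma: float, p: list[float]) -> tuple[float, list[float]]:
    """Extract the high parts of all elements; with sigma chosen large
    enough (as in the paper), tau accumulates them without rounding error."""
    tau, lows = 0.0, []
    for x in p:
        q, low = extract_scalar(sigma, x)
        tau += q
        lows.append(low)
    return tau, lows
```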
Software for Doubled-Precision Floating-Point Computations
TLDR
A modification of Dekker's method is presented and is proved to be valid in most existing arithmetics, while the original method is valid only in a quite restricted class of arithmetics.
A New Distillation Algorithm for Floating-Point Summation
TLDR
This work presents a new distillation algorithm for floating-point summation which is stable, efficient, and accurate and does not rely on the choice of radix or any other specific assumption.
Accurate and Efficient Floating Point Summation
TLDR
Several simple algorithms for accurately computing the sum of n floating point numbers using a wider accumulator are presented and how the cost of sorting can be reduced or eliminated while retaining accuracy is investigated.
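
A minimal sketch of the wider-accumulator idea from this entry, here with binary32 data summed in a binary64 accumulator; the paper additionally analyzes sorting and how wide the accumulator must be.

```python
import numpy as np

def sum_with_wide_accumulator(data: np.ndarray) -> np.float32:
    """Sum binary32 data in a binary64 accumulator, rounding once at the end."""
    acc = np.float64(0.0)
    for x in data:
        acc += np.float64(x)   # each partial sum carries the full 53-bit precision
    return np.float32(acc)     # single final rounding back to binary32

# Many cancelling binary32 terms summed this way lose far less accuracy
# than a straight binary32 loop.
```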
Accurate Sum and Dot Product
Algorithms for summation and dot product of floating-point numbers are presented which are fast in terms of measured computing time. We show that the computed results are as accurate as if computed in twice the working precision.
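
A compact sketch of compensated (cascaded) summation in the spirit of this reference's Sum2 algorithm: one TwoSum pass plus an ordinary summation of the rounding errors already gives a result about as accurate as if computed in twice the working precision. The two_sum helper is repeated so the snippet runs on its own.

```python
def two_sum(a: float, b: float) -> tuple[float, float]:
    s = a + b
    t = s - a
    return s, (a - (s - t)) + (b - t)

def sum2(p: list[float]) -> float:
    """Compensated summation: accumulate the TwoSum errors and add them back."""
    s, c = 0.0, 0.0
    for x in p:
        s, e = two_sum(s, x)
        c += e                 # running correction in working precision
    return s + c
```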
A Distillation Algorithm for Floating-Point Summation
TLDR
This paper describes an efficient "distillation" style algorithm which produces a precise sum by exploiting the natural accuracy of compensated cancellation, applicable to all sets of data but particularly appropriate for ill-conditioned data.
Fast and Accurate Floating Point Summation with Application to Computational Geometry
TLDR
The results show that in the absence of massive cancellation (the most common case) the cost of guaranteed accuracy is about 30–40% more than the straightforward summation, and the accurate summation algorithm improves the existing algorithm by a factor of two on a nearly coplanar set of points.
Adaptive Precision Floating-Point Arithmetic and Fast Robust Geometric Predicates
TLDR
This article offers fast software-level algorithms for exact addition and multiplication of arbitrary precision floating-point values and proposes a technique for adaptive precision arithmetic that can often speed these algorithms when they are used to perform multiprecision calculations that do not always require exact arithmetic, but must satisfy some error bound.
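
The exact addition mentioned in this entry can be illustrated by Shewchuk-style expansion arithmetic: a value is kept as a sum of nonoverlapping floating-point components, and adding one more float is a chain of error-free transformations. A rough sketch, with the usual TwoSum helper repeated for self-containment:

```python
def two_sum(a: float, b: float) -> tuple[float, float]:
    s = a + b
    t = s - a
    return s, (a - (s - t)) + (b - t)

def grow_expansion(e: list[float], b: float) -> list[float]:
    """Add the float b to an expansion e (components in increasing magnitude);
    the returned components sum exactly to sum(e) + b."""
    q, out = b, []
    for comp in e:
        q, h = two_sum(q, comp)
        out.append(h)
    out.append(q)
    return out
```

In the geometric predicates this is combined with an adaptive strategy that stops refining once a forward error bound already determines the sign being tested.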
On properties of floating point arithmetics: numerical stability and the cost of accurate computations
TLDR
It is concluded that programmers and theorists alike must be willing to adopt a more sophisticated view of floating point arithmetic, even if only to consider that more accurate and reliable computations than those presently in common use might be possible based on stronger hypotheses than are customarily assumed.
Algorithms for arbitrary precision floating point arithmetic
  • Douglas M. Priest
  • Computer Science
    [1991] Proceedings 10th IEEE Symposium on Computer Arithmetic
  • 1991
TLDR
The author presents techniques for performing computations of very high accuracy using only straightforward floating-point arithmetic operations of limited precision, and an algorithm is presented which computes the intersection of a line and a line segment.