Automatically improving accuracy for floating point expressions

@article{Panchekha2015AutomaticallyIA,
  title={Automatically improving accuracy for floating point expressions},
  author={Pavel Panchekha and Alex Sanchez-Stern and James R. Wilcox and Zachary Tatlock},
  journal={Proceedings of the 36th ACM SIGPLAN Conference on Programming Language Design and Implementation},
  year={2015}
}
Scientific and engineering applications depend on floating point arithmetic to approximate real arithmetic. This approximation introduces rounding error, which can accumulate to produce unacceptable results. While the numerical methods literature provides techniques to mitigate rounding error, applying these techniques requires manually rearranging expressions and understanding the finer details of floating point arithmetic. We introduce Herbie, a tool which automatically discovers the rewrites… 
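As a concrete illustration of the kind of rounding-error problem and algebraic rewrite described above, consider the expression sqrt(x+1) - sqrt(x) for large x: the two square roots agree in most of their leading digits, so the subtraction cancels them away. The sketch below is an illustrative example using only standard library functions, not code taken from the paper; it contrasts the naive form with a mathematically equivalent rearrangement of the sort a tool like Herbie can discover.

import math

def naive(x):
    # Direct evaluation: for large x, sqrt(x + 1) and sqrt(x) are nearly
    # equal, and subtracting them cancels most significant digits.
    return math.sqrt(x + 1) - math.sqrt(x)

def rewritten(x):
    # Multiplying by the conjugate gives an algebraically equal form
    # that avoids the cancellation entirely.
    return 1.0 / (math.sqrt(x + 1) + math.sqrt(x))

x = 1e15
print(naive(x))      # only a digit or two survives the cancellation
print(rewritten(x))  # approximately 1.5811e-8, accurate to double precision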

Citations

Accelerating Accuracy Improvement for Floating Point Programs via Memory Based Pruning
TLDR
A heuristic pruning strategy called Memory Based Pruning is introduced to accelerate program-rewriting techniques that automatically improve accuracy for floating point programs, and the efficiency of HMBP is compared with that of the well-known accuracy-improving tool Herbie.
Efficient automated repair of high floating-point errors in numerical libraries
TLDR
Experimental results show that the proposed approach for efficient automated repair of high floating-point errors in numerical libraries can efficiently repair (with 100% accuracy over all randomly sampled points) high floating-point errors for 19 of the 20 numerical programs.
Finding root causes of floating point error
TLDR
Herbgrind is presented, a tool to help developers identify and address root causes in numerical code written in low-level languages like C/C++ and Fortran, and scales to applications spanning hundreds of thousands of lines.
Efficient Generation of Error-Inducing Floating-Point Inputs via Symbolic Execution
TLDR
This paper defines inaccuracy checks to detect large precision loss and cancellation at strategic program locations to construct specialized branches that, when covered by a given input, are likely to lead to large errors in the result.
Rigorous floating-point mixed-precision tuning
TLDR
This work presents a rigorous approach to precision allocation based on formal analysis via Symbolic Taylor Expansions, and error analysis based on interval functions, implemented in an automated tool called FPTuner that generates and solves a quadratically constrained quadratic program to obtain a precision-annotated version of the given expression.
An approach to generate correctly rounded math libraries for new floating point variants
TLDR
This paper proposes a novel approach for generating polynomial approximations that can be used to implement correctly rounded math libraries and has developed correctly rounded, yet faster, implementations of elementary functions for multiple target representations.
Finding Root Causes of Floating Point Error with Herbgrind
TLDR
Herbgrind is presented, a tool to help developers identify and address root causes in numerical code written in low-level C/C++ and Fortran, and dynamically tracks dependencies between operations and program outputs to avoid false positives.
Rigorous Estimation of Floating-Point Round-Off Errors with Symbolic Taylor Expansions
TLDR
FPTaylor estimates round-off errors within much tighter bounds than other tools on a significant number of case studies, thus contributing to future studies and tool development in this area.

References

Showing 1-10 of 53 references
Efficient search for inputs causing high floating-point errors
TLDR
This paper develops a heuristic search algorithm called Binary Guided Random Testing (BGRT) and shows that, while concrete-testing-based error estimation methods that maintain shadow values at higher precision can locate high error-inducing inputs, suitable heuristic-search guidance is key to finding higher errors.
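The "shadow values at higher precision" idea mentioned above can be sketched roughly as follows. This is a minimal illustration of higher-precision shadow evaluation combined with plain random sampling; the expression, names, and sampling range are assumptions made for the example, and the paper's BGRT algorithm guides the search far more cleverly than this baseline.

import numpy as np

def shadow_error(f, x):
    # Evaluate f at working precision (float32) and alongside it at a
    # higher "shadow" precision (float64); their difference estimates
    # the rounding error incurred at working precision.
    low = f(np.float32(x))
    high = f(np.float64(x))
    return abs(float(low) - float(high))

# Hypothetical expression under test: cancellation-prone for large x.
expr = lambda x: np.sqrt(x + 1) - np.sqrt(x)

# Baseline random testing: sample inputs and keep the worst error seen.
rng = np.random.default_rng(0)
worst = max(shadow_error(expr, x) for x in rng.uniform(1.0, 1e6, size=1000))
print(worst)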
Certification of bounds on expressions involving rounded operators
TLDR
Gappa is a tool designed to formally verify the correctness of numerical software and hardware; it generates a theorem and its proof for each verified enclosure and relies on a large companion library of supporting facts.
Stochastic optimization of floating-point programs with tunable precision
TLDR
The ability to generate reduced-precision implementations of Intel's handwritten C numeric library that are up to 6 times faster than the original code is demonstrated, along with end-to-end speedups obtained by optimizing kernels that can tolerate a loss of precision while still remaining correct.
Rigorous Estimation of Floating-Point Round-off Errors with Symbolic Taylor Expansions
TLDR
A new approach called Symbolic Taylor Expansions is developed, and a new tool called FPTaylor embodying this approach is implemented; it uses rigorous global optimization instead of the more familiar interval arithmetic, affine arithmetic, and/or SMT solvers.
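Roughly, and as a paraphrase of the general technique rather than the tool's exact formulation: each floating-point operation is modeled with the standard rounding model

\mathrm{fl}(x \circ y) = (x \circ y)(1 + \delta), \qquad |\delta| \le \epsilon,

so the computed result becomes a function F(x, \delta_1, \dots, \delta_k) of the inputs and the per-operation errors, and a first-order Taylor expansion in the \delta_i bounds the round-off error:

\left| F(x, \boldsymbol{\delta}) - F(x, \mathbf{0}) \right| \le \epsilon \sum_{i=1}^{k} \left| \frac{\partial F}{\partial \delta_i}(x, \mathbf{0}) \right| + O(\epsilon^2).

The partial-derivative terms are then maximized over the input domain with rigorous global optimization to obtain a sound worst-case bound.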
Precimonious: Tuning assistant for floating-point precision
TLDR
Precimonious is a dynamic program analysis tool that assists developers in tuning the precision of floating-point programs; it recommends a type instantiation that uses lower precision while producing an answer that is accurate enough and does not cause exceptions.
A dynamic program analysis to find floating-point accuracy problems
TLDR
This paper presents a dynamic program analysis that supports the programmer in finding accuracy problems and uses binary translation to perform every floating-point computation side by side in higher precision and a lightweight slicing approach to track the evolution of errors.
Program transformation for numerical precision
TLDR
A semantics-based transformation in the abstract interpretation framework is proposed; it aims at rewriting pieces of numerical code to obtain results closer to what the computer would output if it used exact arithmetic.
Accuracy and stability of numerical algorithms
TLDR
This book gives a thorough, up-to-date treatment of the behavior of numerical algorithms in finite precision arithmetic by combining algorithmic derivations, perturbation theory, and rounding error analysis.
Synthesizing accurate floating-point formulas
A. Ioualalen, M. Martel. 2013 IEEE 24th International Conference on Application-Specific Systems, Architectures and Processors, 2013.
TLDR
This article focuses on the synthesis of accurate formulas that are mathematically equal to the original formulas occurring in source code, and addresses the problem of selecting an accurate formula among all the expressions of an APEG.
Sound compilation of reals
TLDR
This work presents a programming model where the user writes a program in a real-valued implementation and specification language that explicitly includes different types of uncertainties, and presents a compilation algorithm that generates a finite-precision implementation that is guaranteed to meet the desired precision with respect to real numbers.