Corpus ID: 235829613

Faster Math Functions, Soundly

@article{Briggs2021FasterMF,
  title={Faster Math Functions, Soundly},
  author={Ian Briggs and Pavel Panchekha},
  journal={ArXiv},
  year={2021},
  volume={abs/2107.05761}
}
Standard library implementations of functions like sin and exp optimize for accuracy, not speed, because they are intended for general-purpose use. But applications tolerate inaccuracy from cancellation, rounding error, and singularities—sometimes even very high error—and many applications could tolerate error in function implementations as well. This raises an intriguing possibility: speeding up numerical code by tuning standard function implementations. This paper thus introduces OpTuner, an…
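To make the idea concrete, here is a minimal C sketch of the trade-off described above: replacing a call to libm's sin with a cheaper, less accurate approximation when the application can tolerate the extra error. The polynomial, its valid range, and the quoted error bound are illustrative assumptions, not implementations selected by OpTuner.

#include <math.h>
#include <stdio.h>

/* Illustrative low-accuracy sine: a degree-5 Taylor polynomial, assumed valid
 * only on [-pi/4, pi/4]. The truncation error is bounded by |x|^7 / 5040,
 * i.e. below roughly 4e-5 on this range (a hand-derived bound, not OpTuner's). */
static double sin_fast(double x) {
    double x2 = x * x;
    return x * (1.0 - x2 / 6.0 + x2 * x2 / 120.0);
}

int main(void) {
    double x = 0.5;  /* inside the assumed valid range */
    printf("libm sin:  %.17g\n", sin(x));
    printf("sin_fast:  %.17g\n", sin_fast(x));
    printf("abs error: %.3g\n", fabs(sin(x) - sin_fast(x)));
    return 0;
}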


References

Showing 1-10 of 58 references
Sound Approximation of Programs with Elementary Functions
TLDR
This work presents a fully automated approach and tool that approximates elementary function calls inside small programs while guaranteeing overall user-provided error bounds; it leverages existing techniques for round-off error computation and for approximating individual elementary function calls, and provides automated selection of many parameters.
Sound Mixed-Precision Optimization with Rewriting
TLDR
This work presents the first fully automated and sound technique and tool for optimizing the performance of floating-point and fixed-point arithmetic kernels, and shows that when mixed-precision tuning and rewriting are designed and applied together, they can provide higher performance improvements than either alone.
Automatically improving accuracy for floating point expressions
TLDR
Herbie is a tool that automatically discovers the rewrites experts perform to improve accuracy; its heuristic search estimates and localizes rounding error using sampled points (rather than static error analysis), applies a database of rules to generate improvements, takes series expansions, and combines improvements for different input regions.
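As a small illustration of the style of rewrite such a tool targets, the expression sqrt(x+1) - sqrt(x) loses accuracy to cancellation for large x, while the algebraically equivalent 1/(sqrt(x+1) + sqrt(x)) does not. The C sketch below compares the two; this specific pair is a standard cancellation example, used here as an assumed illustration rather than as Herbie's literal output.

#include <math.h>
#include <stdio.h>

/* Naive form: catastrophic cancellation when x is large. */
static double naive(double x)     { return sqrt(x + 1.0) - sqrt(x); }

/* Rewritten form: algebraically equal, but avoids the cancellation. */
static double rewritten(double x) { return 1.0 / (sqrt(x + 1.0) + sqrt(x)); }

int main(void) {
    double x = 1e15;
    printf("naive:     %.17g\n", naive(x));
    printf("rewritten: %.17g\n", rewritten(x));
    return 0;
}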
Rigorous floating-point mixed-precision tuning
TLDR
This work presents a rigorous approach to precision allocation based on formal analysis via Symbolic Taylor Expansions, and error analysis based on interval functions, implemented in an automated tool called FPTuner that generates and solves a quadratically constrained quadratic program to obtain a precision-annotated version of the given expression.
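For intuition, mixed-precision tuning assigns a (possibly different) precision to each operation subject to an overall error bound. The C sketch below shows a hypothetical assignment for (a + b) * c; the split between double and float is an assumption chosen for illustration, not FPTuner's actual output.

#include <stdio.h>

/* Hypothetical precision assignment for e = (a + b) * c:
 * the addition stays in double, the multiplication is demoted to float.
 * This particular split is assumed for illustration only. */
static float expr_mixed(double a, double b, double c) {
    double s = a + b;           /* higher precision where cancellation may occur */
    return (float)s * (float)c; /* lower precision where the error budget allows */
}

static double expr_double(double a, double b, double c) {
    return (a + b) * c;
}

int main(void) {
    double a = 1.0e-3, b = 2.5, c = 4.0 / 3.0;
    printf("all double: %.17g\n", expr_double(a, b, c));
    printf("mixed:      %.17g\n", (double)expr_mixed(a, b, c));
    return 0;
}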
Stochastic optimization of floating-point programs with tunable precision
TLDR
This work demonstrates the ability to generate reduced-precision implementations of Intel's handwritten C numeric library that are up to 6 times faster than the original code, and achieves end-to-end speedups by optimizing kernels that can tolerate a loss of precision while still remaining correct.
Precimonious: Tuning assistant for floating-point precision
TLDR
Precimonious is a dynamic program analysis tool to assist developers in tuning the precision of floating-point programs; it recommends a type instantiation that uses lower precision while producing an accurate enough answer without causing exceptions.
Moving the Needle on Rigorous Floating-Point Precision Tuning
TLDR
This position paper summarizes recent progress achieved in the community on floating-point precision tuning, and showcases the component techniques present within the FPTuner framework, essentially offering a collection of “grab and go” tools that others can benefit from.
Accuracy and stability of numerical algorithms
TLDR
This book gives a thorough, up-to-date treatment of the behavior of numerical algorithms in finite precision arithmetic by combining algorithmic derivations, perturbation theory, and rounding error analysis.
Rigorous Estimation of Floating-Point Round-off Errors with Symbolic Taylor Expansions
TLDR
A new approach called Symbolic Taylor Expansions is developed that avoids the difficulty of tightly estimating round-off errors, and a new tool called FPTaylor is implemented embodying this approach, using rigorous global optimization instead of the more familiar interval arithmetic, affine arithmetic, and/or SMT solvers.
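For reference, Symbolic Taylor Expansions build on the standard floating-point rounding model; the LaTeX sketch below states that model and the first-order expansion in the per-operation error terms. The notation is the usual textbook form, assumed here rather than copied from the paper.

% Standard rounding model: each operation introduces a relative error e and,
% for subnormal results, an absolute error d.
\[
  \mathrm{fl}(x \circ y) = (x \circ y)(1 + e) + d,
  \qquad |e| \le \epsilon, \quad |d| \le \delta.
\]
% Viewing the rounded program as a function F(x, e, d) of the inputs and the
% per-operation error terms and expanding to first order gives
\[
  F(\mathbf{x}, \mathbf{e}, \mathbf{d}) \approx F(\mathbf{x}, \mathbf{0}, \mathbf{0})
  + \sum_i \frac{\partial F}{\partial e_i}\bigg|_{\mathbf{e}=\mathbf{d}=\mathbf{0}} e_i
  + \sum_j \frac{\partial F}{\partial d_j}\bigg|_{\mathbf{e}=\mathbf{d}=\mathbf{0}} d_j ,
\]
% so the worst-case round-off error is bounded by maximizing the magnitude of
% these first-order terms (plus a rigorous remainder bound) via global optimization.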
Floating-Point Precision Tuning Using Blame Analysis
TLDR
This work presents Blame Analysis, a novel dynamic approach that speeds up precision tuning of floating-point programs by determining the precision of all operands such that a given precision is achieved in the final result of the program.