IEEE 754: An Interview with William Kahan

  • Charles R. Severance
If you were a programmer using floating-point computations in the 1960s and 1970s, you had to cope with a wide variety of configurations, with each computer supporting a different range and accuracy for floating-point numbers. While most of these differences were merely annoying, some were very serious. One computer, for example, might have values that behaved as non-zero for additions but behaved as zero for division. Sometimes a programmer had to multiply all values by 1.0 or…
Floating-Point Formats and Environment
This chapter is a revision and merge of the earlier IEEE 754-1985 [12] and IEEE 854-1987 [13] standards and focuses on the floating-point arithmetic standard.
Combined Binary and Decimal Floating-Point Unit
A novel decimal fused multiply-add (FMA) based floating-point unit is developed and combined with a known binary FMA algorithm; results show that the latencies of the binary and decimal paths are comparable to current solutions, but the area used is much larger than that of the individual units.
Algorithms and Arithmetic: Choose Wisely
  • G. Constantinides
  • Computer Science
    2017 IEEE 24th Symposium on Computer Arithmetic (ARITH)
  • 2017
This framework shows the reader why designers should think carefully about appropriate data representations when building custom hardware for compute, and makes clear the link between these representation decisions and algorithmic ones.
Low-Cost Microarchitectural Support for Improved Floating-Point Accuracy
The residual register dramatically simplifies the code, providing both lower latency and better instruction-level parallelism.
Precision analysis for hardware acceleration of numerical algorithms
A new method is presented to calculate tight bounds for the error or range of any variable within an algorithm, taking into account both input ranges and finite-precision effects; the resulting bounds are shown to be, in general, tighter than those of existing methods.
Techniques and tools for implementing IEEE 754 floating-point arithmetic on VLIW integer processors
Key points include a hierarchical description of function evaluation algorithms, the exploitation of the standard encoding of floating-point data, the automatic generation of fast and accurate polynomial evaluation schemes, and some compiler optimizations.
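As a generic illustration of a polynomial evaluation scheme of the kind such generators produce (this is a plain Python sketch, not the paper's generated code), Horner's rule evaluates a degree-n polynomial with one multiply-add per coefficient, which maps naturally onto FMA-style integer or floating-point pipelines:

```python
def horner(coeffs, x):
    """Evaluate c0 + c1*x + ... + cn*x**n by Horner's rule.
    coeffs are ordered from the constant term upward."""
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * x + c  # one multiply-add per coefficient
    return acc

print(horner([1.0, 2.0, 3.0], 2.0))  # 1 + 2*2 + 3*4 = 17.0
```

Horner's rule is only one of the evaluation schemes such tools consider; schemes with more parallelism (e.g. Estrin's) trade extra multiplications for shorter dependency chains on VLIW processors.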
Towards fast and certified multiple-precision libraries
A new arithmetic library is presented that offers sufficient precision, is fast, and is also certified; it targets ill-posed semi-definite positive optimization problems that appear in quantum chemistry and quantum information.
Trusting Floating Point Benchmarks - Are Your Benchmarks Really Data Independent?
It is observed that even a small fraction of denormal numbers in a textbook benchmark significantly increases the execution time of the benchmark, leading to wrong conclusions about the relative efficiency of different hardware architectures and about the scalability of a cluster benchmark.
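A quick Python sketch (not from the paper) of how denormal (subnormal) values arise: any result that underflows below the smallest normal double is kept nonzero by gradual underflow, and on many CPUs operating on such values falls back to a much slower microcoded path:

```python
import sys

tiny = sys.float_info.min        # smallest positive normal double, 2**-1022
sub = tiny / 4.0                 # 2**-1024: below the normal range, stored as a subnormal
print(sub > 0.0)                 # True: gradual underflow keeps it nonzero
print(sub < sys.float_info.min)  # True: smaller than any normal number
```

A benchmark whose working data drifts into this subnormal range can thus slow down for numerical rather than architectural reasons, which is the effect the paper measures.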
Definitions and Basic Notions
The purpose of this chapter is to deal with basic problems: rounding, exceptions, properties of real arithmetic that no longer hold in floating-point arithmetic, best choices for the radix, and radix conversions.
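As a minimal example (in Python, not taken from the chapter) of a real-arithmetic property that fails in floating point, addition is not associative: the two groupings below round differently.

```python
# Associativity holds for real numbers but fails in binary floating point:
a = (0.1 + 0.2) + 0.3   # 0.6000000000000001
b = 0.1 + (0.2 + 0.3)   # 0.6
print(a == b)           # False
```

Rounding each intermediate sum to the nearest representable double is what breaks the identity, which is why summation order matters in floating-point code.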
A Redundant Digit Floating Point System
The work presented in this thesis proposes several techniques to improve the effectiveness of floating-point arithmetic units, developing and applying a time-delay model to analytically predict their performance.