# IEEE 754: An Interview with William Kahan

```bibtex
@article{Severance1998IEEE7A,
  title={IEEE 754: An Interview with William Kahan},
  author={Charles R. Severance},
  journal={Computer},
  year={1998},
  volume={31},
  pages={114-115}
}
```

Standards. If you were a programmer using floating-point computations in the 1960s and 1970s, you had to cope with a wide variety of configurations, with each computer supporting a different range and accuracy for floating-point numbers. While most of these differences were merely annoying, some were very serious. One computer, for example, might have values that behaved as non-zero for additions but behaved as zero for division. Sometimes a programmer had to multiply all values by 1.0 or…
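The consistency that IEEE 754 later guaranteed can be illustrated with a short Python sketch (the specific values are mine and purely illustrative): under the standard's gradual underflow, two distinct doubles always have a nonzero difference, and that difference behaves the same way in addition and in division, unlike the pre-standard machines described above.

```python
# A minimal sketch of IEEE 754 gradual underflow (values are illustrative).
# Pre-standard machines could have numbers that acted as nonzero in addition
# but as zero in division; under 754, subnormal numbers keep arithmetic
# consistent: x != y implies y - x != 0, and that difference divides sanely.
x = 2.5e-308          # near the smallest normal double (~2.2e-308)
y = 2.6e-308
diff = y - x          # exact by Sterbenz's lemma; lands in the subnormal range
assert diff != 0.0            # nonzero in comparison and addition...
assert x + diff == y
assert diff / diff == 1.0     # ...and nonzero in division, too
```

Without subnormals (a "flush to zero" machine), `diff` would round to 0.0 and the final division would fail, which is exactly the class of surprise the standard was written to eliminate.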

## 29 Citations

Floating-Point Formats and Environment

- Computer Science
- 2010

This chapter focuses on the floating-point arithmetic standard, a revision and merge of the earlier IEEE 754-1985 [12] and IEEE 854-1987 [13] standards.

Combined Binary and Decimal Floating-Point Unit

- Computer Science
- 2008

A novel decimal fused multiply-add (FMA) based floating-point unit is developed and combined with a known binary FMA algorithm, and results show that the latencies for the binary and decimal paths are comparable to current solutions, but the area used is much larger than the individual units.

Algorithms and Arithmetic: Choose Wisely

- Computer Science
- 2017 IEEE 24th Symposium on Computer Arithmetic (ARITH)
- 2017

This framework shows the reader why designers should think carefully about appropriate data representations when building custom compute hardware, and makes clear the link between those representation decisions and algorithmic ones.

Low-Cost Microarchitectural Support for Improved Floating-Point Accuracy

- Computer Science
- IEEE Computer Architecture Letters
- 2007

The residual register dramatically simplifies the code, providing both lower latency and better instruction-level parallelism.

Precision analysis for hardware acceleration of numerical algorithms

- Computer Science
- 2011

A new method is presented to calculate tight bounds for the error or range of any variable within an algorithm, taking into account both input ranges and finite-precision effects; the bounds are shown to be generally tighter than those of existing methods.

Techniques and tools for implementing IEEE 754 floating-point arithmetic on VLIW integer processors

- Computer Science
- PASCO
- 2010

Key points include a hierarchical description of function evaluation algorithms, the exploitation of the standard encoding of floating-point data, the automatic generation of fast and accurate polynomial evaluation schemes, and some compiler optimizations.

Towards fast and certified multiple-precision libraries

- Computer Science
- 2017

A new arithmetic library is presented that offers sufficient precision, is fast, and is certified; it targets ill-posed semi-definite positive optimization problems that appear in quantum chemistry and quantum information.

Trusting Floating Point Benchmarks - Are Your Benchmarks Really Data Independent?

- Computer Science
- PARA
- 2006

It is observed that even a small fraction of denormal numbers in a textbook benchmark significantly increases the execution time of the benchmark, leading to the wrong conclusions about the relative efficiency of different hardware architectures and about scalability problems of a cluster benchmark.
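The effect described here is easy to demonstrate in outline. The following sketch (function and variable names are my own) times the same accumulation loop on a normal and a subnormal operand; on hardware that handles denormals via slow microcoded paths the second call can be markedly slower, though interpreter overhead may mask the gap, so no particular ratio is asserted.

```python
import timeit

# Hedged sketch: operands below ~2.2e-308 are subnormal doubles, which
# many CPUs process on a much slower path than normal numbers. Running
# the identical loop on both kinds of operand shows how data values, not
# just code, can change a benchmark's timing.
def accumulate(x, n=100_000):
    """Sum n copies of x * 0.5 -- same instruction mix for any x."""
    s = 0.0
    for _ in range(n):
        s += x * 0.5
    return s

normal_time = timeit.timeit(lambda: accumulate(1.0), number=10)
subnormal_time = timeit.timeit(lambda: accumulate(1e-310), number=10)
print(f"normal: {normal_time:.4f}s  subnormal: {subnormal_time:.4f}s")
```

A benchmark whose inputs drift into the subnormal range can therefore report very different timings on otherwise comparable hardware, which is the data-dependence trap the paper describes.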

Definitions and Basic Notions

- Computer Science
- 2010

The purpose of this chapter is to deal with basic problems: rounding, exceptions, properties of real arithmetic that become wrong in floating-point arithmetic, best choices for the radix, and radix conversions.

A Redundant Digit Floating Point System

- Engineering, Computer Science
- 2003

The work presented in this thesis proposes several techniques to improve the effectiveness of floating point arithmetic units by developing and applying a time delay model to analytically predict the performance of the floating point units.