Programming pearls: algorithm design techniques

  • Jon Louis Bentley
  • Published 1 September 1984
  • Computer Science
  • Communications of the ACM
The September 1983 column described the "everyday" impact that algorithm design can have on programmers: an algorithmic view of a problem gives insights that may make a program simpler to understand and to write. In this column we'll study a contribution of the field that is less frequent but more impressive: sophisticated algorithmic methods sometimes lead to dramatic performance improvements. This column is built around one small problem, with an emphasis on the algorithms that solve it…
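The one small problem the column is built around is the maximum-subsequence-sum problem, for which it presents algorithms ranging from cubic to linear time. A minimal sketch of the linear-time scan (the function name is ours):

```python
def max_subarray_sum(xs):
    """Largest sum over any contiguous (possibly empty) run of xs, in O(n)."""
    best = ending_here = 0
    for x in xs:
        # Best run ending at x: extend the previous run, or start over at zero.
        ending_here = max(ending_here + x, 0)
        best = max(best, ending_here)
    return best
```

On the column's example input this finds the run 59, 26, −53, 58, 97:

```python
max_subarray_sum([31, -41, 59, 26, -53, 58, 97, -93, -23, 84])  # → 187
```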


An introduction to STSC's APL compiler
Although many programs can be written efficiently in APL without explicit iteration, some absolutely require it, and such iterative code neither makes the language easier to use nor enhances programmer productivity.
Algorithm Design and Applications
Algorithm Design and Applications, by Michael T. Goodrich & Roberto Tamassia teaches students about designing and using algorithms, illustrating connections between topics being taught and their potential applications, increasing engagement.
How Fast Do Algorithms Improve? [Point of View]
It is unclear how broadly conclusions, such as PCAST’s, based on data from progress in linear solvers are representative of algorithms in general.
Efficiency in the APL environment—a full arsenal for attacking CPU hogs
This paper is about the quest for efficiency when using APL, and how the full arsenal of APL tools can be applied to achieve any desired efficiency while retaining most of the advantages of the APL environment for developing software.
Staged methodologies for parallel programming
A parallel programming model based on the gradual introduction of implementation detail that comprises a series of decision stages that each fix a different facet of the implementation, allowing more control and freedom of expression than typical high-level treatments of parallelism.
Partitioning tasks between a pair of interconnected heterogeneous processors: A case study
  • D. Lilja
  • Computer Science
    Concurr. Pract. Exp.
  • 1995
This paper shows how a programmer or a compiler can use a model of a heterogeneous system to determine the machine on which each subtask should be executed, and relates the relative performance of two heterogeneous machines to the communication time required to transfer partial results across their interconnection network.
Automatic inversion generates divide-and-conquer parallel programs
This paper proposes and implements a novel system that can automatically derive cost-optimal list homomorphisms from a pair of sequential programs, based on the third homomorphism theorem, and shows that a weak right inverse always exists and can be automatically generated from a wide class of sequential programs.
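A list homomorphism is a function that can be split across list concatenation, which is what makes divide-and-conquer (and hence parallel) evaluation possible. A hedged illustration, unrelated to the paper's actual derivation system: maximum prefix sum, tupled with the total sum, is a homomorphism (names ours):

```python
def mps_hom(xs):
    """Return (max prefix sum, total sum) of xs by divide-and-conquer.

    h(xs ++ ys) is computable from h(xs) and h(ys) alone, so the two
    halves could be evaluated in parallel.
    """
    if len(xs) <= 1:
        s = xs[0] if xs else 0
        return (max(s, 0), s)           # empty prefix counts, so mps >= 0
    mid = len(xs) // 2
    m1, s1 = mps_hom(xs[:mid])
    m2, s2 = mps_hom(xs[mid:])
    # Combine: a best prefix either stays in the left half
    # or runs through all of it and into the right half.
    return (max(m1, s1 + m2), s1 + s2)
```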
A Compositional Framework for Developing Parallel Programs on Two-Dimensional Arrays
This paper proposes a compositional framework that supports users, even with little knowledge about parallel machines, to develop both correct and efficient parallel programs on dense two-dimensional arrays systematically.
Study of Parallel Algorithms Related to Subsequence Problems
The primary purpose of this work is to study, implement and analyze the performance of parallel algorithms related to subsequence problems, including the string-to-string correction problem…


An introduction to algorithm design
This paper surveys the field of algorithm design in two ways: first by the study of a few problems in detail, and then by a systematic view of the field.
A general method for solving divide-and-conquer recurrences
A unifying method for solving recurrence relations of the form T(n) = kT(n/c) + f(n) is described that is both general in applicability and easy to apply.
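Recurrences of this form can also be checked numerically. A small sketch (names and the base case T(1) = 1 are our assumptions) that evaluates T(n) directly; for k = c = 2 and f(n) = n it reproduces the mergesort-style solution T(n) = n·log₂n + n:

```python
def solve_recurrence(n, k=2, c=2, f=lambda n: n):
    """Evaluate T(n) = k*T(n/c) + f(n) with T(1) = 1, for n a power of c."""
    return 1 if n <= 1 else k * solve_recurrence(n // c, k, c, f) + f(n)
```

For example T(8) = 2·T(4) + 8 = 2·(2·T(2) + 4) + 8 = 32, matching 8·3 + 8.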
Worst-Case Performance Bounds for Simple One-Dimensional Packing Algorithms
This work examines the performance of a number of simple algorithms which obtain “good” placements and shows that neither the first-fit nor the best-fit algorithm will ever use more than $\frac{17}{10}L^* + 2$ bins.
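A sketch of the first-fit heuristic the bound applies to (not the paper's code; unit-capacity bins assumed):

```python
def first_fit(items, capacity=1.0):
    """Place each item in the first open bin with room, opening a new bin
    when none fits; return the number of bins used.  In the worst case
    first-fit uses at most 17/10 times the optimal number of bins, plus 2."""
    bins = []  # remaining free space per open bin
    for size in items:
        for i, free in enumerate(bins):
            if size <= free + 1e-12:   # small tolerance for float sizes
                bins[i] = free - size
                break
        else:
            bins.append(capacity - size)
    return len(bins)
```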
A More Portable Fortran Random Number Generator
The program described here is an implementation of the generator described by Lewis et al. and indirectly attributed to D.H. Lehmer, and produces a sequence of positive integers, IX, by the recursion.
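The Lewis–Goodman–Miller generator referred to here is the multiplicative congruential recursion IX′ = 16807·IX mod (2³¹ − 1). A sketch in Python, where arbitrary-precision integers make the paper's portability tricks (avoiding 32-bit overflow in Fortran) unnecessary:

```python
M = 2**31 - 1   # Mersenne prime modulus
A = 16807       # 7**5, the Lewis-Goodman-Miller multiplier

def lehmer_next(ix):
    """One step of the multiplicative congruential recursion IX' = A*IX mod M."""
    return (A * ix) % M
```

A well-known check for this generator: starting from seed 1, the 10,000th value produced is 1043618065.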
Dynamic programming.
The more the authors study the information processing aspects of the mind, the more perplexed and impressed they become, and it will be a very long time before they understand these processes sufficiently to reproduce them.
The art of computer programming. Vol.2: Seminumerical algorithms
Volume 2 of Knuth’s The Art of Computer Programming covers seminumerical algorithms, including random number generation and floating-point and multiple-precision arithmetic.
Gaussian elimination is not optimal
Below we will give an algorithm which computes the coefficients of the product of two square matrices A and B of order n from the coefficients of A and B with less than $4.7 \cdot n^{\log_2 7}$ arithmetical operations.
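Strassen's trick is to multiply 2×2 block matrices with 7 recursive multiplications instead of 8, giving the $n^{\log_2 7} \approx n^{2.81}$ bound. A pure-Python sketch for power-of-two sizes (variable names ours; a practical version would fall back to the naive product below some cutoff):

```python
def strassen(A, B):
    """Multiply two n x n matrices (n a power of two, as lists of lists)
    using Strassen's 7 products per level instead of 8."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    def quad(M):  # split M into four h x h blocks
        return ([r[:h] for r in M[:h]], [r[h:] for r in M[:h]],
                [r[:h] for r in M[h:]], [r[h:] for r in M[h:]])
    def add(X, Y): return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    def sub(X, Y): return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    a, b, c, d = quad(A)
    e, f, g, hh = quad(B)
    p1 = strassen(a, sub(f, hh))          # the seven recursive products
    p2 = strassen(add(a, b), hh)
    p3 = strassen(add(c, d), e)
    p4 = strassen(d, sub(g, e))
    p5 = strassen(add(a, d), add(e, hh))
    p6 = strassen(sub(b, d), add(g, hh))
    p7 = strassen(sub(a, c), add(e, f))
    # Reassemble the four result blocks from sums of the products.
    top = [r1 + r2 for r1, r2 in zip(add(sub(add(p5, p4), p2), p6), add(p1, p2))]
    bot = [r1 + r2 for r1, r2 in zip(add(p3, p4), sub(sub(add(p1, p5), p3), p7))]
    return top + bot
```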
A Lower Bound for On-Line Bin Packing
Algorithm 97: Shortest path
The procedure was originally programmed in FORTRAN for the Control Data 160 desk-size computer and was limited to iteration because subroutine recursiveness in Control Data 160 FORTRAN has been held down to four levels in the interests of economy.
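Algorithm 97 is Floyd's all-pairs shortest-path method: for each vertex k in turn, relax every pair (i, j) through k. An iterative sketch (names ours), matching the abstract's note that no recursion is needed:

```python
def floyd_shortest_paths(dist):
    """All-pairs shortest paths, updating dist in place and returning it.
    dist[i][j] is the direct edge length, float('inf') if there is no edge."""
    n = len(dist)
    for k in range(n):              # allow vertex k as an intermediate stop
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```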
Comments on "The Relationship Between Multivalued Switching Algebra and Boolean Algebra Under Different Definitions of Complement"
  • G. Epstein
  • Mathematics, Computer Science
    IEEE Trans. Computers
  • 1973
The saving in computation and improvement in accuracy that can result from the use of this algorithm can be quite significant for chain products of large arrays and in iterative solutions of matrix equations involving chain products.
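The standard technique behind such savings is the matrix-chain-ordering dynamic program, which chooses the cheapest parenthesization of a chain product. A hedged sketch of that classic algorithm (not necessarily this paper's exact method; names ours):

```python
def matrix_chain_cost(dims):
    """Minimum scalar multiplications to compute A1*A2*...*Ak, where Ai
    has shape dims[i-1] x dims[i].  Classic O(k^3) dynamic program."""
    k = len(dims) - 1
    cost = [[0] * (k + 1) for _ in range(k + 1)]
    for span in range(2, k + 1):            # chain lengths 2..k
        for i in range(1, k - span + 2):
            j = i + span - 1
            # Try every split point s between Ai..As and A(s+1)..Aj.
            cost[i][j] = min(
                cost[i][s] + cost[s + 1][j] + dims[i - 1] * dims[s] * dims[j]
                for s in range(i, j))
    return cost[1][k]
```

For dims [10, 100, 5, 50], multiplying (A1·A2)·A3 costs 7,500 scalar multiplications versus 75,000 the other way, the kind of order-of-magnitude saving the abstract describes.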