Harry Berryman

Run time preprocessing plays a major role in many efficient algorithms in computer science, as well as in exploiting multiprocessor architectures. We give examples that elucidate the importance of run time preprocessing and show how these optimizations can be integrated into compilers. To support our arguments, we describe …
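The kind of run time preprocessing described above is often organized as an inspector/executor pair. The following is a minimal illustrative sketch, not taken from the paper: all names and data are assumed, and the "schedule" here is simply a precomputed gather pattern for an indirectly indexed array.

```python
def inspector(index_array):
    # Inspector phase: examine the indirection pattern once, before the
    # loop executes, and precompute a schedule -- the unique elements to
    # fetch and where each loop iteration reads from.
    unique = sorted(set(index_array))
    position = {g: k for k, g in enumerate(unique)}
    schedule = [position[g] for g in index_array]
    return unique, schedule

def executor(x, y, unique, schedule):
    # Executor phase: reuse the precomputed schedule on every sweep,
    # gathering only the needed elements of x instead of re-analyzing
    # the index array each time through the loop.
    gathered = [x[g] for g in unique]
    return [y[i] + gathered[s] for i, s in enumerate(schedule)]

x = [10.0, 20.0, 30.0, 40.0]
ia = [3, 0, 3, 1]          # irregular indirection known only at run time
unique, schedule = inspector(ia)
result = executor(x, [1.0, 1.0, 1.0, 1.0], unique, schedule)
# result == [41.0, 11.0, 41.0, 21.0]
```

The payoff comes when the same index pattern is reused across many sweeps: the inspector's cost is paid once, while the executor runs every iteration.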
Sparse system solvers and general purpose codes for solving partial differential equations are examples of the many types of problems whose irregularity can result in poor performance on distributed memory machines. Often, the data structures used in these problems are very flexible. Crucial details concerning loop dependences are encoded in these …
This paper addresses the issue of compiling concurrent loop nests in the presence of complicated array references and irregularly distributed arrays. Loops may contain array accesses that make it impossible to determine the reference pattern precisely at compile time. This paper proposes a run time support mechanism that is used effectively …
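When the reference pattern cannot be resolved at compile time, one common form of run time support is a translation table that maps each global index of an irregularly distributed array to its owning processor and local offset. The sketch below is illustrative only; the names and the table layout are assumptions, not the paper's mechanism.

```python
def build_translation_table(distribution):
    # distribution[g] = processor owning global element g; the mapping
    # is irregular, so it must be recorded explicitly at run time.
    table = {}
    local_count = {}
    for g, p in enumerate(distribution):
        off = local_count.get(p, 0)
        table[g] = (p, off)          # global index -> (owner, local offset)
        local_count[p] = off + 1
    return table

def dereference(table, index_array):
    # Resolve the reference pattern seen inside a loop into
    # (owner, offset) pairs -- exactly what a compiler cannot do
    # statically for these codes.
    return [table[g] for g in index_array]

dist = [0, 2, 1, 0, 2]               # 5 elements mapped irregularly to 3 procs
table = build_translation_table(dist)
pattern = dereference(table, [4, 0, 3])
# pattern == [(2, 1), (0, 0), (0, 1)]
```

The resolved pattern can then drive the communication needed before the loop body executes.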
In the work presented here, we measured the performance of the components of the key iterative kernel of a preconditioned Krylov space iterative linear system solver. In some sense, these numbers can be regarded as best case timings for these kernels. We timed sweeps over meshes, sparse triangular solves, and inner products on a large three-dimensional …
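The three kernels named above can be sketched in a few lines using compressed sparse row (CSR) storage. This is a hedged illustration, not the paper's code: the matrix layout and values are made up, and the triangular solve assumes the diagonal is stored last in each row.

```python
def csr_matvec(ptr, idx, val, x):
    # Sweep: y = A x for a sparse matrix A in CSR form.
    return [sum(val[k] * x[idx[k]] for k in range(ptr[i], ptr[i + 1]))
            for i in range(len(ptr) - 1)]

def csr_lower_solve(ptr, idx, val, b):
    # Sparse triangular solve L y = b, forward substitution; the
    # diagonal entry is assumed stored as the last entry of each row.
    y = [0.0] * len(b)
    for i in range(len(b)):
        s = b[i]
        for k in range(ptr[i], ptr[i + 1] - 1):
            s -= val[k] * y[idx[k]]
        y[i] = s / val[ptr[i + 1] - 1]
    return y

def inner(x, y):
    # Inner product of two vectors.
    return sum(a * b for a, b in zip(x, y))

# Small lower-triangular example: L = [[2, 0], [1, 4]].
ptr, idx, val = [0, 1, 3], [0, 0, 1], [2.0, 1.0, 4.0]
sweep = csr_matvec(ptr, idx, val, [1.0, 2.0])      # [2.0, 9.0]
solve = csr_lower_solve(ptr, idx, val, [2.0, 6.0]) # [1.0, 1.25]
dot = inner([1.0, 2.0], [3.0, 4.0])                # 11.0
```

The triangular solve is the kernel with the least parallelism of the three, since row i depends on earlier rows; that dependence is why its timing is interesting on a distributed memory machine.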
We propose a data migration mechanism that allows an explicit and controlled mapping of data to memory. While read or write copies of each data element can be assigned to any processor's memory, longer term storage of each data element is assigned to a specific location in the memory of a particular processor. We present data suggesting that the scheme …
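The split between short-lived copies and a fixed long-term home can be modeled as follows. This is a minimal sketch under assumed names; the real mechanism operates on distributed memory hardware, not a single dictionary.

```python
class MigratingArray:
    def __init__(self, home_of):
        self.home_of = home_of        # element -> its fixed home processor
        self.home = {}                # authoritative long-term storage
        self.copies = {}              # (elem, proc) -> short-lived copy

    def write(self, elem, proc, value):
        # A write copy may be created in any processor's memory...
        self.copies[(elem, proc)] = value

    def flush(self, elem, proc):
        # ...but the element's long-term value lives at its fixed home,
        # so the copy is migrated back there when it is retired.
        self.home[elem] = self.copies.pop((elem, proc))

    def read(self, elem, proc):
        # Prefer a local copy; otherwise fetch from the home location.
        return self.copies.get((elem, proc), self.home.get(elem))

arr = MigratingArray({0: 1})   # element 0's home is processor 1
arr.write(0, 2, 5.0)           # processor 2 holds a write copy
arr.flush(0, 2)                # copy retired to the home location
value = arr.read(0, 0)         # processor 0 reads from the home -> 5.0
```

Fixing each element's home makes the location of the authoritative value predictable, while the copies give the flexibility of migration.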