We have designed a radix sort algorithm for vector multiprocessors and have implemented the algorithm on the CRAY Y-MP. On one processor of the Y-MP, our sort is over 5 times faster on large sorting problems than the optimized library sort provided by CRAY Research. On eight processors we achieve an additional speedup of almost 5, yielding a routine over …
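As a rough illustration of the counting-based (least-significant-digit) radix sort this work builds on, here is a minimal serial Python sketch; the function name, key width, and byte-sized digits are arbitrary choices for the example, not details of the paper's vectorized CRAY Y-MP implementation.

```python
# Minimal counting-based LSD radix sort on non-negative integer keys,
# processed one byte at a time. Serial sketch for illustration only; the
# paper's contribution is a vectorized, multiprocessor variant of this idea.

def radix_sort(keys, key_bits=32, digit_bits=8):
    mask = (1 << digit_bits) - 1
    for shift in range(0, key_bits, digit_bits):
        # Histogram the current digit.
        counts = [0] * (mask + 1)
        for k in keys:
            counts[(k >> shift) & mask] += 1
        # Exclusive prefix sum gives each bucket's starting offset.
        offsets, total = [0] * (mask + 1), 0
        for d in range(mask + 1):
            offsets[d] = total
            total += counts[d]
        # Stable scatter of keys into their buckets.
        out = [0] * len(keys)
        for k in keys:
            d = (k >> shift) & mask
            out[offsets[d]] = k
            offsets[d] += 1
        keys = out
    return keys

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
```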
Solution of partial differential equations by either the finite element or the finite difference methods often requires the solution of large, sparse linear systems. When the coefficient matrices associated with these linear systems are symmetric and positive definite, the systems are often solved iteratively using the preconditioned conjugate gradient …
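For readers unfamiliar with the iteration referred to here, the following is a minimal preconditioned conjugate gradient sketch in Python/NumPy. The Jacobi (diagonal) preconditioner and the small test system are assumptions made for the example; the paper concerns parallel/vector implementations of this kind of solver, not this serial toy.

```python
# Minimal preconditioned conjugate gradient (PCG) for a symmetric positive
# definite system A x = b, with a Jacobi (diagonal) preconditioner M = diag(A).
import numpy as np

def pcg(A, b, tol=1e-10, max_iter=1000):
    n = len(b)
    x = np.zeros(n)
    M_inv = 1.0 / np.diag(A)          # Jacobi preconditioner
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        beta = rz_new / rz
        p = z + beta * p
        rz = rz_new
    return x

# Small SPD example; exact solution is (1/11, 7/11).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(pcg(A, b))
```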
For years, the computation rate of processors has been much faster than the access rate of memory banks, and this divergence in speeds has been constantly increasing in recent years. As a result, several shared-memory multiprocessors consist of more memory banks than processors. The object of this paper is to provide a simple model (with only a few …
We have implemented three parallel sorting algorithms on the Connection Machine Supercomputer model CM-2: Batcher's bitonic sort, a parallel radix sort, and a sample sort similar to Reif and Valiant's flashsort. We have also evaluated the implementation of many other sorting algorithms proposed in the literature. Our computational experiments show that …
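A toy sketch of the sample-sort idea mentioned above: splitters are chosen from a random sample, keys are partitioned into buckets by splitter, and each bucket is sorted independently. The bucket count and oversampling ratio below are arbitrary example values, and the serial bucket loop only stands in for the per-processor work of the CM-2 implementation.

```python
# Toy sample sort: sample the keys, pick evenly spaced splitters from the
# sorted sample, bucket the keys, then sort each bucket (serially here;
# on a parallel machine each bucket would go to its own processor).
import random
from bisect import bisect_right

def sample_sort(keys, num_buckets=4, oversample=8):
    if len(keys) <= num_buckets:
        return sorted(keys)
    sample = sorted(random.sample(keys, min(len(keys), num_buckets * oversample)))
    step = len(sample) // num_buckets
    splitters = [sample[i * step] for i in range(1, num_buckets)]
    buckets = [[] for _ in range(num_buckets)]
    for k in keys:
        buckets[bisect_right(splitters, k)].append(k)
    out = []
    for b in buckets:
        out.extend(sorted(b))
    return out

print(sample_sort([5, 3, 8, 1, 9, 2, 7, 4, 6, 0] * 3))
```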
This paper gives an overview of the implementation of NESL, a portable nested data-parallel language. This language and its implementation are the first to fully support nested data structures as well as nested data-parallel function calls. These features allow the concise description of parallel algorithms on irregular data, such as sparse matrices and …
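To suggest what nested data parallelism over irregular data means in practice, here is a sparse matrix-vector product written with nested comprehensions in Python. In NESL both the outer loop over rows and the inner loop over each row's nonzeros would be parallel apply-to-each constructs; this snippet only mimics the nesting structure, and the data layout is an assumption for the example.

```python
# Sparse matrix stored as a nested sequence: one list of (column, value)
# pairs per row, with rows of varying length (irregular data).
sparse_A = [
    [(0, 2.0), (2, 1.0)],              # row 0
    [(1, 3.0)],                        # row 1
    [(0, 4.0), (1, 1.0), (2, 5.0)],    # row 2
]
x = [1.0, 2.0, 3.0]

# Outer comprehension over rows, inner comprehension over a row's nonzeros;
# in a nested data-parallel language both levels run in parallel.
y = [sum(v * x[c] for c, v in row) for row in sparse_A]
print(y)   # [5.0, 6.0, 21.0]
```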
Current connectionist simulations require huge computational resources. We describe a neural network simulator for the IBM GF11, an experimental SIMD machine with 566 processors and a peak arithmetic performance of 11 Gigaflops. We present our parallel implementation of the backpropagation learning algorithm, techniques for increasing efficiency, …
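For context, a minimal backpropagation sketch in Python/NumPy trained on XOR. The network size, learning rate, and iteration count are arbitrary example choices; the paper's contribution is the efficient SIMD mapping of this kind of computation onto the GF11, not this serial toy.

```python
# One-hidden-layer network trained on XOR with plain backpropagation
# (squared error, sigmoid units, batch gradient descent).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    # Backward pass: error deltas for output and hidden layers.
    dy = (y - T) * y * (1 - y)
    dh = (dy @ W2.T) * h * (1 - h)
    # Gradient descent updates.
    W2 -= lr * h.T @ dy; b2 -= lr * dy.sum(axis=0)
    W1 -= lr * X.T @ dh; b1 -= lr * dh.sum(axis=0)

print(np.round(y.ravel(), 2))   # typically approaches [0, 1, 1, 0]
```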
We present a variation of the partition method for solving mth-order linear recurrences that is well-suited to vector multiprocessors. The algorithm fully utilizes both vector and multiprocessor capabilities, and reduces the number of memory accesses as compared to the more commonly used version of the partition method. Our variation uses a general loop …
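The partition idea can be sketched for the simplest case, a first-order recurrence x[i] = a[i]·x[i-1] + b[i]. Each block expresses its results as a linear function of the unknown value entering the block (a pass that can run in parallel over blocks), a short serial sweep supplies those boundary values, and a second parallel pass combines them. The block count and coefficients below are arbitrary; the paper's algorithm generalizes this to mth-order recurrences with attention to vectorization and memory traffic.

```python
# Partition (blocked) method for x[i] = a[i] * x[i-1] + b[i], with x[-1] = 0.
def partitioned_recurrence(a, b, num_blocks=4):
    n = len(a)
    bounds = [n * j // num_blocks for j in range(num_blocks + 1)]
    # Phase 1 (parallel over blocks): within block j express
    #   x[i] = P[i] * x_in + Q[i], where x_in is the value entering the block.
    P, Q = [0.0] * n, [0.0] * n
    for j in range(num_blocks):
        p, q = 1.0, 0.0
        for i in range(bounds[j], bounds[j + 1]):
            p, q = a[i] * p, a[i] * q + b[i]
            P[i], Q[i] = p, q
    # Phase 2 (serial, one step per block): propagate values across boundaries.
    x_in = [0.0] * num_blocks
    for j in range(1, num_blocks):
        last = bounds[j] - 1
        x_in[j] = P[last] * x_in[j - 1] + Q[last]
    # Phase 3 (parallel over blocks): combine local coefficients with boundary values.
    return [P[i] * x_in[j] + Q[i]
            for j in range(num_blocks)
            for i in range(bounds[j], bounds[j + 1])]

# Check against the straightforward serial evaluation.
a = [0.5, 1.0, 2.0, 1.5, 0.5, 1.0, 2.0, 1.5]
b = [1.0] * 8
x, serial, prev = partitioned_recurrence(a, b), [], 0.0
for ai, bi in zip(a, b):
    prev = ai * prev + bi
    serial.append(prev)
print(all(abs(u - v) < 1e-12 for u, v in zip(x, serial)))   # True
```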
  • Girija Narlikar, Guy Blelloch, Thomas Gross, Bruce Maggs, Sharon Burks, Catherine Copetas, and 20 others (1999)
The views and conclusions contained herein are those of the author and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of ARPA or the U.S. Government. Abstract: The goal of high-level parallel programming models or languages is to facilitate the writing of well-structured, simple and …