Andrew DuBois

Double precision floating point Sparse Matrix-Vector Multiplication (SMVM) is a critical computational kernel used in iterative solvers for systems of sparse linear equations. The poor data locality exhibited by sparse matrices and the high memory bandwidth requirements of SMVM result in poor performance on general purpose processors. Field …
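For readers unfamiliar with the kernel, the sketch below shows a minimal sparse matrix-vector multiply over the common CSR (compressed sparse row) storage format in Python. It is a generic illustration, not the implementation from the paper; the array names values, col_indices, and row_ptr are illustrative. The indirect, index-driven accesses to the dense vector x are exactly what produce the poor data locality described above.

    import numpy as np

    def csr_spmv(values, col_indices, row_ptr, x):
        """y = A @ x for a matrix A stored in CSR format (double precision)."""
        n_rows = len(row_ptr) - 1
        y = np.zeros(n_rows, dtype=np.float64)
        for i in range(n_rows):
            # Nonzeros of row i live in values[row_ptr[i]:row_ptr[i+1]].
            acc = 0.0
            for k in range(row_ptr[i], row_ptr[i + 1]):
                # x is accessed indirectly through col_indices: poor locality.
                acc += values[k] * x[col_indices[k]]
            y[i] = acc
        return y

    # Example: the 2x2 matrix [[4, 1], [0, 3]] stored in CSR form.
    vals = np.array([4.0, 1.0, 3.0])
    cols = np.array([0, 1, 1])
    rptr = np.array([0, 2, 3])
    print(csr_spmv(vals, cols, rptr, np.array([1.0, 2.0])))  # -> [6. 6.]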
The conjugate gradient method is a prominent iterative technique for solving systems of sparse linear equations. Large-scale scientific applications often utilize a conjugate gradient solver at their computational core. Since a single iteration of a conjugate gradient solver requires a sparse matrix-vector multiply operation, it is imperative that this operation be …
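To show where that matrix-vector product sits inside the solver, here is a textbook, non-preconditioned conjugate gradient loop in Python. This is a hedged, generic sketch rather than the implementation from these papers; A is dense here for clarity, whereas in practice the product A @ p would be the sparse matrix-vector multiply discussed above, performed once per iteration.

    import numpy as np

    def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
        """Solve A x = b for symmetric positive definite A."""
        x = np.zeros_like(b)
        r = b - A @ x          # initial residual
        p = r.copy()           # initial search direction
        rs_old = r @ r
        for _ in range(max_iter):
            Ap = A @ p         # the single matrix-vector product per iteration
            alpha = rs_old / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs_old) * p
            rs_old = rs_new
        return x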
This paper describes the first use of a network processing unit (NPU) to perform hardware-based image composition in a distributed rendering system. The image composition step is a notorious bottleneck in a clustered rendering system. Furthermore, image compositing algorithms do not necessarily scale as data size and the number of nodes increase. Previous …
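For context on what the compositing step does, the sketch below shows one common form of it, per-pixel depth (Z) compositing of two partial renderings, in Python with NumPy. This is only an assumed, generic illustration of the operation a sort-last renderer must perform, not the NPU-based algorithm described in the paper.

    import numpy as np

    def z_composite(color_a, depth_a, color_b, depth_b):
        """Depth-composite two partial renderings.
        color_*: (H, W, 3) arrays, depth_*: (H, W) arrays; the nearer fragment wins."""
        nearer_a = depth_a <= depth_b
        color = np.where(nearer_a[..., None], color_a, color_b)
        depth = np.where(nearer_a, depth_a, depth_b)
        return color, depth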
This work presents a detailed implementation of a double precision, non-preconditioned, Conjugate Gradient algorithm on a Roadrunner heterogeneous supercomputer node. These nodes utilize the Cell Broadband Engine Architecture™ in conjunction with x86 Opteron™ processors from AMD. We implement a common Conjugate Gradient algorithm on a …
Microprocessor-based systems are the most common design for high-performance computing (HPC) platforms. In these systems, several thousand microprocessors can participate in a single calculation that could take weeks or months to complete. When used in this manner, a fault in any of the microprocessors could cause the computation to crash or cause …
The solution to a nonsingular linear system Ax=b lies in a Krylov space whose dimension is the degree of the minimal polynomial of A (where A is a matrix and x and b are vectors). If this minimal polynomial of A has a low degree, a Krylov method has the potential for rapid convergence [1]. When solving a system of linear equations Ax=b, if the coefficient matrix …
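The claim about the minimal polynomial can be spelled out in standard Krylov subspace notation; the short derivation below is a recap of the textbook argument (not taken verbatim from the abstract) showing why its degree bounds the number of steps.

    \[
    \mathcal{K}_m(A,b) \;=\; \operatorname{span}\{\, b,\ Ab,\ A^2 b,\ \dots,\ A^{m-1} b \,\}.
    \]
    If the minimal polynomial of the nonsingular matrix $A$ is
    \[
    q(t) \;=\; c_0 + c_1 t + \dots + c_d t^d, \qquad c_0 \neq 0,
    \]
    then $q(A) = 0$ gives
    \[
    A^{-1} \;=\; -\frac{1}{c_0}\left(c_1 I + c_2 A + \dots + c_d A^{d-1}\right),
    \]
    so the solution $x = A^{-1} b$ lies in $\mathcal{K}_d(A,b)$, and in exact arithmetic a Krylov method reaches it in at most $d = \deg q$ steps.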