Performance Evaluation of Parallelizing Techniques for Matrix Computations on Shared Memory Parallel Computers

Abstract

There are two methods of parallelizing programs for numerical linear algebra on shared memory parallel computers. One method parallelizes the processing in the main routine directly by using OpenMP. The other relies on highly parallelized BLAS routines for the basic linear algebra operations. In this paper, we evaluate the…
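The following minimal sketch illustrates the contrast between the two styles described above. The matrix size, routine names, and the use of CBLAS `dgemm` are illustrative assumptions, not the paper's actual benchmark codes; the point is only that one version parallelizes the main routine's loops with OpenMP while the other keeps the caller serial and delegates to a threaded BLAS.

```c
/* Sketch of the two parallelization styles (assumed example, not the
 * paper's code). Build e.g. with GCC + OpenBLAS:
 *   gcc -fopenmp sketch.c -lopenblas
 */
#include <stdlib.h>
#include <omp.h>
#include <cblas.h>

#define N 512  /* illustrative problem size */

/* Style 1: parallelize the loop nest in the main routine with OpenMP. */
static void matmul_openmp(const double *A, const double *B, double *C)
{
    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++) {
            double sum = 0.0;
            for (int k = 0; k < N; k++)
                sum += A[i * N + k] * B[k * N + j];
            C[i * N + j] = sum;
        }
    }
}

/* Style 2: keep the caller serial and delegate the heavy operation to a
 * highly parallelized BLAS implementation (threaded dgemm). */
static void matmul_blas(const double *A, const double *B, double *C)
{
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                N, N, N, 1.0, A, N, B, N, 0.0, C, N);
}

int main(void)
{
    double *A = calloc(N * N, sizeof(double));
    double *B = calloc(N * N, sizeof(double));
    double *C = calloc(N * N, sizeof(double));

    matmul_openmp(A, B, C);  /* main-routine OpenMP parallelization */
    matmul_blas(A, B, C);    /* parallelized-BLAS approach */

    free(A); free(B); free(C);
    return 0;
}
```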
