Programming Assignment 3: Sparse Matrix-Vector Multiplication (Compressed Sparse Row Format)

  • Published 2011

Abstract

We saw in the last programming assignment that the key to a fast implementation of conjugate gradients is a fast implementation of matrix-vector multiplication. However, thus far we've only talked about dense matrix-vector multiplication, and, in practice, conjugate gradients is almost always used when the matrices are sparse. That is, the number of nonzero elements in the matrix is much less than the total number of elements in the matrix. For example, if the matrix has order 10,000 but, on average, only 10 nonzeros in each row, then we'll use 100,000,000 doubles or 800 megabytes to store the matrix in the usual dense format. However, of those 100,000,000 doubles only 100,000 are nonzero. That is, we'll really only be using 1/1000 of the storage allocated, i.e., 800 kilobytes. There are many solutions to the problem of wasted storage for sparse matrices. In many cases sparse matrices have some special structure that makes it quite simple to store them efficiently. For example, tridiagonal matrices and block tridiagonal matrices arise naturally in the solution of certain types of PDEs, and these types of matrices can be stored with little information beyond a listing of the nonzero elements. On the other hand, in many other applications (e.g., circuit simulation) the sparse matrices have no special structure. These are the types of matrices we'll be interested in for programming assignment 3.
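The assignment's title names the compressed sparse row (CSR) format, in which only the nonzero values are stored, along with their column indices and a pointer to the start of each row. Below is a minimal sketch in C of what such a representation and the corresponding matrix-vector product might look like; the struct and function names (csr_matrix, csr_matvec) are illustrative assumptions, not taken from the assignment handout.

    /* A sketch of compressed sparse row (CSR) storage and
     * sparse matrix-vector multiplication.  Names here are
     * illustrative, not the assignment's required interface. */
    #include <stdio.h>

    typedef struct {
        int     n;        /* order of the (square) matrix              */
        int     nnz;      /* number of stored nonzeros                 */
        double *val;      /* nonzero values, row by row      (nnz)     */
        int    *col_ind;  /* column index of each value      (nnz)     */
        int    *row_ptr;  /* start of each row in val/col_ind (n+1)    */
    } csr_matrix;

    /* y = A*x for a CSR matrix A: only stored nonzeros are touched,
     * so the work is O(nnz) rather than O(n^2). */
    void csr_matvec(const csr_matrix *A, const double *x, double *y) {
        for (int i = 0; i < A->n; i++) {
            double sum = 0.0;
            for (int k = A->row_ptr[i]; k < A->row_ptr[i + 1]; k++)
                sum += A->val[k] * x[A->col_ind[k]];
            y[i] = sum;
        }
    }

    int main(void) {
        /* The 3x3 matrix [2 0 1; 0 3 0; 4 0 5] stored in CSR form. */
        double val[]     = {2, 1, 3, 4, 5};
        int    col_ind[] = {0, 2, 1, 0, 2};
        int    row_ptr[] = {0, 2, 3, 5};
        csr_matrix A = {3, 5, val, col_ind, row_ptr};

        double x[] = {1, 1, 1}, y[3];
        csr_matvec(&A, x, y);
        printf("y = %g %g %g\n", y[0], y[1], y[2]);  /* prints y = 3 3 9 */
        return 0;
    }

For the example in the abstract (order 10,000 with about 10 nonzeros per row), this layout stores roughly 100,000 doubles plus 110,001 ints, on the order of a megabyte, instead of the 800 megabytes needed by the dense format.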
