Matrix algorithms

@inproceedings{Stewart1998MatrixA,
  title={Matrix algorithms},
  author={G. W. Stewart},
  year={1998}
}
This book is the second volume in a projected five-volume survey of numerical linear algebra and matrix algorithms. This volume treats the numerical solution of dense and large-scale eigenvalue problems with an emphasis on algorithms and the theoretical background required to understand them. Stressing depth over breadth, Professor Stewart treats the derivation and implementation of the more important algorithms in detail. The notes and references sections contain pointers to other methods… 
An overview on the eigenvalue computation for matrices
TLDR
This contribution surveys the state of the art in algorithms for solving large-scale eigenvalue problems, treating symmetric and nonsymmetric matrices separately and comparing the algorithms in practical use for the two cases.
Algebraic Theory of Two-Grid Methods
TLDR
Results for symmetric and nonsymmetric matrices are presented in a unified way, highlighting the influence of the smoothing scheme on the convergence estimates.
The Lanczos and conjugate gradient algorithms in finite precision arithmetic
TLDR
A tribute is paid to those who have made an understanding of the Lanczos and conjugate gradient algorithms possible through their pioneering work, and to review recent solutions of several open problems that have also contributed to knowledge of the subject.
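For readers unfamiliar with the recurrence this paper analyzes, here is a minimal sketch of symmetric Lanczos tridiagonalization in NumPy. The matrix A, start vector q1, and step count k are illustrative; in exact arithmetic the columns of Q stay orthonormal, while in floating point they lose orthogonality, which is the phenomenon studied above.

```python
import numpy as np

def lanczos(A, q1, k):
    """k steps of the symmetric Lanczos recurrence (no reorthogonalization)."""
    n = A.shape[0]
    Q = np.zeros((n, k + 1))
    alpha, beta = np.zeros(k), np.zeros(k + 1)
    Q[:, 0] = q1 / np.linalg.norm(q1)
    for j in range(k):
        w = A @ Q[:, j]
        if j > 0:
            w -= beta[j] * Q[:, j - 1]      # three-term recurrence
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        beta[j + 1] = np.linalg.norm(w)
        if beta[j + 1] < 1e-14:             # invariant subspace found
            break
        Q[:, j + 1] = w / beta[j + 1]
    T = np.diag(alpha) + np.diag(beta[1:k], 1) + np.diag(beta[1:k], -1)
    return T, Q[:, :k]
```

The eigenvalues of the small tridiagonal T (the Ritz values) approximate the extreme eigenvalues of A; the finite-precision behavior of exactly this recurrence is what the paper reviews.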
An Algebraic Substructuring Method for Large-Scale Eigenvalue Calculation
TLDR
It is shown that algebraic substructuring can be effectively used to solve a generalized eigenvalue problem arising from the simulation of an accelerator structure.
On the Subspace Projected Approximate Matrix method
TLDR
A comparative study of the Subspace Projected Approximate Matrix method, abbreviated SPAM, which is a fairly recent iterative method of computing a few eigenvalues of a Hermitian matrix A, shows that for certain special choices for A0, SPAM turns out to be mathematically equivalent to known eigenvalue methods.
New progress in real and complex polynomial root-finding
The Shift-Invert Residual Arnoldi Method and the Jacobi-Davidson Method: Theory and Algorithms
TLDR
It is proved that the inexact SIRA method mimics the exact SIRA method well, provided that the inner linear systems are solved iteratively with low or modest accuracy.
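The inner linear solves at issue can be illustrated with plain shift-invert iteration. The NumPy sketch below is not SIRA or Jacobi-Davidson itself, only a picture of the shifted system (A - sigma*I) y = x that such methods solve, exactly or inexactly, at every step; the shift, start vector, and tolerances are illustrative.

```python
import numpy as np

def shift_invert_iteration(A, sigma, x, iters=50, tol=1e-10):
    """Converge to the eigenpair of A with eigenvalue nearest sigma."""
    I = np.eye(A.shape[0])
    x = x / np.linalg.norm(x)
    for _ in range(iters):
        # The inner solve; SIRA's point is that an iterative solver
        # with only low or modest accuracy suffices here.
        y = np.linalg.solve(A - sigma * I, x)
        x = y / np.linalg.norm(y)
        lam = x @ (A @ x)                   # Rayleigh quotient estimate
        if np.linalg.norm(A @ x - lam * x) < tol:
            break
    return lam, x
```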
Numerical Computation of the Complex Eigenvalues of a Matrix by solving a Square System of Equations
TLDR
For complex eigenpairs, it is shown that, instead of using Ruhe's normalization, the natural two-norm normalization for the matrix pencil yields an underdetermined system of equations that can be solved by LU factorization at lower cost, and quadratic convergence is guaranteed.
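The "square system" viewpoint can be made concrete with a generic Newton iteration on the bordered system F(x, lam) = [Ax - lam*x; (x'x - 1)/2] = 0. This sketch uses the two-norm normalization on a square (n+1)-by-(n+1) Jacobian and does not reproduce the paper's underdetermined-system variant; it shows only why quadratic convergence is the natural expectation at a simple eigenvalue.

```python
import numpy as np

def newton_eigenpair(A, x, lam, iters=20, tol=1e-12):
    """Newton's method on [A x - lam x; (x.T x - 1)/2] = 0."""
    n = A.shape[0]
    x = x / np.linalg.norm(x)
    for _ in range(iters):
        r = A @ x - lam * x
        if np.linalg.norm(r) < tol:
            break
        # Bordered Jacobian of the square system.
        J = np.block([[A - lam * np.eye(n), -x[:, None]],
                      [x[None, :], np.zeros((1, 1))]])
        g = np.concatenate([-r, [-(x @ x - 1) / 2]])
        d = np.linalg.solve(J, g)
        x, lam = x + d[:n], lam + d[n]
    return lam, x
```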
On the Sensitivity of Some Spectral Preconditioners
TLDR
First-order perturbation theory for eigenvalues and eigenvectors is used to investigate the behavior of the spectrum of the preconditioned systems, and the effect of inexactness in the eigenelements on the behavior of the resulting preconditioner, when applied to accelerating the conjugate gradient method, is illustrated.

References

Direct Methods for Sparse Matrices
TLDR
This book aims to be suitable also for a student course, probably at MSc level; the subject is intensely practical, and the book is written with practicalities ever in mind.
Jacobi's Method is More Accurate than QR
TLDR
It is shown that Jacobi’s method computes small eigenvalues of symmetric positive definite matrices with a uniformly better relative accuracy bound than QR, divide and conquer, traditional bisection, or any algorithm which first involves tridiagonalizing the matrix.
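For reference, a minimal cyclic Jacobi sweep in NumPy; it assumes a real symmetric input and is written for clarity rather than for the high-relative-accuracy setting the paper analyzes.

```python
import numpy as np

def jacobi_eigenvalues(A, sweeps=10, tol=1e-12):
    """Cyclic Jacobi: rotate away each off-diagonal pair in turn."""
    A = A.astype(float).copy()
    n = A.shape[0]
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < tol:
                    continue
                # Rotation angle that annihilates A[p, q].
                theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
                c, s = np.cos(theta), np.sin(theta)
                J = np.eye(n)
                J[p, p] = J[q, q] = c
                J[p, q], J[q, p] = s, -s
                A = J.T @ A @ J             # similarity transform
    return np.sort(np.diag(A))
```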
Iterative refinement of linear least squares solutions II
An iterative procedure is developed for reducing the rounding errors in the computed least squares solution to an overdetermined system of equations Ax = b, where A is an m × n matrix (m ≥ n) of rank n. …
Iterative refinement of linear least squares solutions I
An iterative procedure is developed for reducing the rounding errors in the computed least squares solution to an overdetermined system of equations Ax = b, where A is an m × n matrix (m ≥ n) of rank n. …
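Both parts I and II concern the kind of procedure sketched below: refine the residual r and the solution x together through the augmented system [[I, A], [A', 0]] [r; x] = [b; 0]. This NumPy version is simplified; in practice the correction residuals are accumulated in higher precision than the working precision, which is where the error reduction comes from.

```python
import numpy as np

def refine_lsq(A, b, iters=3):
    """Iterative refinement of min ||Ax - b|| via the augmented system."""
    m, n = A.shape
    K = np.block([[np.eye(m), A], [A.T, np.zeros((n, n))]])
    z = np.linalg.solve(K, np.concatenate([b, np.zeros(n)]))
    r, x = z[:m], z[m:]
    for _ in range(iters):
        # Residual of the augmented system (ideally in higher precision).
        f = np.concatenate([b - r - A @ x, -A.T @ r])
        dz = np.linalg.solve(K, f)
        r, x = r + dz[:m], x + dz[m:]
    return x, r
```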
Gaussian elimination is not optimal
Below we will give an algorithm which computes the coefficients of the product of two square matrices A and B of order n from the coefficients of A and B with less than 4.7·n^(log₂ 7) arithmetical operations…
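The seven-multiplication recursion behind that bound is short enough to sketch. This version assumes n is a power of two; practical implementations switch to ordinary multiplication below a cutoff, as done here.

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Multiply square matrices with 7 recursive products per level."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C
```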
Loss and Recapture of Orthogonality in the Modified Gram-Schmidt Algorithm
TLDR
The special structure of the product of the Householder transformations is derived, and then used to explain and bound the loss of orthogonality in MGS, which is illustrated by deriving a numerically stable algorithm based on MGS for a class of problems which includes solution of nonsingular linear systems.
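A minimal modified Gram-Schmidt QR, for orientation; the loss of orthogonality the paper bounds shows up as ||I - Q'Q|| growing roughly in proportion to the condition number of A (assumed here to have full column rank).

```python
import numpy as np

def mgs_qr(A):
    """QR by modified Gram-Schmidt; A must have full column rank."""
    A = A.astype(float).copy()
    m, n = A.shape
    Q, R = np.zeros((m, n)), np.zeros((n, n))
    for k in range(n):
        R[k, k] = np.linalg.norm(A[:, k])
        Q[:, k] = A[:, k] / R[k, k]
        # "Modified" step: orthogonalize the remaining columns
        # against q_k immediately, one projection at a time.
        for j in range(k + 1, n):
            R[k, j] = Q[:, k] @ A[:, j]
            A[:, j] -= R[k, j] * Q[:, k]
    return Q, R
```

Measuring np.linalg.norm(np.eye(n) - Q.T @ Q) for an ill-conditioned A makes the effect visible.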
Numerical Linear Algebra and Applications
  • B. Datta
  • 1995
TLDR
A review of some Required Concepts from Core Linear Algebra and some useful Transformations in Numerical Linear Algebra and Their Applications.
Lanczos algorithms for large symmetric eigenvalue computations
TLDR
This chapter discusses Lanczos Procedures with no Reorthogonalization for Real Symmetric Problems, and an Identification Test, 'Good' versus 'Spurious' Eigenvalues.
Rectangular reciprocal matrices, with special reference to geodetic calculations
It is generally known that many problems dealt with by means of the method of least squares often lead to extremely intricate functional relations. In such cases it may therefore be very difficult…
XII.—Studies in Practical Mathematics. I. The Evaluation, with Applications, of a Certain Triple Product Matrix
The solution of simultaneous linear algebraic equations, the evaluation of the adjugate or the reciprocal of a given square matrix, and the evaluation of the bilinear or quadratic form reciprocal to…