Updating the Inverse of a Matrix

William W. Hager · SIAM Rev. · Published 1 June 1989 · Mathematics
The Sherman–Morrison–Woodbury formulas express the inverse of a matrix after a small-rank perturbation in terms of the inverse of the original matrix. The history of these formulas is presented, and various applications to statistics, networks, structural analysis, asymptotic analysis, optimization, and partial differential equations are discussed.
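A minimal NumPy sketch of the identity the abstract describes: for a rank-$k$ update $A + UV^T$, the Woodbury formula gives $(A + UV^T)^{-1} = A^{-1} - A^{-1}U(I + V^T A^{-1} U)^{-1} V^T A^{-1}$, so only a $k \times k$ system must be solved. The matrices and random data below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 2
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))  # well-conditioned base matrix
U = rng.standard_normal((n, k))
V = rng.standard_normal((n, k))

Ainv = np.linalg.inv(A)
Z = Ainv @ U                               # n x k
C = np.eye(k) + V.T @ Z                    # k x k "capacitance" matrix
updated_inv = Ainv - Z @ np.linalg.solve(C, V.T @ Ainv)

# Agrees with inverting the perturbed matrix directly:
direct = np.linalg.inv(A + U @ V.T)
assert np.allclose(updated_inv, direct)
```

The payoff is that when $A^{-1}$ (or a factorization of $A$) is already available, the update costs $O(n^2 k + k^3)$ rather than the $O(n^3)$ of a fresh inversion.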


Updating the Inverse of a Matrix When Removing the $i$th Row and Column with an Application to Disease Modeling
A way is found to compute the fundamental reproductive ratio of a relapsing disease spread by a vector among two host species that undergo different numbers of relapses.
The Sherman–Morrison–Woodbury formula for generalized linear matrix equations and applications
A new method is given for numerically solving dense matrix equations using the Sherman–Morrison–Woodbury formula, formally defined in the vector form of the problem but applied in the matrix setting; it solves medium-size dense problems with computational costs and memory requirements dramatically lower than a Kronecker formulation.
Representations for the Drazin inverse of a modified matrix
In this paper, expressions for the Drazin inverse of a modified matrix $A - CD^{d}B$ are presented in terms of the Drazin inverses of $A$ and the generalized Schur complement $D - BA^{d}C$ under fewer and
Relationship between the Inverses of a Matrix and a Submatrix
A simple and straightforward formula for computing the inverse of a submatrix in terms of the inverse of the original matrix is derived. General formulas for the inverse of submatrices of order $n -$
Preconditioning Sparse Nonsymmetric Linear Systems with the Sherman-Morrison Formula
It is shown how the matrix $A_0^{-1} - A^{-1}$, where $A_0$ is a nonsingular matrix whose inverse is known or easy to compute, can be factorized in the form $U\Omega V^T$ using the Sherman–Morrison formula.


Methods for computing and modifying the $LDV$ factors of a matrix
Methods are given for computing the $LDV$ factorization of a matrix $B$ and for modifying the factorization when columns of $B$ are added or deleted. It is shown how these techniques lead to two numerically stable methods for updating the Cholesky factorization of a matrix following the addition or subtraction, respectively, of a matrix of rank one.
On the inverse of the autocovariance matrix for a general moving average process
In this paper we show how the inverse of the general $k$th autocovariance matrix, for any $r$th order moving average process, can be obtained by a method which requires inverting no matrix
Modifying pivot elements in Gaussian elimination
The rounding-error analysis of Gaussian elimination shows that the method is stable only when the elements of the matrix do not grow excessively in the course of the reduction. Usually such growth is
Numerical techniques in mathematical programming
Some Aspects of the Cyclic Reduction Algorithm for Block Tridiagonal Linear Systems
The solution of a general block tridiagonal linear system by a cyclic odd-even reduction algorithm is considered. Under conditions of diagonal dominance, norms describing the off-diagonal blocks
Sparsity-Oriented Compensation Methods for Modified Network Solutions
The paper gives a unified derivation and analysis of compensation methods for the efficient solution of network problems involving matrix modifications, including the removal, addition, and splitting of nodes.
The Direct Solution of the Discrete Poisson Equation on a Rectangle
where $G$ is a rectangle, $\Delta u = \partial^2 u/\partial x^2 + \partial^2 u/\partial y^2$, and $v$, $w$ are known functions. For computational purposes, this partial differential equation is frequently replaced by a finite-difference analogue.
Partitioning and Tearing Systems of Equations
This paper contributes to the development of procedures which can be performed on a computer for analyzing the structures of the systems of equations themselves as an aid in choosing how to break them up for easier solution.
Approximations to the Multiplier Method
We analyze approximations to the multiplier method for solving an equality constrained optimization problem. The multiplier method replaces the constrained problem by the unconstrained optimization