Updating the Inverse of a Matrix

  • William W. Hager
  • Published 1 June 1989
  • Mathematics, Computer Science
  • SIAM Rev.
The Sherman–Morrison–Woodbury formulas relate the inverse of a matrix after a small-rank perturbation to the inverse of the original matrix. The history of these formulas is presented, and various applications to statistics, networks, structural analysis, asymptotic analysis, optimization, and partial differential equations are discussed.
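The rank-one special case, the Sherman–Morrison formula $(A + uv^T)^{-1} = A^{-1} - A^{-1}uv^TA^{-1}/(1 + v^TA^{-1}u)$, can be sketched in a few lines of plain Python. The helper names below are illustrative, and the 2×2 matrices are chosen only so the update can be checked against a direct inversion:

```python
# Minimal sketch of the Sherman-Morrison rank-one update, using plain
# Python lists (no external libraries); helper names are illustrative.

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def inv2(A):
    # Inverse of a 2x2 matrix by the adjugate formula.
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def sherman_morrison(Ainv, u, v):
    # (A + u v^T)^{-1} = A^{-1} - (A^{-1} u)(v^T A^{-1}) / (1 + v^T A^{-1} u)
    Au = matvec(Ainv, u)                          # A^{-1} u
    vA = matvec(list(map(list, zip(*Ainv))), v)   # v^T A^{-1}, via the transpose
    denom = 1.0 + sum(a * b for a, b in zip(v, Au))
    n = len(Ainv)
    return [[Ainv[i][j] - Au[i] * vA[j] / denom for j in range(n)]
            for i in range(n)]

A = [[4.0, 1.0], [2.0, 3.0]]
u, v = [1.0, 0.0], [0.0, 1.0]    # the perturbation adds u v^T to A

updated = sherman_morrison(inv2(A), u, v)
direct = inv2([[4.0, 2.0], [2.0, 3.0]])  # inverse of A + u v^T, computed directly
```

Here `updated` and `direct` agree, which is the point of the formula: the update costs only matrix–vector products once $A^{-1}$ is known, instead of a fresh inversion.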
Updating the Inverse of a Matrix When Removing the $i$th Row and Column with an Application to Disease Modeling
A way is found to compute the fundamental reproductive ratio of a relapsing disease being spread by a vector among two species of host that undergo a different number of relapses.
Using the Sherman–Morrison–Woodbury inversion formula for a fast solution of tridiagonal block Toeplitz systems
Abstract A fast numerical algorithm for solving systems of linear equations with tridiagonal block Toeplitz matrices is presented. The algorithm is based on a preliminary factorization of the
The Sherman-Morrison-Woodbury formula for generalized linear matrix equations and applications
A new method is presented for numerically solving a dense generalized linear matrix equation using the Sherman-Morrison-Woodbury formula, formally defined in the vector form of the problem but applied in the matrix setting; it solves medium-size dense problems with computational costs and memory requirements dramatically lower than with a Kronecker formulation.
On deriving the Drazin inverse of a modified matrix
Abstract This paper addresses the problem of deriving formulas for the Drazin inverse of a modified matrix. First we focus on obtaining formulas of Sherman–Morrison–Woodbury type for singular
On Moore–Penrose inverses of quasi-Kronecker structured matrices
Abstract The Moore–Penrose inverse and generalized inverse of $A + X_1 X_2^*$, where $A$, $X_1$, $X_2$ are complex matrices, are given under various assumptions. We use the result to derive the
Inversion and pseudoinversion of block arrowhead matrices
A generalization of the well known notion of the Schur complement is defined and exploited and a new representation for numeric and symbolic computing of the Moore–Penrose inverse of special type of block arrowhead matrices is presented.
Representations for the Drazin inverse of a modified matrix
In this paper expressions for the Drazin inverse of a modified matrix $A-CD^{d}B$ are presented in terms of the Drazin inverses of $A$ and the generalized Schur complement $D-BA^{d}C$ under fewer and
Relationship between the Inverses of a Matrix and a Submatrix
A simple and straightforward formula for computing the inverse of a submatrix in terms of the inverse of the original matrix is derived. General formulas for the inverse of submatrices of order $n -$
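The case of deleting the last row and column admits a short self-contained sketch. Partitioning $M^{-1}$ conformally as $\begin{pmatrix} E & f \\ g^T & h \end{pmatrix}$, the leading submatrix $A$ of $M$ satisfies $A^{-1} = E - fg^T/h$ whenever $h \neq 0$; this follows from the block-inverse formulas. The code below (illustrative helper names, plain Python) checks this against a direct inversion:

```python
# Sketch: recover the inverse of the leading (n-1)x(n-1) submatrix of M
# from M^{-1} alone.  Write M^{-1} = [[E, f], [g^T, h]]; if h != 0, the
# leading submatrix A of M satisfies A^{-1} = E - f g^T / h.

def gauss_jordan_inverse(M):
    # Generic small-matrix inverse via Gauss-Jordan with partial pivoting.
    n = len(M)
    aug = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        for r in range(n):
            if r != col:
                factor = aug[r][col]
                aug[r] = [x - factor * y for x, y in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

M = [[4.0, 1.0, 2.0], [1.0, 3.0, 0.0], [2.0, 0.0, 5.0]]
N = gauss_jordan_inverse(M)

# Partition N: E is the leading 2x2 block, f the last column,
# g the last row, h the trailing scalar.
E = [row[:2] for row in N[:2]]
f = [N[0][2], N[1][2]]
g = N[2][:2]
h = N[2][2]

sub_inv = [[E[i][j] - f[i] * g[j] / h for j in range(2)] for i in range(2)]
direct = gauss_jordan_inverse([row[:2] for row in M[:2]])
```

Here `sub_inv` agrees with `direct`, so the submatrix inverse is obtained from $M^{-1}$ by a rank-one correction rather than a new factorization.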
Representations of generalized inverses of partitioned matrix involving Schur complement
This article considers some representations of {1,3}-, {1,4}-, {2,3}- and {1,2,4}-inverses of the partitioned matrix M which are equivalent to some rank additivity conditions of M, and applies these results to generalizations of the Sherman-Morrison-Woodbury-type formulae.
Preconditioning Sparse Nonsymmetric Linear Systems with the Sherman-Morrison Formula
It is shown how the matrix $A_0^{-1} - A^{-1}$, where $A_0$ is a nonsingular matrix whose inverse is known or easy to compute, can be factorized in the form $U\Omega V^T$ using the Sherman--Morrison formula.


Methods for computing and modifying the $LDV$ factors of a matrix
Methods are given for computing the LDV factorization of a matrix B and modifying the factorization when columns of B are added or deleted. The methods may be viewed as a means for updating the
On the inverse of the autocovariance matrix for a general moving average process
SUMMARY In this paper we show how the inverse for the general kth autocovariance matrix, for any rth order moving average process, can be obtained by a method which requires inverting no matrix
Modifying pivot elements in Gaussian elimination
The rounding-error analysis of Gaussian elimination shows that the method is stable only when the elements of the matrix do not grow excessively in the course of the reduction. Usually such growth is
Numerical techniques in mathematical programming
The application of numerically stable matrix decompositions to minimization problems involving linear constraints is discussed and shown to be feasible without undue loss of efficiency. Part A
Some Aspects of the Cyclic Reduction Algorithm for Block Tridiagonal Linear Systems
The solution of a general block tridiagonal linear system by a cyclic odd-even reduction algorithm is considered. Under conditions of diagonal dominance, norms describing the off-diagonal blocks
Sparsity-Oriented Compensation Methods for Modified Network Solutions
The paper gives a unified derivation and analysis of compensation methods for the efficient solution of network problems involving matrix modifications. These methods have a very wide range of
The Direct Solution of the Discrete Poisson Equation on a Rectangle
where $G$ is a rectangle, $\Delta u = \partial^2 u/\partial x^2 + \partial^2 u/\partial y^2$, and $v$, $w$ are known functions. For computational purposes, this partial differential equation is frequently replaced by a finite difference analogue.
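The usual finite-difference analogue is the five-point stencil, $(\Delta u)_{ij} \approx (u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1} - 4u_{ij})/h^2$. A minimal sketch (illustrative, not taken from the paper), using the fact that the stencil is exact on quadratic polynomials:

```python
# Five-point finite-difference analogue of the Laplacian on a uniform grid
# with spacing h; boundary entries are left at zero for simplicity.

def five_point_laplacian(u, h):
    n, m = len(u), len(u[0])
    out = [[0.0] * m for _ in range(n)]
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            out[i][j] = (u[i + 1][j] + u[i - 1][j] + u[i][j + 1]
                         + u[i][j - 1] - 4.0 * u[i][j]) / (h * h)
    return out

# For u(x, y) = x^2 + y^2 the exact Laplacian is 4 everywhere, and the
# five-point formula reproduces it exactly, since u is quadratic.
h = 0.5
u = [[(i * h) ** 2 + (j * h) ** 2 for j in range(5)] for i in range(5)]
lap = five_point_laplacian(u, h)
```

Collecting the stencil equations over all interior grid points yields the block-tridiagonal linear system that the direct methods discussed here are designed to solve.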
Partitioning and Tearing Systems of Equations
Introduction. Kron [1] has developed a technique for tearing large, sparse linear systems of algebraic equations into smaller systems, then putting the solutions of these smaller systems together to
Partitioning, tearing and modification of sparse linear systems
Abstract The computational complexity of partitioning sparse matrices is developed graph-theoretically. The results are used to study tearing and modification, and to show that single-element tearing
Approximations to the Multiplier Method
We analyze approximations to the multiplier method for solving an equality constrained optimization problem. The multiplier method replaces the constrained problem by the unconstrained optimization