Updating the Inverse of a Matrix

@article{Hager1989UpdatingTI,
  title={Updating the Inverse of a Matrix},
  author={William W. Hager},
  journal={SIAM Rev.},
  year={1989},
  volume={31},
  pages={221-239}
}
  • W. Hager
  • Published 1 June 1989
  • Mathematics
  • SIAM Rev.
The Sherman–Morrison–Woodbury formulas express the inverse of a matrix after a small-rank perturbation in terms of the inverse of the original matrix. The history of these formulas is presented, and various applications to statistics, networks, structural analysis, asymptotic analysis, optimization, and partial differential equations are discussed.
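For orientation, these are the standard statements of the rank-one (Sherman–Morrison) and rank-$k$ (Woodbury) identities; the symbols $A$, $u$, $v$, $U$, $C$, $V$ are generic and not taken from the paper:

$$(A + uv^{T})^{-1} = A^{-1} - \frac{A^{-1} u v^{T} A^{-1}}{1 + v^{T} A^{-1} u},$$
$$(A + UCV)^{-1} = A^{-1} - A^{-1} U \left(C^{-1} + V A^{-1} U\right)^{-1} V A^{-1},$$

valid whenever the indicated inverses exist. Given $A^{-1}$, updating it after a rank-$k$ perturbation of an $n \times n$ matrix in this way costs $O(n^{2}k)$ work instead of the $O(n^{3})$ cost of inverting from scratch.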

The Sherman–Morrison–Woodbury formula for generalized linear matrix equations and applications

TLDR
A new method for numerically solving a dense matrix equation using the Sherman-Morrison-Woodbury formula, formally defined on the vectorized form of the problem but applied in the matrix setting; it solves medium-size dense problems with computational costs and memory requirements dramatically lower than a Kronecker formulation.
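The "Kronecker formulation" mentioned above is the standard vectorization of a matrix equation; for example, a generalized Sylvester-type equation $AXB + CXD = F$ becomes the ordinary linear system

$$\left(B^{T} \otimes A + D^{T} \otimes C\right)\,\mathrm{vec}(X) = \mathrm{vec}(F),$$

using the identity $\mathrm{vec}(AXB) = (B^{T} \otimes A)\,\mathrm{vec}(X)$. For $n \times n$ matrices this system has $n^{2}$ unknowns, which is what drives up cost and memory; the specific equation treated in the cited paper may differ from this illustrative form.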

Updating the Inverse of a Matrix When Removing the $i$th Row and Column with an Application to Disease Modeling

TLDR
A way is found to compute the fundamental reproductive ratio of a relapsing disease spread by a vector among two host species that undergo different numbers of relapses.

Representations for the Drazin inverse of a modified matrix

In this paper expressions for the Drazin inverse of a modified matrix $A-CD^{d}B$ are presented in terms of the Drazin inverses of $A$ and the generalized Schur complement $D-BA^{d}C$ under fewer and

Relationship between the Inverses of a Matrix and a Submatrix

A simple and straightforward formula for computing the inverse of a submatrix in terms of the inverse of the original matrix is derived. General formulas for the inverse of submatrices of order 𝑛 −
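A standard identity of this kind (the notation is mine, not necessarily the paper's): if $A$ is nonsingular, $B = A^{-1}$, and $A_{\setminus i}$ denotes $A$ with its $i$th row and column deleted, then, provided $b_{ii} \neq 0$,

$$\left(A_{\setminus i}\right)^{-1} = B_{\setminus i} - \frac{1}{b_{ii}}\, b_{\cdot i}\, b_{i \cdot},$$

where $B_{\setminus i}$ is $B$ with its $i$th row and column removed, and $b_{\cdot i}$, $b_{i \cdot}$ are the $i$th column and row of $B$ with their $i$th entries deleted. The same mechanism underlies the row/column-removal update cited above.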

A Singular Woodbury and Pseudo-Determinant Matrix Identities and Application to Gaussian Process Regression

TLDR
An efficient algorithm and numerical analysis are provided for the presented determinant identities, together with their advantages in certain conditions, which are applicable to computing log-determinant terms in likelihood functions of Gaussian process regression.
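The classical (nonsingular) counterpart of such determinant identities is the matrix determinant lemma, stated here for orientation; the singular and pseudo-determinant extensions are the subject of the cited paper:

$$\det\!\left(A + UV^{T}\right) = \det\!\left(I + V^{T} A^{-1} U\right)\,\det(A),$$

so a rank-$k$ update changes $\log\det A$ by the log-determinant of a small $k \times k$ matrix, which is the quantity needed in Gaussian-process likelihoods.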
...

References

SHOWING 1-10 OF 35 REFERENCES

Methods for computing and modifying the $LDV$ factors of a matrix

TLDR
Methods are given for computing the $LDV$ factorization of a matrix $B$ and modifying the factorization when columns of $B$ are added or deleted, and it is shown how these techniques lead to two numerically stable methods for updating the Cholesky factorization of a matrix following the addition or subtraction, respectively, of a matrix of rank one.
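As an illustration of the kind of update these methods provide, here is a minimal sketch of the textbook rank-one Cholesky update (the $LDV$ procedures of the reference are more general); the function and variable names are mine:

    import numpy as np

    def chol_update(L, x):
        """Textbook rank-one Cholesky update (not the LDV procedure of the
        reference): given lower-triangular L with A = L @ L.T, return the
        Cholesky factor of A + x @ x.T."""
        L, x = L.copy(), x.astype(float).copy()
        n = x.size
        for k in range(n):
            r = np.hypot(L[k, k], x[k])              # new diagonal entry
            c, s = r / L[k, k], x[k] / L[k, k]
            L[k, k] = r
            L[k+1:, k] = (L[k+1:, k] + s * x[k+1:]) / c
            x[k+1:] = c * x[k+1:] - s * L[k+1:, k]   # uses updated column
        return L

    # quick check against a full refactorization
    rng = np.random.default_rng(0)
    M = rng.standard_normal((5, 5)); A = M @ M.T + 5 * np.eye(5)
    x = rng.standard_normal(5)
    L = np.linalg.cholesky(A)
    assert np.allclose(chol_update(L, x), np.linalg.cholesky(A + np.outer(x, x)))

The update costs $O(n^{2})$ per rank-one term, versus $O(n^{3})$ for refactoring.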

On the inverse of the autocovariance matrix for a general moving average process

In this paper we show how the inverse of the general $k$th autocovariance matrix, for any $r$th order moving average process, can be obtained by a method which requires inverting no matrix

Modifying pivot elements in Gaussian elimination

The rounding-error analysis of Gaussian elimination shows that the method is stable only when the elements of the matrix do not grow excessively in the course of the reduction. Usually such growth is

Numerical techniques in mathematical programming

Some Aspects of the Cyclic Reduction Algorithm for Block Tridiagonal Linear Systems

The solution of a general block tridiagonal linear system by a cyclic odd-even reduction algorithm is considered. Under conditions of diagonal dominance, norms describing the off-diagonal blocks

Sparsity-Oriented Compensation Methods for Modified Network Solutions

TLDR
The paper gives a unified derivation and analysis of compensation methods for the efficient solution of network problems involving matrix modifications, including the removal, addition and splitting of nodes.

The Direct Solution of the Discrete Poisson Equation on a Rectangle

where $G$ is a rectangle, $\Delta u = \partial^{2}u/\partial x^{2} + \partial^{2}u/\partial y^{2}$, and $v$, $w$ are known functions. For computational purposes, this partial differential equation is frequently replaced by a finite difference analogue.
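For context, the usual finite-difference analogue on a uniform grid with spacing $h$ is the five-point scheme (standard material, not quoted from the reference):

$$\frac{u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1} - 4u_{i,j}}{h^{2}} = v_{i,j},$$

which leads to a block tridiagonal linear system of the kind that direct methods such as the one in this reference solve.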

Partitioning and Tearing Systems of Equations

TLDR
This paper contributes to the development of procedures which can be performed on a computer for analyzing the structures of the systems of equations themselves as an aid in choosing how to break them up for easier solution.

Approximations to the Multiplier Method

We analyze approximations to the multiplier method for solving an equality constrained optimization problem. The multiplier method replaces the constrained problem by the unconstrained optimization
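The unconstrained problem referred to is, in the standard formulation of the multiplier (augmented Lagrangian) method for $\min f(x)$ subject to $h(x)=0$ (notation mine, not taken from the reference):

$$\min_{x}\; L_{c}(x,\lambda) = f(x) + \lambda^{T} h(x) + \tfrac{c}{2}\,\|h(x)\|^{2}, \qquad \lambda \leftarrow \lambda + c\, h(x),$$

where the multiplier estimate $\lambda$ is updated after each unconstrained minimization.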