Inner-Iteration Krylov Subspace Methods for Least Squares Problems

@article{Morikuni2013InnerIterationKS,
  title={Inner-Iteration Krylov Subspace Methods for Least Squares Problems},
  author={Keiichi Morikuni and Ken Hayami},
  journal={SIAM J. Matrix Anal. Appl.},
  year={2013},
  volume={34},
  pages={1-22}
}
Stationary inner iterations in combination with Krylov subspace methods are proposed for overdetermined least squares problems. The inner iterations are efficient in terms of computational work and memory and also serve as powerful preconditioners for ill-conditioned and rank-deficient problems. Theoretical justifications for using the inner iterations as preconditioners are presented. Numerical experiments on overdetermined sparse least squares problems show that the proposed methods… 
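
To make the idea concrete, here is a small sketch (my own illustration, not the paper's exact algorithm; the name nr_sor_sweeps and its defaults are assumptions) of SOR sweeps applied to the normal equations A^T A x = A^T b without forming A^T A. The map from a right-hand side to the result of a few such sweeps is the kind of inexpensive stationary inner iteration that can serve as a preconditioning step for an outer Krylov method.

```python
import numpy as np

def nr_sor_sweeps(A, b, omega=1.0, n_inner=3):
    """A few SOR sweeps on the normal equations A^T A x = A^T b,
    sweeping over the columns of A and never forming A^T A explicitly."""
    m, n = A.shape
    x = np.zeros(n)
    r = b.astype(float).copy()          # residual b - A x for x = 0
    col_norms = np.sum(A * A, axis=0)   # diagonal of A^T A
    for _ in range(n_inner):
        for j in range(n):
            if col_norms[j] == 0.0:     # skip zero columns (rank-deficient case)
                continue
            d = omega * (A[:, j] @ r) / col_norms[j]
            x[j] += d
            r -= d * A[:, j]
    return x

# Used as a preconditioner: the outer Krylov method calls
# z = nr_sor_sweeps(A, v) in place of an exact solve with A^T A.
```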

Convergence of Inner-Iteration GMRES Methods for Least Squares Problems

A general convergence theory for the generalized minimal residual method for least squares problems preconditioned with inner iterations is developed and improved, particularly in the rank-deficient case.

Convergence of Inner-Iteration GMRES Methods for Rank-Deficient Least Squares Problems

A general convergence theory for the generalized minimal residual method preconditioned by inner iterations for solving least squares problems is developed and numerical experiments show that the proposed methods are more robust and efficient compared to previous methods for some rank-deficient problems.

The State-of-the-Art of Preconditioners for Sparse Linear Least-Squares Problems

This study briefly reviews preconditioners for which software has been made available, then presents a numerical evaluation of them using performance profiles and a large set of problems arising from practical applications.

Symmetric inner-iteration preconditioning for rank-deficient least squares problems

The CG and MR-type methods such as the CGLS, LSMR, and CGNE methods preconditioned by inner iterations are justified for solving least squares and minimum-norm solution problems whose coefficient matrices are not necessarily of full rank.

Inner-iteration preconditioning with a symmetric splitting matrix for rank-deficient least squares problems.

Results are applied to the CG and MINRES-type methods such as the CGLS, LSMR, and CGNE methods preconditioned by inner iterations, and justify using these methods for solving least squares and minimum-norm solution problems whose coefficient matrices are not necessarily of full rank.

Implementation of interior-point methods for LP based on Krylov subspace iterative solvers with inner-iteration preconditioning

The proposed interior-point method based on iterative solvers succeeds in solving a fairly large number of LP instances from benchmark libraries under the standard stopping criteria, and a fairly extensive benchmark test is presented for several renowned solvers, including direct and iterative solvers.

Multistep matrix splitting iteration preconditioning for singular linear systems

Numerical experiments show that the multistep generalized shifted splitting and Hermitian and skew-Hermitian splitting iteration preconditioners are more robust and efficient than standard preconditioners for some test problems of large sparse singular linear systems.

Kaczmarz-type inner-iteration preconditioned flexible GMRES methods for consistent linear systems

Numerical experiments on overdetermined and underdetermined linear systems show that the proposed method is superior to the GMRES method preconditioned by NE-SOR inner iterations in terms of total CPU time.
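
A Kaczmarz sweep is equally simple to sketch; since a few sweeps applied to the current vector define a mapping that changes from one outer step to the next, a flexible outer method such as FGMRES is the natural host. This is an illustrative sketch under those assumptions, not the cited paper's implementation.

```python
import numpy as np

def kaczmarz_sweeps(A, b, omega=1.0, n_inner=2):
    """A few cyclic Kaczmarz sweeps for a consistent system A x = b:
    each step projects the iterate onto the hyperplane of one row of A."""
    m, n = A.shape
    x = np.zeros(n)
    row_norms = np.sum(A * A, axis=1)
    for _ in range(n_inner):
        for i in range(m):
            if row_norms[i] == 0.0:
                continue
            x += omega * (b[i] - A[i, :] @ x) / row_norms[i] * A[i, :]
    return x
```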

Modulus-Type Inner Outer Iteration Methods for Nonnegative Constrained Least Squares Problems

For the solution of large sparse nonnegative constrained linear least squares (NNLS) problems, a new iterative method is proposed which uses the CGLS method for the inner iterations and the modulus iteration method for the outer iterations.
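
The modulus idea can be sketched from the KKT conditions of the NNLS problem; the reformulation below is a standard one and may differ from the cited paper in scaling and parameter choices.

```latex
% NNLS and its KKT conditions (a linear complementarity problem):
\min_{x \ge 0} \|Ax - b\|_2, \qquad
x \ge 0, \quad g := A^{\mathsf T}(Ax - b) \ge 0, \quad x^{\mathsf T} g = 0 .

% Modulus substitution: x = |z| + z, \; g = |z| - z, so the sign and
% complementarity conditions hold automatically (x_i g_i = |z_i|^2 - z_i^2 = 0),
% and the KKT system becomes the fixed-point equation
(I + A^{\mathsf T} A)\, z = (I - A^{\mathsf T} A)\,|z| + A^{\mathsf T} b ,

% solved by an outer iteration that freezes |z^{k}| on the right-hand side;
% each inner linear system is then amenable to CGLS.
```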

General-Purpose Preconditioners for the Conjugate Gradient (CG) and Generalized Minimal Residual (GMRES) Type Methods

General-purpose preconditioners for CG- and GMRES-type methods are proposed for solving the linear least squares problem $\min_{x \in \mathbb{R}^n} \|b - Ax\|_2$ and the general least squares problem $\min_{x \in S} \|x\|_2$, $S = \{x \in \mathbb{R}^n : \|b - Ax\|_2 = \min\}$.

References


Generalized approximate inverse preconditioners for least squares problems

A generalized approximate inverse (GAINV) $M$ is constructed which approximately minimizes $\|I - MA\|_F$ or $\|I - AM\|_F$, and it is shown that although the preconditioning is expensive, it pays off in certain cases.
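
The column-wise Frobenius-norm minimization can be sketched as below (a SPAI-style illustration; the exact GAINV construction of the cited paper differs, and the sparsity-pattern argument is an assumed input):

```python
import numpy as np

def frobenius_approx_inverse(A, pattern):
    """Approximate M ≈ A^+ by minimizing ||I - A M||_F column by column:
    the nonzeros of column j of M, restricted to the index set pattern[j],
    solve the small dense least squares problem min || e_j - A[:, J] m ||_2."""
    m, n = A.shape
    M = np.zeros((n, m))
    for j in range(m):
        J = np.asarray(pattern[j])      # allowed nonzero positions in column j
        e_j = np.zeros(m)
        e_j[j] = 1.0
        m_j, *_ = np.linalg.lstsq(A[:, J], e_j, rcond=None)
        M[J, j] = m_j
    return M

# A common heuristic is to take pattern[j] from the nonzero positions of row j of A.
```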

A Class of Incomplete Orthogonal Factorization Methods. II: Implementation and Results

This work presents, implements, and tests several incomplete QR factorization methods based on Givens rotations for sparse square and rectangular matrices, and discusses the uses, advantages, and shortcomings of the preconditioners.
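
For reference, a Givens rotation and its application to two rows can be sketched as follows (illustrative helper functions, not the cited implementation); an incomplete QR keeps only those rotations and fill-in entries allowed by a drop rule.

```python
import numpy as np

def givens(a, b):
    """Return (c, s) such that [[c, s], [-s, c]] @ [a, b] = [r, 0]."""
    if b == 0.0:
        return 1.0, 0.0
    r = np.hypot(a, b)
    return a / r, b / r

def eliminate_entry(A, i, j):
    """Zero A[i, j] by rotating rows j and i (in place)."""
    c, s = givens(A[j, j], A[i, j])
    G = np.array([[c, s], [-s, c]])
    A[[j, i], :] = G @ A[[j, i], :]
    return A
```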

Flexible Inner-Outer Krylov Subspace Methods

This paper shows how the overall space where the solution is approximated is no longer a Krylov subspace but a subspace of a larger Krylov space, thus providing a convergence theory for inner-outer methods.
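
A minimal flexible GMRES sketch shows why the preconditioned directions must be stored separately when the preconditioner changes at every step (dense storage, no restarting; my own illustration rather than the paper's formulation):

```python
import numpy as np

def fgmres(A, b, precond, max_it=50, tol=1e-10):
    """Flexible GMRES: precond(v) may be a different operator at every step
    (e.g. a few inner iterations), so the vectors z_k = precond(v_k) are kept
    and the solution is formed from them rather than from the Arnoldi basis."""
    n = b.shape[0]
    x0 = np.zeros(n)
    beta = np.linalg.norm(b)
    V = [b / beta]
    Z = []
    H = np.zeros((max_it + 1, max_it))
    for k in range(max_it):
        z = precond(V[k])                  # flexible preconditioning step
        w = A @ z
        Z.append(z)
        for i in range(k + 1):             # modified Gram-Schmidt
            H[i, k] = V[i] @ w
            w = w - H[i, k] * V[i]
        H[k + 1, k] = np.linalg.norm(w)
        e1 = np.zeros(k + 2)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)
        if (np.linalg.norm(e1 - H[:k + 2, :k + 1] @ y) < tol * beta
                or H[k + 1, k] == 0.0):    # converged or lucky breakdown
            break
        V.append(w / H[k + 1, k])
    return x0 + np.column_stack(Z) @ y
```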

Iterative methods for sparse linear systems

This chapter discusses methods based on the normal equations; some of the techniques it uses were derived in earlier chapters of the book.
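
As a concrete example of such a method, here is a minimal CGLS sketch (CG applied implicitly to the normal equations, using only products with A and A^T); an illustration, not code from the book.

```python
import numpy as np

def cgls(A, b, max_it=100, tol=1e-10):
    """Conjugate gradients on A^T A x = A^T b without forming A^T A."""
    x = np.zeros(A.shape[1])
    r = b.astype(float).copy()         # r = b - A x with x = 0
    s = A.T @ r
    p = s.copy()
    gamma = s @ s
    for _ in range(max_it):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if np.sqrt(gamma_new) < tol:   # stop on the normal-equations residual
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x
```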

A robust incomplete factorization preconditioner for positive definite matrices

  • M. Benzi, M. Tuma · Numer. Linear Algebra Appl. · 2003
A novel technique is presented for computing a sparse incomplete factorization of a general symmetric positive definite matrix A based on A-orthogonalization, resulting in a reliable solver for highly ill-conditioned linear systems.
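
The A-orthogonalization idea behind such factorizations can be sketched as conjugate Gram-Schmidt with dropping (dense, unoptimized, and only an illustration of the principle, not the cited algorithm): the columns of Z satisfy Z^T A Z ≈ D, so Z D^{-1} Z^T approximates A^{-1}.

```python
import numpy as np

def a_orthogonalize(A, drop_tol=1e-2):
    """Conjugate Gram-Schmidt on the unit basis vectors: returns Z (unit
    upper triangular, sparsified by dropping) and d with Z^T A Z ≈ diag(d)."""
    n = A.shape[0]
    Z = np.eye(n)
    d = np.zeros(n)
    for j in range(n):
        for i in range(j):
            coeff = (Z[:, i] @ (A @ Z[:, j])) / d[i]
            Z[:, j] -= coeff * Z[:, i]
        Z[np.abs(Z[:, j]) < drop_tol, j] = 0.0   # dropping keeps Z sparse
        Z[j, j] = 1.0                            # diagonal stays unit
        d[j] = Z[:, j] @ (A @ Z[:, j])
    return Z, d
```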

Greville’s method for preconditioning least squares problems

Theoretical analysis is provided to show that, under the assumption, the least squares problem preconditioned by this preconditioner is equivalent to the original problem, and the GMRES method can determine a solution to the preconditioned problem before breakdown happens.
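
Greville's recursion itself builds the pseudoinverse one column at a time; a direct (exact, dense) sketch is below, while the cited preconditioner is derived from an approximate variant of this construction.

```python
import numpy as np

def greville_pinv(A, tol=1e-12):
    """Moore-Penrose pseudoinverse via Greville's column recursion."""
    m, n = A.shape
    a1 = A[:, 0]
    nrm = a1 @ a1
    Apinv = (a1 / nrm).reshape(1, m) if nrm > tol else np.zeros((1, m))
    for k in range(1, n):
        a = A[:, k]
        d = Apinv @ a                  # coordinates of a in the current range
        c = a - A[:, :k] @ d           # component of a outside the current range
        if np.linalg.norm(c) > tol:
            bT = c / (c @ c)
        else:
            bT = (d @ Apinv) / (1.0 + d @ d)
        Apinv = np.vstack([Apinv - np.outer(d, bT), bT])
    return Apinv
```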

Inexact Preconditioned Conjugate Gradient Method with Inner-Outer Iteration

This paper formulates an inexact preconditioned conjugate gradient algorithm for a symmetric positive definite system and analyzes its convergence, establishing a linear convergence result using a local relation of residual norms and showing that the algorithm may converge superlinearly when the inner iteration is solved to high accuracy.
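
The structure of such an inner-outer scheme is easy to sketch: PCG in which the preconditioning step z ≈ M^{-1} r is produced by a user-supplied inner iterative solve, possibly to low accuracy. The inner_solve argument here is a hypothetical callable, and the sketch is mine rather than the paper's algorithm.

```python
import numpy as np

def inexact_pcg(A, b, inner_solve, max_it=200, tol=1e-8):
    """PCG for SPD A where the preconditioner application is an inner
    iterative solve; its accuracy governs the attainable convergence rate."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x
    z = inner_solve(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_it):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = inner_solve(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Example inner solve: a fixed number of Jacobi sweeps or a coarse CG run,
# e.g. inner_solve = lambda r: cg_to_tolerance(M, r, rtol=1e-2)  # hypothetical helper
```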

Preconditioning techniques for nonsymmetric and indefinite linear systems