A Flexible Inner-Outer Preconditioned GMRES Algorithm

@article{Saad1993AFI,
  title={A Flexible Inner-Outer Preconditioned GMRES Algorithm},
  author={Youcef Saad},
  journal={SIAM J. Sci. Comput.},
  year={1993},
  volume={14},
  pages={461--469}
}
• Published 1 March 1993
• Computer Science
• SIAM J. Sci. Comput.
A variant of the GMRES algorithm is presented that allows changes in the preconditioning at every step. There are many possible applications of the new algorithm, some of which are briefly discussed. In particular, a result of the flexibility of the new variant is that any iterative method can be used as a preconditioner. For example, the standard GMRES algorithm itself can be used as a preconditioner, as can CGNR (or CGNE), the conjugate gradient method applied to the normal equations. However…
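The inner-outer idea admits a short sketch: unlike standard right-preconditioned GMRES, the flexible variant stores the preconditioned vectors z_j and builds the approximation from them, so the preconditioner may change at every step. Below is a minimal FGMRES in Python/NumPy, assuming a hypothetical user-supplied `M_apply(j, v)` that applies (possibly a different) preconditioner at outer step `j`; it is an illustrative sketch of the flexible variant, not the paper's exact pseudocode.

```python
import numpy as np

def fgmres(A, b, M_apply, maxiter=50, tol=1e-10):
    """Flexible GMRES: M_apply(j, v) ~ M_j^{-1} v may differ at each step j.

    The preconditioned vectors z_j are stored in Z, and the iterate is
    x = x0 + Z y, so varying the preconditioner is harmless.
    """
    n = b.size
    x0 = np.zeros(n)
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    V = np.zeros((n, maxiter + 1))   # Arnoldi basis
    Z = np.zeros((n, maxiter))       # preconditioned directions (the "flexible" part)
    H = np.zeros((maxiter + 1, maxiter))
    V[:, 0] = r0 / beta
    for j in range(maxiter):
        Z[:, j] = M_apply(j, V[:, j])        # preconditioner may change with j
        w = A @ Z[:, j]
        for i in range(j + 1):               # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        e1 = np.zeros(j + 2)
        e1[0] = beta
        # least-squares problem min_y || beta e1 - H_bar y ||
        y, *_ = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)
        res = np.linalg.norm(e1 - H[:j + 2, :j + 1] @ y)
        if H[j + 1, j] < 1e-14 or res < tol * beta:
            return x0 + Z[:, :j + 1] @ y
        V[:, j + 1] = w / H[j + 1, j]
    return x0 + Z @ y
```

Because the approximation lives in span{z_0, ..., z_k}, `M_apply` can itself be a few steps of GMRES, CGNR, or any other iterative method, which is exactly the inner-outer use case the abstract describes.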
1,432 Citations
• Computer Science
SIAM J. Sci. Comput.
• 2001
A flexible version of the QMR algorithm is presented which allows for the use of a different preconditioner at each step of the algorithm, and it is shown that in certain cases FQMR can produce approximations with lower residual norms than QMR.
• Computer Science
Journal of Applied Mathematics and Computing
• 2013
A flexible version of the CMRH algorithm is presented that allows varying the preconditioning at every step of the algorithm, so that any iterative method can be used as a preconditioner in the inner steps.
• Computer Science
• 2012
A flexible version of the GPBi-CG algorithm is presented which allows for the use of a different preconditioner at each step of the algorithm, illustrating the convergence and robustness of the flexible iterative method.
• Computer Science
• 2011
An extension of GMRES, multi-preconditioned GMRES, is described, which allows the use of more than one preconditioner; some theoretical results are given, a practical algorithm is proposed, and numerical results from problems in domain decomposition and PDE-constrained optimization are presented.
• Computer Science
SIAM J. Matrix Anal. Appl.
• 2006
A generalization of the conjugate gradient method that uses multiple preconditioners, combining them automatically in an optimal way, is proposed; it is useful for domain decomposition techniques and other problems in which the need for more than one preconditioner arises naturally.
• Computer Science
Numer. Linear Algebra Appl.
• 1996
A Krylov subspace technique based on incomplete orthogonalization of the Krylov vectors, which can be considered a truncated version of GMRES, is presented; it accommodates variable preconditioning and results in a robust approach for solving indefinite non-Hermitian linear systems.
• Computer Science
2018 Progress in Electromagnetics Research Symposium (PIERS-Toyama)
• 2018
A new preconditioner based on a Successive Minimal Residual (SMR) method, which noticeably improves the convergence rate of the Conjugate Orthogonal Conjugate Gradient (COCG) method, suitable for symmetric coefficient matrices.
• Computer Science
• 2007
Right preconditioning is considered, i.e., the equivalent linear system AM^{-1}w = f is solved and the solution recovered from Mu = w, with the additive Schwarz preconditioner.
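Right preconditioning leaves the true residual unchanged: with M the preconditioner, one solves the transformed system A M^{-1} w = f and then recovers u from M u = w, so A u = f holds exactly once w is found. A small NumPy illustration, using a Jacobi (diagonal) M as a hypothetical stand-in for the additive Schwarz preconditioner and a direct solve where a Krylov method would be used in practice:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
A = np.eye(n) * 4 + 0.1 * rng.standard_normal((n, n))  # well-conditioned test matrix
f = rng.standard_normal(n)

# Hypothetical stand-in preconditioner: Jacobi (diagonal of A), not additive Schwarz.
Minv = np.diag(1.0 / np.diag(A))        # action of M^{-1}

w = np.linalg.solve(A @ Minv, f)        # solve A M^{-1} w = f (a Krylov method in practice)
u = Minv @ w                            # recover u from M u = w
residual = np.linalg.norm(A @ u - f)    # the original system is solved
```

The point of the right-preconditioned form is that the residual of the transformed system equals the residual of the original one, which is why GMRES-type methods report meaningful residual norms under it.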
• Computer Science
• 1994
In this paper, some of the recent advancements in the search for effective iterative solvers are highlighted.

References


• Computer Science
Numer. Linear Algebra Appl.
• 1994
Recently Eirola and Nevanlinna have proposed an iterative solution method for unsymmetric linear systems, in which the preconditioner is updated from step to step. Following their ideas we suggest…
• Computer Science
SIAM J. Sci. Comput.
• 1990
To improve the global convergence properties of these basic algorithms, hybrid methods based on Powell's dogleg strategy are proposed, as well as linesearch backtracking procedures.
• Computer Science
• 1977
A particular class of regular splittings of not necessarily symmetric M-matrices is proposed. If the matrix is symmetric, this splitting is combined with the conjugate-gradient method to provide a…
• Computer Science, Mathematics
• 1986
We present an iterative method for solving linear systems, which has the property of minimizing at every step the norm of the residual vector over a Krylov subspace. The algorithm is derived from t...
A new technique based upon least squares polynomials on the set S is proposed, i.e., the polynomial $t_k$ which minimizes $\|1 - \lambda t_k(\lambda)\|_w$, where $\|\cdot\|_w$ is an $L_2$ norm with respect to a weight function defined on S.