Zdenek Strakos

Given a nonincreasing positive sequence f(0) ≥ f(1) ≥ ⋯ ≥ f(n−1) > 0, it is shown that there exists an n by n matrix A and a vector r0 with ‖r0‖ = f(0) such that f(k) = ‖rk‖, k = 1, …, n−1, where rk is the residual at step k of the GMRES algorithm applied to the linear system Ax = b, with initial residual r0 = b − Ax0. Moreover, the matrix A …
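The residual norms ‖rk‖ discussed above arise from the standard Arnoldi/least-squares formulation of GMRES. The following is a minimal numpy sketch of that formulation (not the paper's construction of A; the function name and the test problem below are illustrative):

```python
import numpy as np

def gmres_residuals(A, b, x0=None, maxit=None):
    """Plain (full) GMRES, returning the residual norms ||r_k||.

    Textbook sketch: Arnoldi with modified Gram-Schmidt builds an
    orthonormal Krylov basis, and each step solves a small least
    squares problem with the Hessenberg matrix H.
    """
    n = len(b)
    maxit = n if maxit is None else maxit
    x0 = np.zeros(n) if x0 is None else x0
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    Q = np.zeros((n, maxit + 1))
    H = np.zeros((maxit + 1, maxit))
    Q[:, 0] = r0 / beta
    norms = [beta]
    for k in range(maxit):
        v = A @ Q[:, k]
        for j in range(k + 1):              # modified Gram-Schmidt
            H[j, k] = Q[:, j] @ v
            v -= H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(v)
        # residual norm = min_y || beta*e1 - H_{k+1,k} y ||
        e1 = np.zeros(k + 2)
        e1[0] = beta
        y = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)[0]
        norms.append(np.linalg.norm(H[:k + 2, :k + 1] @ y - e1))
        if H[k + 1, k] < 1e-14:             # (lucky) breakdown
            break
        Q[:, k + 1] = v / H[k + 1, k]
    return norms
```

Since the least squares problems are over nested subspaces, the returned norms are nonincreasing, matching the nonincreasing sequence f(k) in the statement above.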
Let A be a real symmetric positive definite matrix. We consider three particular questions, namely estimates for the error in linear systems Ax = b, minimizing the quadratic functional min_x (xᵀAx − 2bᵀx) subject to the constraint ‖x‖ = α, α < ‖A⁻¹b‖, and estimates for the entries of the matrix inverse A⁻¹. All of these questions can be formulated as a problem of …
It has been widely observed that Krylov space solvers based on two three-term recurrences can give significantly less accurate residuals than mathematically equivalent solvers implemented with three two-term recurrences. In this paper we attempt to justify this difference theoretically by analyzing the gap between the recursively and the explicitly computed …
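The gap in question can be observed directly by running a recursive solver and comparing its updated residual with the explicitly computed one. A hedged sketch using plain conjugate gradients as a representative recursive solver (the function name and test problem are illustrative, not from the paper):

```python
import numpy as np

def cg_residual_gap(A, b, maxit):
    """Plain CG, tracking both the recursively updated residual r_k
    and the explicitly computed (true) residual b - A x_k.

    In exact arithmetic the two coincide; in finite precision they
    drift apart, and the gap limits the attainable accuracy.
    """
    x = np.zeros_like(b)
    r = b - A @ x                 # recursive residual
    p = r.copy()
    gaps = []
    for _ in range(maxit):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap    # recursive update
        true_r = b - A @ x        # explicit residual
        gaps.append(np.linalg.norm(r_new - true_r))
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return np.linalg.norm(r), gaps
```

On any nontrivial problem in double precision the gap is nonzero after a few steps, even when the recursive residual norm keeps decreasing.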
Minimum residual norm iterative methods for solving linear systems Ax = b can be viewed as, and are often implemented as, sequences of least squares problems involving Krylov subspaces of increasing dimensions. The minimum residual method (MINRES) [C. Paige and M. Saunders, SIAM J. Numer. Anal., 12 (1975), pp. 617–629] and generalized minimum residual …
The generalized minimum residual method (GMRES) [Y. Saad and M. Schultz, SIAM J. Sci. Statist. Comput., 7 (1986), pp. 856–869] for solving linear systems Ax = b is implemented as a sequence of least squares problems involving Krylov subspaces of increasing dimensions. The most usual implementation is Modified Gram-Schmidt GMRES (MGS-GMRES). Here we show …
The standard approaches to solving overdetermined linear systems Bx ≈ c construct minimal corrections to the vector c and/or the matrix B such that the corrected system is compatible. In ordinary least squares (LS) the correction is restricted to c, while in data least squares (DLS) it is restricted to B. In scaled total least squares (STLS) [22], …
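The contrast between correcting only c and correcting the whole data can be made concrete with ordinary LS and classical total least squares (TLS), which corrects both B and c via the SVD of the augmented matrix [B | c]. A sketch for the generic full-rank case (the function name and test data are illustrative):

```python
import numpy as np

def ls_and_tls(B, c):
    """Ordinary least squares vs. total least squares for B x ~ c.

    LS corrects only c (via lstsq). TLS uses the classical SVD
    construction: the right singular vector of [B | c] belonging to
    the smallest singular value, assuming its last component is
    nonzero (the generic case).
    """
    x_ls = np.linalg.lstsq(B, c, rcond=None)[0]
    _, _, Vt = np.linalg.svd(np.column_stack([B, c]))
    v = Vt[-1]                  # singular vector for sigma_min
    x_tls = -v[:-1] / v[-1]     # assumes v[-1] != 0
    return x_ls, x_tls
```

For a compatible system both solutions coincide with the exact one; they differ once both B and c carry errors, which is the setting the scaled TLS formulation above interpolates.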
We analyze the residuals of GMRES [9], when the method is applied to tridiagonal Toeplitz matrices. We first derive formulas for the residuals as well as their norms when GMRES is applied to scaled Jordan blocks. This problem has been studied previously by Ipsen [5], Eiermann and Ernst [2], but we formulate and prove our results in a different …
The conjugate gradient method (CG) for solving linear systems of algebraic equations represents a highly nonlinear finite process. Since the original paper of Hestenes and Stiefel published in 1952, it has been linked with the Gauss-Christoffel quadrature approximation of Riemann-Stieltjes distribution functions determined by the data, i.e., with a …
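The quadrature connection can be sketched numerically: the tridiagonal matrix T_k produced by k Lanczos steps is the Jacobi matrix of the distribution function determined by (A, b), and ‖b‖² (T_k⁻¹)₁₁ is the k-point Gauss rule for the integral of 1/λ, i.e. an approximation of bᵀA⁻¹b. A minimal sketch under that standard identity (the function name and test problem are illustrative):

```python
import numpy as np

def lanczos_gauss_estimate(A, b, k):
    """Estimate b^T A^{-1} b by Gauss quadrature via k Lanczos steps.

    Builds the Lanczos tridiagonal T_k for SPD A with starting vector
    b/||b||, then returns ||b||^2 * e1^T T_k^{-1} e1, the k-point
    Gauss-Christoffel approximation of the integral of 1/lambda.
    """
    n = len(b)
    beta0 = np.linalg.norm(b)
    V = np.zeros((n, k))
    V[:, 0] = b / beta0
    alphas, betas = [], []
    v_prev = np.zeros(n)
    beta = 0.0
    for j in range(k):
        w = A @ V[:, j] - beta * v_prev
        alpha = V[:, j] @ w
        w -= alpha * V[:, j]
        alphas.append(alpha)
        if j < k - 1:
            beta = np.linalg.norm(w)
            betas.append(beta)
            v_prev = V[:, j]
            V[:, j + 1] = w / beta
    T = np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)
    e1 = np.zeros(k)
    e1[0] = 1.0
    return beta0**2 * np.linalg.solve(T, e1)[0]
```

In exact arithmetic the estimate is exact at k = n, reflecting the finite nature of the process; for k < n it is the Gauss quadrature approximation referred to above.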
We study Krylov subspace methods for solving unsymmetric linear algebraic systems that minimize the norm of the residual at each step (minimal residual (MR) methods). MR methods are often formulated in terms of a sequence of least squares (LS) problems of increasing dimension. We present several basic identities and bounds for the LS residual. These results …
For the finite volume discretization of a second-order elliptic model problem, we derive a posteriori error estimates which take into account an inexact solution of the associated linear algebraic system. We show that the algebraic error can be bounded by constructing an equilibrated Raviart–Thomas–Nédélec discrete vector field whose divergence is given by …