
For any linear system Ax ≈ b we define a set of core problems and show that the orthogonal upper bidiagonalization of [b, A] gives such a core problem. In particular we show that these core problems have desirable properties such as minimal dimensions. When a total least squares problem is solved by first finding a core problem, we show the resulting theory…
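The orthogonal upper bidiagonalization referred to here is the Golub–Kahan process started from b. A minimal numpy sketch of those recurrences (illustrative only, not the paper's full core-problem reduction; the helper name `golub_kahan` is my own):

```python
import numpy as np

def golub_kahan(A, b, k):
    """k steps of Golub-Kahan bidiagonalization of A started from b.

    Produces orthonormal U (m x (k+1)), V (n x k) and the entries
    alpha (diagonal) and beta (subdiagonal) of a lower bidiagonal matrix B
    such that A V = U B.
    """
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k + 1)
    beta[0] = np.linalg.norm(b)
    U[:, 0] = b / beta[0]
    v = np.zeros(n)                       # previous right vector (zero at start)
    for i in range(k):
        w = A.T @ U[:, i] - beta[i] * v   # new right direction
        alpha[i] = np.linalg.norm(w)
        v = w / alpha[i]
        V[:, i] = v
        w = A @ v - alpha[i] * U[:, i]    # new left direction
        beta[i + 1] = np.linalg.norm(w)
        U[:, i + 1] = w / beta[i + 1]
    return U, V, alpha, beta
```

Running it on random data and assembling B from alpha and beta verifies the relation A V = U B to rounding error.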

The standard approaches to solving overdetermined linear systems Bx ≈ c construct minimal corrections to the vector c and/or the matrix B such that the corrected system is compatible. In ordinary least squares (LS) the correction is restricted to c, while in data least squares (DLS) it is restricted to B. In scaled total least squares (STLS) [22],…
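Two points of this correction family can be illustrated in a few lines of numpy: ordinary LS corrects only c, while classical total least squares (the STLS member with unit scaling) finds the smallest correction to [B, c] via the SVD. A sketch on assumed synthetic data, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((20, 3))
x_true = np.array([1.0, -2.0, 0.5])
c = B @ x_true + 0.01 * rng.standard_normal(20)

# Ordinary least squares: only the right-hand side c is corrected.
x_ls, *_ = np.linalg.lstsq(B, c, rcond=None)

# Classical TLS: smallest correction to the whole data matrix [B, c],
# read off from the right singular vector of the smallest singular value.
_, _, Vt = np.linalg.svd(np.column_stack([B, c]))
v = Vt[-1]
x_tls = -v[:-1] / v[-1]
```

With small noise both estimates land near x_true; they differ when errors in B dominate.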

The generalized minimum residual method (GMRES) for solving linear systems Ax = b is implemented as a sequence of least squares problems involving Krylov subspaces of increasing dimensions. The most usual implementation is Modified Gram-Schmidt GMRES (MGS-GMRES). Here we show that MGS-GMRES is backward stable. The result depends on a more general result on…
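The structure described here, an Arnoldi process with modified Gram-Schmidt feeding a small least squares problem at every step, can be sketched in numpy. This is a bare illustration of the MGS-GMRES loop (function name and stopping rule are my own, not the paper's analyzed implementation):

```python
import numpy as np

def mgs_gmres(A, b, m=30, tol=1e-10):
    """Minimal MGS-GMRES sketch with zero initial guess.

    Builds an orthonormal Krylov basis by modified Gram-Schmidt and solves
    min ||beta*e1 - H_k y|| at each step, returning x = Q_k y.
    """
    n = b.shape[0]
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for k in range(m):
        w = A @ Q[:, k]
        for j in range(k + 1):            # modified Gram-Schmidt
            H[j, k] = Q[:, j] @ w
            w = w - H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(w)
        # small least squares problem over the current Krylov subspace
        e1 = np.zeros(k + 2)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)
        res = np.linalg.norm(e1 - H[:k + 2, :k + 1] @ y)
        if H[k + 1, k] < tol * beta or res < tol * beta:
            return Q[:, :k + 1] @ y       # happy breakdown or converged
        Q[:, k + 1] = w / H[k + 1, k]
    return Q[:, :m] @ y
```

In exact arithmetic full GMRES on an n-by-n system terminates within n steps; the backward stability result above concerns what happens in floating point.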

Given an n by n nonsingular matrix A and an n-vector v, we consider the spaces of the form AK_k(A, v). It is shown that any such sequence of spaces can be generated by a unitary matrix. If zero is outside the field of values of A, then there is a Hermitian positive definite matrix that generates the same spaces, and, moreover, if A is close to Hermitian then there…

For the finite volume discretization of a second-order elliptic model problem, we derive a posteriori error estimates which take into account an inexact solution of the associated linear algebraic system. We show that the algebraic error can be bounded by constructing an equilibrated Raviart–Thomas–Nédélec discrete vector field whose divergence is given by…

Minimum residual norm iterative methods for solving linear systems Ax = b can be viewed as, and are often implemented as, sequences of least squares problems involving Krylov subspaces of increasing dimensions. The minimum residual method (MINRES) [C. C. Paige and M. A. Saunders, SIAM J. Numer. Anal., 12 (1975)]…

The standard approaches to solving overdetermined linear systems Bx ≈ c construct minimal corrections to the data to make the corrected system compatible. In ordinary least squares (LS) the correction is restricted to the right-hand side c, while in scaled total least squares (STLS) [14, 12] corrections to both c and B are allowed, and their relative sizes…

In this talk I will discuss necessary and sufficient conditions on a nonsingular matrix A, such that for any initial vector r_0, an orthogonal basis of the Krylov subspaces K_n(A, r_0) is generated by a short recurrence. Orthogonality here is meant with respect to some unspecified positive definite inner product. This question is closely related to the…
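The best-known case of such a short recurrence is a Hermitian A with the Euclidean inner product: the Arnoldi projection of A onto the Krylov basis is then tridiagonal, so each new basis vector couples to only two predecessors (the Lanczos three-term recurrence). A small numerical check of that fact (my own illustration, using the standard inner product):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((8, 8))
A = M + M.T                       # real symmetric (Hermitian) test matrix
r0 = rng.standard_normal(8)

# Arnoldi with modified Gram-Schmidt: orthonormal basis of K_n(A, r0)
n = 6
Q = np.zeros((8, n + 1))
H = np.zeros((n + 1, n))
Q[:, 0] = r0 / np.linalg.norm(r0)
for k in range(n):
    w = A @ Q[:, k]
    for j in range(k + 1):
        H[j, k] = Q[:, j] @ w
        w = w - H[j, k] * Q[:, j]
    H[k + 1, k] = np.linalg.norm(w)
    Q[:, k + 1] = w / H[k + 1, k]

# For Hermitian A the projected matrix H is tridiagonal up to rounding,
# i.e., the orthogonal basis satisfies a three-term (short) recurrence.
upper = np.triu(H[:n, :n], 2)     # entries two or more above the diagonal
print(np.max(np.abs(upper)))      # negligibly small
```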

The standard approaches to solving overdetermined linear systems Ax ≈ b construct minimal corrections to the vector b and/or the matrix A such that the corrected system is compatible. In ordinary least squares (LS) the correction is restricted to b, while in data least squares (DLS) it is restricted to A. In scaled total least squares (Scaled TLS) [15],…

The aim of the paper is to compile and compare basic theoretical facts on Krylov subspaces and block Krylov subspaces. Many Krylov (sub)space methods for solving a linear system Ax = b have the property that in exact arithmetic the true solution is found after ν iterations, where ν is the dimension of the largest Krylov subspace generated by A from…
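The dimension ν (the grade of b with respect to A) is where the rank of the Krylov matrix [b, Ab, A²b, …] stalls, and the exact solution of Ax = b then lies in K_ν(A, b). A tiny numpy demonstration with an assumed example matrix whose minimal polynomial has degree two:

```python
import numpy as np

# Eigenvalues {1, 2}: minimal polynomial (t-1)(t-2), so nu <= 2 for any b.
A = np.diag([1.0, 1.0, 1.0, 2.0, 2.0])
b = np.ones(5)

# Krylov matrix [b, Ab, A^2 b, ...]; its rank stalls at the grade nu.
K = np.column_stack([np.linalg.matrix_power(A, j) @ b for j in range(5)])
nu = np.linalg.matrix_rank(K)
print(nu)                          # 2

# The solution of Ax = b lies in K_nu(A, b): solve in that small basis.
y, *_ = np.linalg.lstsq(A @ K[:, :nu], b, rcond=None)
x = K[:, :nu] @ y
print(np.allclose(A @ x, b))       # True
```

So a Krylov method that minimizes over K_k(A, b) finds the true solution at k = ν = 2, not after the full 5 steps.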