The Conjugate Gradient Method for Linear and Nonlinear Operator Equations

  • James W. Daniel
  • Published 1 March 1967
  • Mathematics
  • SIAM Journal on Numerical Analysis

Note sur la convergence de méthodes de directions conjuguées


A unified convergence bound for conjugate gradient and accelerated gradient

This analysis provides the first direct proof in the literature of the convergence rate of the linear conjugate gradient method, based on a potential similar to the one in Nesterov's original analysis.
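The linear conjugate gradient iteration whose convergence rate this entry analyzes can be sketched in a few lines. The following is a minimal pure-Python illustration for a small symmetric positive-definite system (the function name and test matrix are mine, not from any of the cited works):

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Solve A x = b for symmetric positive-definite A (lists of lists)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual r = b - A x, with x = 0
    p = r[:]                      # first search direction is the residual
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new ** 0.5 < tol:   # stop once the residual is small
            break
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

x = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```

In exact arithmetic the method terminates on an n-by-n system in at most n steps (here two); the convergence-rate bounds discussed above govern how fast the error shrinks before that.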

Optimization by General-Purpose Methods

  • Computer Science
  • 2015

Gradient method in Sobolev spaces for nonlocal boundary-value problems

An infinite-dimensional gradient method is proposed for the numerical solution of nonlocal quasilinear boundary-value problems. The iteration is executed for the boundary-value problem itself (i.e. on the …

A conjugate gradient approach to nonlinear elliptic boundary value problems in irregular regions

A version of the conjugate gradient method is proposed for solving discrete approximations to nonlinear elliptic boundary value problems over irregular regions, with the Poisson equations being solved by the Buneman algorithm after some preprocessing.

Range-Doppler Imaging via One-Bit PMCW Radar

This work formulates the range-Doppler estimation problem as a sparse signal recovery problem and adopts the majorization-minimization (MM) approach to solve it efficiently, using a log-sum penalty to obtain the sparse solution.

A new constrained optimization model for solving the nonsymmetric stochastic inverse eigenvalue problem

It is proved that the proposed constrained optimization model on the manifold of so-called isospectral matrices has a minimizer, and it is shown how the Polak-Ribière-Polyak conjugate gradient method works on the corresponding more general manifold.
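For readers unfamiliar with the update used there, here is a toy Polak-Ribière-Polyak nonlinear CG in plain Euclidean space, a sketch only: the cited work transports these quantities on a manifold of isospectral matrices, while this version assumes an ordinary Armijo backtracking line search and the common "PRP+" clipping, and the objective is an invented example:

```python
def prp_cg(f, grad, x0, iters=200, gtol=1e-10):
    """Minimize f by nonlinear CG with the Polak-Ribiere-Polyak beta."""
    x = list(x0)
    g = grad(x)
    d = [-gi for gi in g]         # start along steepest descent
    for _ in range(iters):
        gnorm2 = sum(gi * gi for gi in g)
        if gnorm2 < gtol ** 2:
            break
        # Armijo backtracking line search along d
        t, fx = 1.0, f(x)
        slope = sum(gi * di for gi, di in zip(g, d))
        while f([xi + t * di for xi, di in zip(x, d)]) > fx + 1e-4 * t * slope:
            t *= 0.5
            if t < 1e-12:
                break
        x = [xi + t * di for xi, di in zip(x, d)]
        g_new = grad(x)
        # PRP beta, clipped at zero ("PRP+") so the method can restart
        beta = max(0.0, sum(gn * (gn - go) for gn, go in zip(g_new, g)) / gnorm2)
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x

# invented strictly convex objective with minimum at (1, 2)
f = lambda x: (x[0] - 1) ** 2 + 10 * (x[1] - 2) ** 2
grad = lambda x: [2 * (x[0] - 1), 20 * (x[1] - 2)]
xmin = prp_cg(f, grad, [0.0, 0.0])
```

The PRP+ clipping automatically resets the direction to steepest descent when the quadratic model breaks down, which is one reason this variant is popular in practice.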

Matrix-Based Algorithms for the Optimal Design of Variable Fractional Delay FIR Filters

This paper investigates the weighted least squares (WLS) and minimax design problems for variable fractional delay (VFD) FIR filters and proposes an efficient algorithm based on conjugate gradient techniques to compute the WLS solution.

On Krylov solutions to infinite-dimensional inverse linear problems

We discuss, in the context of inverse linear problems in Hilbert space, the notion of the associated infinite-dimensional Krylov subspace, and we produce necessary and sufficient conditions for the …



Über einige Methoden der Relaxationsrechnung

After a study of the gradient method, it is shown that relaxation methods are not necessarily successive approximations taking an infinite number of steps; rather, convergence can be sped up so that the desired result is reached in a finite number of steps.

On a successive transformation of probability distribution and its application to the analysis of the optimum gradient method

The conjecture stated by Forsythe and Motzkin, which served as the logical basis of an acceleration procedure for the optimum gradient method, is proved, and it is shown that when the matrix is ill-conditioned the convergence rate tends toward its worst possible value.
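The "optimum gradient method" analyzed here is steepest descent with the exact line step on a quadratic. A minimal sketch (my own illustration, with an invented diagonal test matrix, not Akaike's construction) makes the ill-conditioning effect easy to observe:

```python
def optimum_gradient(A, b, x, iters=2000):
    """Steepest descent for 0.5*x'Ax - b'x with the exact (optimum) step."""
    n = len(b)
    for _ in range(iters):
        # r = b - A x is the negative gradient, i.e. the descent direction
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        Ar = [sum(A[i][j] * r[j] for j in range(n)) for i in range(n)]
        rr = sum(ri * ri for ri in r)
        rAr = sum(r[i] * Ar[i] for i in range(n))
        if rAr == 0.0:
            break
        alpha = rr / rAr          # exact minimizer of f along r
        x = [x[i] + alpha * r[i] for i in range(n)]
    return x

# condition number kappa = 100; exact solution is (1, 1)
x = optimum_gradient([[100.0, 0.0], [0.0, 1.0]], [100.0, 1.0], [0.0, 0.0])
```

With kappa = 100 the worst-case per-step error factor approaches (kappa - 1)/(kappa + 1), roughly 0.98, which is why so many iterations are needed; conjugate gradient on the same two-dimensional system would terminate in two steps.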

Direct and iterative methods for the solution of linear operator equations in Hilbert space

Introduction. During the last two decades the problem of justifying existing methods and finding new ones for the solution of the equation Lu = f, where f is a given vector in Hilbert space and L is …

An iteration method for the solution of the eigenvalue problem of linear differential and integral operators

The present investigation designs a systematic method for finding the latent roots and the principal axes of a matrix, without reducing the order of the matrix. It is characterized by a wide field of …