Convergence Analysis for Anderson Acceleration

Alexander Raymond Toth and C. T. Kelley, SIAM J. Numer. Anal.
Anderson($m$) is a method for accelerating fixed-point iteration that stores $m+1$ prior evaluations of the fixed-point map and computes the new iterate as a linear combination of those evaluations. Anderson(0) is fixed-point iteration. In this paper we show that Anderson($m$) is locally r-linearly convergent if the fixed-point map is a contraction and the coefficients in the linear combination remain bounded. Without assumptions on the coefficients, we prove q-linear convergence of…
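The Anderson($m$) update described above can be sketched in a few lines. This is a minimal illustrative implementation in the common difference form of the constrained least-squares problem (the function name `anderson` and its defaults are choices of this sketch, not code from the paper):

```python
import numpy as np

def anderson(g, x0, m=3, tol=1e-10, maxit=50):
    """Anderson(m) acceleration of the fixed-point iteration x <- g(x).

    Stores up to m+1 prior evaluations of g and forms the next iterate
    as a linear combination of them, with coefficients (summing to 1)
    chosen to minimize the combined residual f(x) = g(x) - x.  The
    constrained least-squares problem is solved in unconstrained
    difference form.  With m = 0 this reduces to plain fixed-point
    iteration.
    """
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    G, F = [], []                      # histories of g(x_j) and f_j
    for k in range(maxit):
        gx = g(x)
        f = gx - x
        G.append(gx); F.append(f)
        if len(G) > m + 1:             # truncate to the last m+1 entries
            G.pop(0); F.pop(0)
        if np.linalg.norm(f) < tol:
            return x, k
        if len(F) == 1:
            x = gx                     # no history yet: plain step
        else:
            dF = np.diff(np.column_stack(F), axis=1)
            dG = np.diff(np.column_stack(G), axis=1)
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = gx - dG @ gamma        # combined (accelerated) iterate
    return x, maxit
```

For example, `anderson(np.cos, [1.0])` reaches the fixed point of cos near 0.739 in far fewer iterations than plain fixed-point iteration needs for the same tolerance.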

Anderson Acceleration for Nonsmooth Fixed Point Problems

We give new convergence results for Anderson acceleration applied to the composite max fixed-point problem. We prove that Anderson(1) and EDIIS(1) are q-linearly convergent with a smaller q-factor than…

On the Asymptotic Linear Convergence Speed of Anderson Acceleration, Nesterov Acceleration, and Nonlinear GMRES

This work considers nonlinear convergence acceleration methods for fixed-point iteration, namely Anderson acceleration (AA), nonlinear GMRES, and Nesterov-type acceleration, and determines coefficients that result in optimal asymptotic convergence factors, given knowledge of the spectrum of $q'(x)$ at the fixed point.

Anderson Acceleration as a Krylov Method with Application to Asymptotic Convergence Analysis

This work finds that the AA(m) residual polynomials observe a periodic memory effect where increasing powers of the error iteration matrix M act on the initial residual as the iteration number increases, and derives several further results based on these polynomial residual update formulas.

Linear Asymptotic Convergence of Anderson Acceleration: Fixed-Point Analysis

The asymptotic convergence of AA($m$), i.e., Anderson acceleration with window size $m$ for accelerating fixed-point methods $x_{k+1} = q(x_k)$, $x_k \in \mathbb{R}^n$, is studied, and it is shown that, despite the discontinuity of $\beta(z)$, the iteration function $\Psi(z)$ is Lipschitz continuous and directionally differentiable at $z^*$ for AA(1).

A Proof That Anderson Acceleration Improves the Convergence Rate in Linearly Converging Fixed-Point Methods (But Not in Those Converging Quadratically)

This paper provides the first proof that Anderson acceleration (AA) improves the convergence rate of general fixed point iterations to first order by a factor of the gain at each step.

Newton-Anderson at Singular Points

In this paper we develop convergence and acceleration theory for Anderson acceleration applied to Newton's method for nonlinear systems in which the Jacobian is singular at a solution. For these…

Globally Convergent Type-I Anderson Acceleration for Nonsmooth Fixed-Point Iterations

This work proposes the first globally convergent variant of Anderson acceleration assuming only that the fixed-point iteration is non-expansive, and shows by extensive numerical experiments that many first order algorithms can be improved, especially in their terminal convergence, with the proposed algorithm.

Anderson Acceleration of Proximal Gradient Methods

This work introduces novel methods for adapting Anderson acceleration to (non-smooth and constrained) proximal gradient algorithms and proposes a simple scheme for stabilization that combines the global worst-case guarantees of proximal gradient methods with the local adaptation and practical speed-up of Anderson acceleration.

Convergence analysis of Anderson-type acceleration of Richardson's iteration

Sufficient conditions are established for Anderson extrapolation to accelerate the (stationary) Richardson iterative method for sparse linear systems, and an augmented version of this technique is proposed.
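For context, the stationary Richardson iteration being accelerated is itself a simple fixed-point map $x \leftarrow x + \omega(b - Ax)$. A minimal sketch, assuming a symmetric positive definite $A$ and a damping parameter $\omega < 2/\lambda_{\max}$ so that the iteration is a contraction (the function name is illustrative):

```python
import numpy as np

def richardson(A, b, x0, omega, tol=1e-10, maxit=500):
    """Stationary Richardson iteration x <- x + omega * (b - A @ x).

    Converges when the iteration matrix I - omega*A has spectral radius
    below 1, e.g. for SPD A with 0 < omega < 2 / lambda_max(A).
    """
    x = np.asarray(x0, dtype=float).copy()
    for k in range(maxit):
        r = b - A @ x                  # residual of the linear system
        if np.linalg.norm(r) < tol:
            return x, k
        x = x + omega * r              # damped correction step
    return x, maxit
```

Anderson extrapolation is then applied to the map $g(x) = x + \omega(b - Ax)$, whose fixed point is the solution of $Ax = b$.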

Anderson Acceleration for Fixed-Point Iterations

It is shown that, on linear problems, Anderson acceleration without truncation is “essentially equivalent” in a certain sense to the generalized minimal residual (GMRES) method and the Type 1 variant in the Fang-Saad Anderson family is similarly essentially equivalent to the Arnoldi (full orthogonalization) method.
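This equivalence can be checked numerically: on a linear problem $g(x) = Mx + b$, untruncated AA satisfies $x_k^{AA} = g(x_{k-1}^{GMRES})$, where GMRES is applied to $(I - M)x = b$ from the same starting point. A small self-contained sketch (the diagonal test matrix and the function names are illustrative assumptions, not code from the paper):

```python
import numpy as np

def gmres_iterates(A, b, x0, kmax):
    """Plain GMRES via modified Gram-Schmidt Arnoldi; returns x_1..x_kmax."""
    n = len(b)
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    Q = np.zeros((n, kmax + 1)); H = np.zeros((kmax + 1, kmax))
    Q[:, 0] = r0 / beta
    xs = []
    for k in range(kmax):
        v = A @ Q[:, k]
        for j in range(k + 1):          # orthogonalize against the basis
            H[j, k] = Q[:, j] @ v
            v -= H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(v)
        if H[k + 1, k] > 1e-14:
            Q[:, k + 1] = v / H[k + 1, k]
        e1 = np.zeros(k + 2); e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)
        xs.append(x0 + Q[:, :k + 1] @ y)
    return xs

def aa_untruncated_iterates(g, x0, kmax):
    """Anderson acceleration with no truncation; returns x_1..x_kmax."""
    gx0 = g(x0)
    G, F = [gx0], [gx0 - x0]           # full histories of g(x_j) and f_j
    xs = []
    for k in range(kmax):
        if len(F) == 1:
            x = G[0]                    # first step is a plain g-step
        else:
            dF = np.diff(np.column_stack(F), axis=1)
            dG = np.diff(np.column_stack(G), axis=1)
            gamma, *_ = np.linalg.lstsq(dF, F[-1], rcond=None)
            x = G[-1] - dG @ gamma
        xs.append(x)
        gx = g(x)
        G.append(gx); F.append(gx - x)
    return xs
```

For a contractive linear map $g(x) = Mx + b$, comparing `aa_untruncated_iterates(g, x0, k)` with $g$ applied to the GMRES iterates for $(I - M)x = b$ illustrates the stated equivalence, provided the GMRES residual norms strictly decrease.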

Nonlinear Krylov and moving nodes in the method of lines

Globally Convergent Inexact Newton Methods

The primary goal is to introduce and analyze new inexact Newton methods, but consideration is also given to “global convergence” features designed to improve convergence from arbitrary starting points.
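In an inexact Newton method the Newton system $J(x)s = -F(x)$ is solved only approximately, subject to a forcing condition $\|F(x) + J(x)s\| \le \eta \|F(x)\|$. A minimal sketch (the minimal-residual inner iteration and the cubic test problem are illustrative choices of this sketch; the inner step assumes the symmetric part of $J$ is positive definite):

```python
import numpy as np

def inexact_newton(F, J, x0, eta=0.1, tol=1e-8, maxit=50, inner_max=200):
    """Newton's method with an inexact inner linear solve.

    Each outer step solves J(x) s = -F(x) only until the forcing
    condition ||F(x) + J(x) s|| <= eta * ||F(x)|| holds, here via a
    simple minimal-residual iteration (valid when the symmetric part
    of J(x) is positive definite).
    """
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(maxit):
        Fx = F(x)
        nF = np.linalg.norm(Fx)
        if nF < tol:
            return x
        A = J(x)
        s = np.zeros_like(x)
        r = -Fx - A @ s                # residual of the Newton system
        inner = 0
        while np.linalg.norm(r) > eta * nF and inner < inner_max:
            Ar = A @ r
            alpha = (r @ Ar) / (Ar @ Ar)   # minimal-residual step length
            s += alpha * r
            r = -Fx - A @ s
            inner += 1
        x = x + s                      # accept the inexact Newton step
    return x
```

Larger $\eta$ gives cheaper but cruder steps; Eisenstat–Walker-type rules adapt $\eta$ per iteration to recover fast local convergence.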

Elliptic Preconditioner for Accelerating the Self-Consistent Field Iteration in Kohn-Sham Density Functional Theory

The elliptic preconditioner is shown to be more effective in accelerating the convergence of a fixed point iteration than the existing approaches for large inhomogeneous systems at low temperature.

Inexact Newton Methods for Singular Problems

In this paper we describe the effects of an inexact implementation of Newton's method on the behavior of the iteration for certain nonlinear equations in Banach space for which the Fréchet derivative…

An analysis for the DIIS acceleration method used in quantum chemistry calculations

This work presents an analysis of the DIIS acceleration technique, which is standard in most major quantum chemistry codes (e.g., in DFT and Hartree–Fock calculations and in the Coupled Cluster method), and shows that in the general nonlinear case DIIS corresponds to a projected quasi-Newton/secant method.

A Reflective Newton Method for Minimizing a Quadratic Function Subject to Bounds on Some of the Variables

A new algorithm, a reflective Newton method, is presented for minimizing a quadratic function of many variables subject to upper and lower bounds on some of the variables; it appears to have significant practical potential for large-scale problems.