• Corpus ID: 238215552

# Anderson Acceleration as a Krylov Method with Application to Asymptotic Convergence Analysis

```bibtex
@article{Sterck2021AndersonAA,
  title={Anderson Acceleration as a Krylov Method with Application to Asymptotic Convergence Analysis},
  author={Hans De Sterck and Yunhui He},
  journal={ArXiv},
  year={2021},
  volume={abs/2109.14181}
}
```
• Published 29 September 2021
• Mathematics, Computer Science
• ArXiv
Anderson acceleration (AA) is widely used for accelerating the convergence of nonlinear fixed-point methods $x_{k+1}=q(x_{k})$, $x_k \in \mathbb{R}^n$, but little is known about how to quantify the convergence acceleration provided by AA. As a roadway towards gaining more understanding of convergence acceleration by AA, we study AA($m$), i.e., Anderson acceleration with finite window size $m$, applied to the case of linear fixed-point iterations $x_{k+1}=M x_{k}+b$. We write AA($m$) as a Krylov…
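The abstract studies AA($m$) applied to the linear fixed-point iteration $x_{k+1}=Mx_k+b$. As an illustrative sketch only (not the paper's code), a minimal NumPy implementation of windowed Anderson acceleration in the standard least-squares form might look as follows; the function name `anderson_accelerate` and all parameter choices are assumptions for illustration:

```python
import numpy as np

def anderson_accelerate(M, b, x0, m=2, iters=30):
    """Sketch of AA(m) for the linear fixed-point map q(x) = M x + b.

    Uses the common difference parametrization: minimize
    ||r_k - dR @ gamma||_2 over gamma, then set
    x_{k+1} = q(x_k) - dQ @ gamma, where dR and dQ hold
    differences of successive residuals and of q-values.
    """
    q = lambda x: M @ x + b
    X = [np.asarray(x0, dtype=float)]   # iterate history
    F = [q(X[0]) - X[0]]                # residual history r_k = q(x_k) - x_k
    for _ in range(iters):
        mk = min(m, len(X) - 1)         # effective window size this step
        if mk == 0:
            x_next = q(X[-1])           # plain fixed-point step to start
        else:
            R = np.column_stack(F[-(mk + 1):])        # n x (mk+1) residuals
            dR = R[:, 1:] - R[:, :-1]                 # residual differences
            gamma, *_ = np.linalg.lstsq(dR, F[-1], rcond=None)
            Q = np.column_stack([q(x) for x in X[-(mk + 1):]])
            dQ = Q[:, 1:] - Q[:, :-1]                 # q-value differences
            x_next = q(X[-1]) - dQ @ gamma
        X.append(x_next)
        F.append(q(x_next) - x_next)
    return X[-1]
```

For a contractive $M$ (spectral radius below 1), the iterate returned after a few dozen steps agrees with the fixed point $x^* = (I-M)^{-1}b$ to machine precision; the window size `m` controls how much residual history enters each least-squares solve.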
## 1 Citation


• Mathematics
SIAM Journal on Matrix Analysis and Applications
• 2022
The asymptotic convergence of AA($m$), i.e., Anderson acceleration with window size $m$ for accelerating fixed-point methods $x_{k+1} = q(x_k)$, $x_k \in \mathbb{R}^n$, is studied, and it is shown that, despite the discontinuity of $\beta(z)$, the iteration function $\Psi(z)$ is Lipschitz continuous and directionally differentiable at $z^*$ for AA(1).

## References

Showing 1–10 of 34 references

• Mathematics
SIAM Journal on Matrix Analysis and Applications
• 2022
The asymptotic convergence of AA($m$), i.e., Anderson acceleration with window size $m$ for accelerating fixed-point methods $x_{k+1} = q(x_k)$, $x_k \in \mathbb{R}^n$, is studied, and it is shown that, despite the discontinuity of $\beta(z)$, the iteration function $\Psi(z)$ is Lipschitz continuous and directionally differentiable at $z^*$ for AA(1).
• Mathematics
SIAM J. Numer. Anal.
• 2015
This paper shows that Anderson acceleration is locally r-linearly convergent if the fixed-point map is a contraction and the coefficients in the linear combination remain bounded, and proves q-linear convergence of Anderson(1) and, in the case of linear problems, Anderson($m$).
• Mathematics
SIAM J. Numer. Anal.
• 2020
This paper provides the first proof that Anderson acceleration (AA) improves the convergence rate of general fixed-point iterations to first order by a factor of the gain at each step.
• Mathematics
SIAM J. Numer. Anal.
• 2011
It is shown that, on linear problems, Anderson acceleration without truncation is "essentially equivalent" in a certain sense to the generalized minimal residual (GMRES) method, and that the Type 1 variant in the Fang–Saad Anderson family is similarly essentially equivalent to the Arnoldi (full orthogonalization) method.
• Computer Science, Mathematics
Journal of Scientific Computing
• 2021
This paper explains and quantifies an improvement in linear asymptotic convergence speed for the special case of a stationary version of AA applied to ADMM. By considering the spectral properties of the Jacobians of ADMM and of the stationary version of AA evaluated at the fixed point, it identifies the optimal linear convergence factors of this stationary AA-ADMM method.
• Computer Science, Mathematics
PPSC
• 2022
This work introduces three low synchronization orthogonalization algorithms into AA within SUNDIALS that reduce the total number of global reductions per iteration to a constant of 2 or 3, independent of the size of the iteration space.
• Computer Science
Journal of Computational and Graphical Statistics
• 2019
A new class of acceleration schemes is described that builds on the Anderson acceleration technique for speeding up fixed-point iterations; these schemes greatly accelerate the convergence of EM algorithms and scale automatically to high-dimensional settings.