Anderson Acceleration as a Krylov Method with Application to Asymptotic Convergence Analysis
@article{Sterck2021AndersonAA,
  title   = {Anderson Acceleration as a Krylov Method with Application to Asymptotic Convergence Analysis},
  author  = {Hans De Sterck and Yunhui He},
  journal = {ArXiv},
  year    = {2021},
  volume  = {abs/2109.14181}
}
Anderson acceleration (AA) is widely used for accelerating the convergence of nonlinear fixed-point methods $x_{k+1}=q(x_{k})$, $x_k \in \mathbb{R}^n$, but little is known about how to quantify the convergence acceleration it provides. As a step toward a better understanding of this acceleration, we study AA($m$), i.e., Anderson acceleration with finite window size $m$, applied to the case of linear fixed-point iterations $x_{k+1}=M x_{k}+b$. We write AA($m$) as a Krylov…
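To make the setting concrete, here is a minimal NumPy sketch of AA($m$) for a linear fixed-point iteration $x_{k+1}=Mx_k+b$, using the standard least-squares formulation with weights summing to one. This is an illustration under my own assumptions (the function name, test matrix, and window handling are not from the paper), not the authors' code.

```python
import numpy as np

def anderson_acceleration(M, b, x0, m=2, iters=50):
    """Illustrative AA(m) for the linear iteration x_{k+1} = M x_k + b.

    At step k the next iterate is a linear combination of the last
    min(m, k) + 1 fixed-point updates q(x_j) = M x_j + b, with weights
    beta (summing to 1) chosen to minimize the norm of the combined
    residual, where r_j = q(x_j) - x_j.
    """
    q = lambda x: M @ x + b
    xs = [np.asarray(x0, dtype=float)]
    rs = [q(xs[0]) - xs[0]]
    res_norms = [np.linalg.norm(rs[0])]
    for k in range(iters):
        mk = min(m, k)                          # effective window size at step k
        if mk == 0:
            x_next = q(xs[-1])                  # plain fixed-point step
        else:
            R = np.column_stack(rs[-(mk + 1):])
            Q = np.column_stack([q(x) for x in xs[-(mk + 1):]])
            # Enforce sum(beta) = 1 by eliminating the last weight:
            # minimize || r_k + D c ||  with  D[:, j] = r_j - r_k.
            D = R[:, :-1] - R[:, [-1]]
            c, *_ = np.linalg.lstsq(D, -R[:, -1], rcond=None)
            beta = np.append(c, 1.0 - c.sum())
            x_next = Q @ beta                   # x_{k+1} = sum_j beta_j q(x_j)
        xs.append(x_next)
        rs.append(q(x_next) - x_next)
        res_norms.append(np.linalg.norm(rs[-1]))
    return xs[-1], res_norms

# Toy run on a contractive linear iteration (illustrative test matrix).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 20))
M = 0.9 * A / np.abs(np.linalg.eigvals(A)).max()  # scale to spectral radius 0.9
b = rng.standard_normal(20)
_, hist = anderson_acceleration(M, b, np.zeros(20), m=2, iters=30)
print(hist[::5])  # residual norms; AA(2) should beat the plain 0.9**k decay
```

Eliminating the last weight is one common way to handle the $\sum_j \beta_j = 1$ constraint; implementations often use an equivalent difference formulation instead.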
One Citation
Linear Asymptotic Convergence of Anderson Acceleration: Fixed-Point Analysis
- Mathematics, SIAM Journal on Matrix Analysis and Applications, 2022
The asymptotic convergence of AA($m$), i.e., Anderson acceleration with window size $m$ for accelerating fixed-point methods $x_{k+1}=q(x_k)$, $x_k \in \mathbb{R}^n$, is studied, and it is shown that, despite the discontinuity of $\beta(z)$, the iteration function $\Psi(z)$ is Lipschitz continuous and directionally differentiable at $z^*$ for AA(1).
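For context, AA(1) in its standard form (the notation here is the common one and an assumption on my part, not a quotation from that paper) combines the two most recent fixed-point updates,
$$x_{k+1} = (1-\beta_k)\, q(x_k) + \beta_k\, q(x_{k-1}), \qquad r_j = q(x_j) - x_j,$$
with $\beta_k$ minimizing the norm of the combined residual, which gives
$$\beta_k = \operatorname*{arg\,min}_{\beta \in \mathbb{R}} \bigl\|(1-\beta)\, r_k + \beta\, r_{k-1}\bigr\|_2 = \frac{r_k^{\top}(r_k - r_{k-1})}{\|r_k - r_{k-1}\|_2^{2}}.$$
Note that $\beta_k$ is undefined where $r_k = r_{k-1}$, and near a fixed point its value depends on the direction of approach, which is consistent with the discontinuity of $\beta(z)$ noted above.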
References
Showing 1-10 of 34 references
Linear Asymptotic Convergence of Anderson Acceleration: Fixed-Point Analysis
- Mathematics, SIAM Journal on Matrix Analysis and Applications, 2022
The asymptotic convergence of AA($m$), i.e., Anderson acceleration with window size $m$ for accelerating fixed-point methods $x_{k+1}=q(x_k)$, $x_k \in \mathbb{R}^n$, is studied, and it is shown that, despite the discontinuity of $\beta(z)$, the iteration function $\Psi(z)$ is Lipschitz continuous and directionally differentiable at $z^*$ for AA(1).
Convergence Analysis for Anderson Acceleration
- Mathematics, SIAM Journal on Numerical Analysis, 2015
This paper shows that Anderson acceleration is locally r-linearly convergent if the fixed-point map is a contraction and the coefficients in the linear combination remain bounded, and it proves q-linear convergence of Anderson(1) and, for linear problems, of Anderson($m$).
A Proof That Anderson Acceleration Improves the Convergence Rate in Linearly Converging Fixed-Point Methods (But Not in Those Converging Quadratically)
- Mathematics, SIAM Journal on Numerical Analysis, 2020
This paper provides the first proof that Anderson acceleration (AA) improves the convergence rate of general fixed-point iterations: to first order, the contraction at each step is improved by a factor equal to the gain of the AA optimization problem.
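Schematically (my paraphrase of that result, with damping and higher-order terms omitted): if $w_k = q(x_k) - x_k$ is the fixed-point residual, $\kappa$ the contraction factor of $q$, and $\theta_k \le 1$ the gain of the AA least-squares problem, then
$$\|w_{k+1}\| \;\lesssim\; \theta_k\, \kappa\, \|w_k\|, \qquad \theta_k = \frac{\bigl\|\sum_{i} \beta_i^{(k)}\, w_{k-i}\bigr\|}{\|w_k\|},$$
so to first order AA cannot slow down a linearly converging iteration, and it helps exactly to the extent that $\theta_k < 1$.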
Anderson Acceleration for Fixed-Point Iterations
- Mathematics, SIAM Journal on Numerical Analysis, 2011
It is shown that, on linear problems, Anderson acceleration without truncation is “essentially equivalent” in a certain sense to the generalized minimal residual (GMRES) method and the Type 1 variant in the Fang-Saad Anderson family is similarly essentially equivalent to the Arnoldi (full orthogonalization) method.
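Concretely, for linear $q(x) = Mx + b$ the fixed-point problem is the linear system $(I-M)x = b$, and the Walker-Ni equivalence (stated informally here from memory, assuming GMRES applied to $(I-M)x = b$ from the same $x_0$ does not stagnate) is that the untruncated AA iterates satisfy
$$x^{\mathrm{AA}}_{k+1} = q\bigl(x^{\mathrm{GMRES}}_{k}\bigr) = M\, x^{\mathrm{GMRES}}_{k} + b.$$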
Anderson acceleration and application to the three-temperature energy equations
- Physics, Journal of Computational Physics, 2017
On the Asymptotic Linear Convergence Speed of Anderson Acceleration Applied to ADMM
- Computer Science, Mathematics, Journal of Scientific Computing, 2021
By considering the spectral properties of the Jacobians of ADMM and of a stationary version of AA evaluated at the fixed point, this paper explains and quantifies the improvement in linear asymptotic convergence speed for stationary AA applied to ADMM, and it gives the optimal linear convergence factors of this stationary AA-ADMM method.
Performance of Low Synchronization Orthogonalization Methods in Anderson Accelerated Fixed Point Solvers
- Computer Science, Mathematics, PPSC, 2022
This work introduces three low-synchronization orthogonalization algorithms into AA within SUNDIALS, reducing the total number of global reductions per iteration to a constant of 2 or 3, independent of the size of the iteration space.
Damped Anderson Acceleration With Restarts and Monotonicity Control for Accelerating EM and EM-like Algorithms
- Computer Science, Journal of Computational and Graphical Statistics, 2019
A new class of acceleration schemes is described that builds on the Anderson acceleration technique for speeding up fixed-point iterations; the schemes are effective at greatly accelerating the convergence of EM algorithms and scale automatically to high-dimensional settings.