Corpus ID: 239009661

Halpern-Type Accelerated and Splitting Algorithms For Monotone Inclusions

@inproceedings{TranDinh2021HalpernTypeAA,
  title={Halpern-Type Accelerated and Splitting Algorithms For Monotone Inclusions},
  author={Quoc Tran-Dinh and Yang Luo},
  year={2021}
}
In this paper, we develop a new class of accelerated algorithms for solving certain classes of maximally monotone equations as well as monotone inclusions. Instead of using Nesterov's acceleration approach, our methods rely on the so-called Halpern-type fixed-point iteration of [32], which has recently been exploited by a number of researchers, including [24, 70]. First, we derive a new variant of the anchored extra-gradient scheme of [70] based on Popov's past extra-gradient method to solve a maximally monotone…
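As background (not stated in the abstract above): the classical Halpern fixed-point iteration for a nonexpansive map T with anchor point x_0 takes the form below. The weight schedule beta_k = 1/(k+2) is a standard choice from the fixed-point literature; the paper's variants may use a different schedule.

```latex
x_{k+1} \;=\; \beta_k\, x_0 \;+\; \bigl(1 - \beta_k\bigr)\, T(x_k),
\qquad \beta_k \in (0,1), \quad \text{e.g. } \beta_k = \tfrac{1}{k+2}.
```

The anchoring term \beta_k x_0, which pulls every iterate back toward the starting point, is what distinguishes this scheme from Nesterov-style momentum.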

References

Showing 1-10 of 79 references
Fast convergence of generalized forward-backward algorithms for structured monotone inclusions
In this paper, we develop rapidly convergent forward-backward algorithms for computing zeros of the sum of finitely many maximally monotone operators. A modification of the classical…
A family of projective splitting methods for the sum of two maximal monotone operators
The projective algorithms converge under more general conditions than prior splitting methods, allowing the proximal parameter to vary from iteration to iteration, and even from operator to operator, while retaining convergence for essentially arbitrary pairs of operators.
Inertial Douglas-Rachford splitting for monotone inclusion problems
An inertial Douglas-Rachford splitting algorithm for finding the set of zeros of the sum of two maximally monotone operators in Hilbert spaces is proposed and its convergence properties are investigated.
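As a rough sketch of the kind of iteration this reference studies, here is a minimal inertial Douglas-Rachford step in Python; the resolvents `res_A` and `res_B` and the inertial weight `alpha` are illustrative placeholders, not the admissibility conditions derived in the paper.

```python
def inertial_douglas_rachford(res_A, res_B, z0, n_iters=500, alpha=0.3):
    """Inertial Douglas-Rachford iteration (illustrative sketch).

    res_A and res_B are the resolvents of the two maximally monotone
    operators (with a fixed stepsize folded into them); alpha is a
    placeholder inertial weight. z0 is a NumPy array.
    """
    z_prev = z0.copy()
    z = z0.copy()
    for _ in range(n_iters):
        w = z + alpha * (z - z_prev)               # inertial extrapolation step
        x = res_A(w)                               # first resolvent (backward) step
        z_prev, z = z, w + res_B(2.0 * x - w) - x  # Douglas-Rachford update
    return res_A(z)  # approximates a zero of A + B under standard conditions
```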
An Inertial Forward-Backward Algorithm for Monotone Inclusions
An inertial forward-backward splitting algorithm is proposed to compute a zero of the sum of two monotone operators, one of which is co-coercive; the method is inspired by Nesterov's accelerated gradient method.
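For illustration, a minimal sketch of a generic inertial forward-backward step follows; `prox_g`, `grad_f`, the stepsize, and the inertial weight `k/(k+3)` are placeholders rather than the specific parameters and convergence conditions of the cited algorithm.

```python
def inertial_forward_backward(prox_g, grad_f, x0, step, n_iters=100):
    """Generic inertial forward-backward iteration (illustrative sketch).

    Targets 0 in grad_f(x) + dg(x), where grad_f is the co-coercive
    operator and prox_g(z, t) is the proximal map of g with parameter t.
    x0 is a NumPy array.
    """
    x_prev = x0.copy()
    x = x0.copy()
    for k in range(n_iters):
        alpha = k / (k + 3.0)         # placeholder inertial weight
        y = x + alpha * (x - x_prev)  # inertial extrapolation step
        x_prev, x = x, prox_g(y - step * grad_f(y), step)  # forward-backward step
    return x
```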
Halpern Iteration for Near-Optimal and Parameter-Free Monotone Inclusion and Strong Solutions to Variational Inequalities
This analysis is based on a novel and simple potential-based proof of convergence of Halpern iteration, a classical iteration for finding fixed points of nonexpansive maps, and provides a series of algorithmic reductions that highlight connections between different problem classes and lead to lower bounds that certify near-optimality of the studied methods.
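As a concrete illustration of the classical scheme analyzed in this reference, here is a minimal Python sketch of Halpern iteration with the standard weights beta_k = 1/(k+2); the map `T` and the iteration count are placeholders supplied by the caller.

```python
def halpern_iteration(T, x0, n_iters=1000):
    """Classical Halpern iteration for a nonexpansive map T (sketch).

    x0 is a NumPy array serving both as the starting point and as the
    anchor; beta_k = 1/(k+2) is the standard weight schedule, not a
    parameter taken from the cited paper's analysis.
    """
    x = x0.copy()
    for k in range(n_iters):
        beta = 1.0 / (k + 2)
        x = beta * x0 + (1.0 - beta) * T(x)  # pull the iterate back toward the anchor
    return x
```

For instance, taking T to be the resolvent of a maximally monotone operator makes the fixed points of T exactly the zeros of that operator, which is how such fixed-point schemes connect to monotone inclusions.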
Accelerated schemes for a class of variational inequalities
The main idea of the proposed algorithm is to incorporate a multi-step acceleration scheme into the stochastic mirror-prox method, which computes weak solutions with the optimal iteration complexity for stochastic variational inequalities (SVIs).
An Accelerated HPE-Type Algorithm for a Class of Composite Convex-Concave Saddle-Point Problems
Experimental results show that the new method outperforms Nesterov's smoothing technique, and a suitable choice of the latter stepsize yields a method with the best known (accelerated inner) iteration complexity for the aforementioned class of saddle-point problems.
Convergence Rate Analysis of Primal-Dual Splitting Schemes
  • Damek Davis, SIAM J. Optim., 2015
This paper introduces a unifying scheme and uses some abstract analysis of the algorithm to prove convergence rates of the proximal point algorithm, forward-backward splitting, Peaceman-Rachford splitting, and forward-backward-forward splitting applied to the model problem.
Projected Reflected Gradient Methods for Monotone Variational Inequalities
The projected reflected gradient algorithm with a constant stepsize is proposed; it requires only one projection onto the feasible set and only one evaluation of the mapping per iteration, and it achieves an R-linear rate of convergence under the strong monotonicity assumption.
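A minimal sketch of a projected reflected gradient step, matching the per-iteration cost described above (one projection and one evaluation of the mapping); `project`, `F`, and the constant stepsize are placeholders, and the stepsize must be small relative to the Lipschitz constant of `F`.

```python
def projected_reflected_gradient(F, project, x0, step, n_iters=1000):
    """Projected reflected gradient method (illustrative sketch).

    Each iteration uses one projection onto the feasible set and one
    evaluation of F; the reflection 2*x - x_prev avoids a second F call.
    x0 is a NumPy array.
    """
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(n_iters):
        reflected = 2.0 * x - x_prev  # reflection of the previous iterate
        x_prev, x = x, project(x - step * F(reflected))
    return x
```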
An accelerated non-Euclidean hybrid proximal extragradient-type algorithm for convex–concave saddle-point problems
An accelerated HPE-type method based on general Bregman distances is proposed for solving convex–concave saddle-point (SP) problems; it is superior to Nesterov's smoothing scheme and works for any constant choice of proximal stepsize.