Corpus ID: 239009661

Halpern-Type Accelerated and Splitting Algorithms For Monotone Inclusions

@inproceedings{TranDinh2021HalpernTypeAA,
  title={Halpern-Type Accelerated and Splitting Algorithms For Monotone Inclusions},
  author={Quoc Tran-Dinh and Yang Luo},
  year={2021}
}
In this paper, we develop a new class of accelerated algorithms for solving certain classes of maximally monotone equations as well as monotone inclusions. Instead of using Nesterov’s acceleration approach, our methods rely on the so-called Halpern-type fixed-point iteration introduced in [32] and recently exploited by a number of researchers, including [24, 70]. First, we derive a new variant of the anchored extra-gradient scheme of [70] based on Popov’s past extra-gradient method to solve a maximally monotone… 
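
For context, the classical Halpern fixed-point iteration from [32], for a nonexpansive operator T and an anchor point x^0, takes the form (a standard formulation stated here for reference, not quoted from the paper):

  x^{k+1} = \beta_k x^0 + (1 - \beta_k) T(x^k), \qquad \beta_k \in (0,1), \quad \beta_k \to 0,

with \beta_k = 1/(k+2) a common choice. The anchoring term \beta_k x^0 is what distinguishes this scheme from Krasnosel'skii–Mann-type averaging and underlies its accelerated residual guarantees.
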
Citations

The Connection Between Nesterov's Accelerated Methods and Halpern Fixed-Point Iterations
We derive a direct connection between Nesterov’s accelerated first-order algorithm and the Halpern fixed-point iteration scheme for approximating a solution of a co-coercive equation. We show that… 
A Stochastic Halpern Iteration with Variance Reduction for Stochastic Monotone Inclusion Problems
TLDR: Stochastic monotone inclusion problems, which appear widely in machine learning applications including robust regression and adversarial learning, are studied, and novel variants of stochastic Halpern iteration with recursive variance reduction are proposed.
Fast OGDA in continuous and discrete time
In the framework of real Hilbert spaces we study continuous-in-time dynamics as well as numerical algorithms for the problem of approaching the set of zeros of a single-valued monotone and continuous operator… 

References

SHOWING 1-10 OF 79 REFERENCES
Fast convergence of generalized forward-backward algorithms for structured monotone inclusions
In this paper, we develop rapidly convergent forward-backward algorithms for computing zeroes of the sum of finitely many maximally monotone operators. A modification of the classical… 
A family of projective splitting methods for the sum of two maximal monotone operators
TLDR: The projective algorithms converge under more general conditions than prior splitting methods, allowing the proximal parameter to vary from iteration to iteration, and even from operator to operator, while retaining convergence for essentially arbitrary pairs of operators.
Inertial Douglas-Rachford splitting for monotone inclusion problems
An Inertial Forward-Backward Algorithm for Monotone Inclusions
  • D. Lorenz, T. Pock · Journal of Mathematical Imaging and Vision · 2014
TLDR: An inertial forward-backward splitting algorithm is proposed to compute a zero of the sum of two monotone operators, one of which is co-coercive, inspired by the accelerated gradient method of Nesterov.
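
A minimal Python sketch of a generic inertial forward-backward step may help fix ideas (illustrative only: resolvent stands for J_{lam*A} = (I + lam*A)^{-1}, B for the co-coercive operator, and the inertial parameter alpha and stepsize lam are placeholder choices, not the parameter rules analyzed by Lorenz and Pock):

  def inertial_forward_backward(resolvent, B, x0, alpha=0.3, lam=1.0, iters=100):
      # Seeks x with 0 in A(x) + B(x), A maximally monotone, B co-coercive.
      x_prev, x = x0, x0
      for _ in range(iters):
          y = x + alpha * (x - x_prev)              # inertial extrapolation
          x_prev, x = x, resolvent(y - lam * B(y))  # forward step on B, backward step on A
      return x

For instance, when A is the subdifferential of an l1 penalty and B the gradient of a smooth loss, resolvent is soft-thresholding and the scheme reduces to an inertial ISTA-type method.
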
Halpern Iteration for Near-Optimal and Parameter-Free Monotone Inclusion and Strong Solutions to Variational Inequalities
TLDR: This analysis is based on a novel and simple potential-based proof of convergence of Halpern iteration, a classical iteration for finding fixed points of nonexpansive maps, and provides a series of algorithmic reductions that highlight connections between different problem classes and lead to lower bounds that certify near-optimality of the studied methods.
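
As a point of reference (a well-known rate rather than a claim quoted from this entry): for a nonexpansive map T with a fixed point, Halpern iteration with \beta_k = 1/(k+2) satisfies

  \|x^k - T(x^k)\| = O(1/k),

which matches the known lower bound for this problem class up to constant factors, hence "near-optimal".
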
Accelerated schemes for a class of variational inequalities
TLDR: The main idea of the proposed algorithm is to incorporate a multi-step acceleration scheme into the stochastic mirror-prox method, which computes weak solutions with the optimal iteration complexity for stochastic variational inequalities (SVIs).
An Accelerated HPE-Type Algorithm for a Class of Composite Convex-Concave Saddle-Point Problems
TLDR: Experimental results show that the new method outperforms Nesterov's smoothing technique, and a suitable choice of the latter stepsize yields a method with the best known (accelerated inner) iteration complexity for the aforementioned class of saddle-point problems.
Convergence Rate Analysis of Primal-Dual Splitting Schemes
  • Damek Davis · SIAM J. Optim. · 2015
TLDR: This paper introduces a unifying scheme and uses some abstract analysis of the algorithm to prove convergence rates of the proximal point algorithm, forward-backward splitting, Peaceman–Rachford splitting, and forward-backward-forward splitting applied to the model problem.
Projected Reflected Gradient Methods for Monotone Variational Inequalities
TLDR: The projected reflected gradient algorithm with a constant stepsize is proposed, which requires only one projection onto the feasible set and only one value of the mapping per iteration, and has an R-linear rate of convergence under the strong monotonicity assumption.
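
For reference, the per-iteration update of the projected reflected gradient method has the standard form (with F the monotone mapping, C the feasible set, and \gamma a constant stepsize)

  x^{k+1} = P_C\big(x^k - \gamma F(2x^k - x^{k-1})\big),

so one iteration costs a single projection P_C and a single evaluation of F, compared with two of each for the classical extragradient method.
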
An accelerated non-Euclidean hybrid proximal extragradient-type algorithm for convex–concave saddle-point problems
TLDR: An accelerated HPE-type method based on general Bregman distances is proposed for solving convex–concave saddle-point (SP) problems; it is superior to Nesterov's smoothing scheme and works for any constant choice of proximal stepsize.