• Corpus ID: 211204754

Halpern Iteration for Near-Optimal and Parameter-Free Monotone Inclusion and Strong Solutions to Variational Inequalities

@article{Diakonikolas2020HalpernIF,
  title={Halpern Iteration for Near-Optimal and Parameter-Free Monotone Inclusion and Strong Solutions to Variational Inequalities},
  author={Jelena Diakonikolas},
  journal={ArXiv},
  year={2020},
  volume={abs/2002.08872}
}
We leverage the connections between nonexpansive maps, monotone Lipschitz operators, and proximal mappings to obtain near-optimal (i.e., optimal up to poly-log factors in terms of iteration complexity) and parameter-free methods for solving monotone inclusion problems. These results immediately translate into near-optimal guarantees for approximating strong solutions to variational inequality problems, approximating convex-concave min-max optimization problems, and minimizing the norm of the… 
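For orientation, the classical Halpern (anchored) iteration for a nonexpansive map T, which the paper builds on, is x_{k+1} = lambda_{k+1} x_0 + (1 - lambda_{k+1}) T(x_k). Below is a minimal Python sketch of this textbook scheme only, assuming a user-supplied nonexpansive operator T and the common anchoring schedule lambda_k = 1/(k+2); the paper's parameter-free variants are not reproduced here.

import numpy as np

def halpern(T, x0, num_iters=1000, lam=lambda k: 1.0 / (k + 2)):
    """Classical Halpern (anchored) iteration for a nonexpansive map T.

    x_{k+1} = lam(k) * x0 + (1 - lam(k)) * T(x_k)

    T, x0, and the anchoring schedule lam are assumptions of this sketch,
    not the paper's exact parameter-free method.
    """
    x = x0.copy()
    for k in range(num_iters):
        lk = lam(k)
        x = lk * x0 + (1.0 - lk) * T(x)
    return x

# Usage: fixed point of an affine contraction (hence nonexpansive) map in 2D.
if __name__ == "__main__":
    A = 0.5 * np.array([[0.0, -1.0], [1.0, 0.0]])   # spectral norm 0.5 < 1
    b = np.array([1.0, 2.0])
    T = lambda x: A @ x + b
    x_star = halpern(T, x0=np.zeros(2), num_iters=500)
    print(x_star, np.linalg.norm(T(x_star) - x_star))  # fixed-point residual ~ 0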
Tight Last-Iterate Convergence of the Extragradient Method for Constrained Monotone Variational Inequalities
TLDR
A new approach is developed that combines the power of sum-of-squares programming with the low dimensionality of the extragradient update rule, establishing the monotonicity of a new performance measure, the tangent residual.
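For reference, the extragradient update analyzed in that entry is the standard two-step scheme x_{k+1/2} = Pi_X(x_k - eta F(x_k)), x_{k+1} = Pi_X(x_k - eta F(x_{k+1/2})). A minimal Python sketch follows, assuming a Euclidean ball constraint, a monotone Lipschitz operator F, and a fixed step size; these choices are illustrative and not taken from the paper.

import numpy as np

def project_ball(x, radius=1.0):
    """Euclidean projection onto {x : ||x|| <= radius} (illustrative constraint set)."""
    n = np.linalg.norm(x)
    return x if n <= radius else (radius / n) * x

def extragradient(F, x0, eta, num_iters=1000, proj=project_ball):
    """Extragradient step for a monotone Lipschitz operator F:
        x_half = proj(x - eta * F(x))
        x_new  = proj(x - eta * F(x_half))
    """
    x = x0.copy()
    for _ in range(num_iters):
        x_half = proj(x - eta * F(x))
        x = proj(x - eta * F(x_half))
    return x

# Usage: bilinear saddle point min_u max_v u*v, i.e. F(u, v) = (v, -u).
if __name__ == "__main__":
    F = lambda z: np.array([z[1], -z[0]])
    z = extragradient(F, x0=np.array([0.7, -0.4]), eta=0.2)
    print(z)  # converges toward the solution (0, 0)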
Halpern-Type Accelerated and Splitting Algorithms For Monotone Inclusions
TLDR
A new type of accelerated algorithm is developed for classes of maximally monotone equations and monotone inclusions, built on a Halpern-type fixed-point iteration; the scheme is applied to convex-concave minimax problems, and a new accelerated Douglas-Rachford (DR) splitting scheme is used to derive a new variant of the alternating direction method of multipliers (ADMM).
Optimistic Dual Extrapolation for Coherent Non-monotone Variational Inequalities
TLDR
OptDE is proposed, a method that performs only one gradient evaluation per iteration, is provably convergent to a strong solution under different coherent non-monotone assumptions, and provides the near-optimal $O\big(\tfrac{1}{\epsilon}\log\tfrac{1}{\epsilon}\big)$ convergence guarantee in terms of the restricted strong merit function for monotone variational inequalities.
Exact Optimal Accelerated Complexity for Fixed-Point Iterations
Despite the broad use of fixed-point iterations throughout applied mathematics, the optimal convergence rate of general fixed-point problems with nonexpansive nonlinear operators has not been established.
Convergence of Halpern’s Iteration Method with Applications in Optimization
Halpern’s iteration method, discovered by Halpern in 1967, is an iterative algorithm for finding fixed points of a nonexpansive mapping in Hilbert and Banach spaces. Since many optimization…
The Complexity of Nonconvex-Strongly-Concave Minimax Optimization
TLDR
The complexity of NC-SC smooth minimax problems is studied in both general and averaged smooth finite-sum settings, and a generic acceleration scheme is introduced that deploys existing gradient-based methods to solve a sequence of crafted strongly-convex-strongly-concave subproblems.
Potential Function-based Framework for Making the Gradients Small in Convex and Min-Max Optimization
TLDR
A novel potential function-based framework is developed to study the convergence of standard methods for making the gradients small in smooth convex optimization and convex-concave min-max optimization, and a new lower bound is provided for minimizing the norm of cocoercive operators, which allows one to argue about the optimality of methods in the min-max setup.
Convergence of Adaptive Methods for Equilibrium Problems in Hadamard Spaces
TLDR
New adaptive algorithms for pseudomonotone bifunctions of Lipschitz type are proposed and theorems on the weak convergence of sequences generated by the algorithms are proved.
On the Initialization for Convex-Concave Min-max Problems
TLDR
It is shown that strict-convexity-strict-concavity is sufficient for the convergence rate to depend on the initialization, and that so-called “parameter-free” algorithms make it possible to achieve improved, initialization-dependent asymptotic rates without any learning rate to tune.
The Connection Between Nesterov's Accelerated Methods and Halpern Fixed-Point Iterations
We derive a direct connection between Nesterov’s accelerated first-order algorithm and the Halpern fixed-point iteration scheme for approximating a solution of a co-coercive equation. We show that…

References

SHOWING 1-10 OF 43 REFERENCES
A Universal Algorithm for Variational Inequalities Adaptive to Smoothness and Noise
TLDR
This work considers variational inequalities coming from monotone operators, a setting that includes convex minimization and convex-concave saddle-point problems, and presents a universal algorithm based on the Mirror-Prox algorithm that achieves the optimal rates for the smooth/non-smooth and noisy/noiseless settings.
On the convergence properties of non-Euclidean extragradient methods for variational inequalities with generalized monotone operators
TLDR
This paper presents non-Euclidean extragradient (N-EG) methods for computing approximate strong solutions of GMVI (generalized monotone variational inequality) problems, and demonstrates how their iteration complexities depend on the global Lipschitz or Hölder continuity properties of the operators and on the smoothness properties of the distance-generating function used in the N-EG algorithms.
Solving Weakly-Convex-Weakly-Concave Saddle-Point Problems as Successive Strongly Monotone Variational Inequalities
TLDR
This work proposes an algorithmic framework motivated by the inexact proximal point method, which solves the weakly monotone variational inequality corresponding to the original min-max problem by approximately solving a sequence of strongly monotone variational inequalities constructed by adding a strongly monotone mapping to the original gradient mapping.
Solving Weakly-Convex-Weakly-Concave Saddle-Point Problems as Weakly-Monotone Variational Inequality
TLDR
This paper proposes an algorithmic framework motivated by the proximal point method, which solves a sequence of strongly monotone variational inequalities constructed by adding a strongly monotone mapping to the original mapping with a periodically updated proximal center, and is the first work to establish non-asymptotic convergence to a stationary point of a nonconvex-nonconcave min-max problem.
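The mechanism described in the two entries above admits a short sketch: regularize the original operator F to F_k(x) = F(x) + rho (x - c_k) around a proximal center c_k, approximately solve the resulting strongly monotone variational inequality, then move the center. The Python sketch below uses an unconstrained extragradient inner solver and illustrative parameters; it is not the authors' exact algorithm.

import numpy as np

def inexact_prox_point(F, x0, rho=1.0, eta=0.2, outer=50, inner=100):
    """Sketch of an inexact proximal point framework for a (weakly) monotone F:
    repeatedly solve the strongly monotone problem with operator
        F_k(x) = F(x) + rho * (x - center)
    approximately (here, by unconstrained extragradient steps), then update the center.
    rho, eta, and the inner solver are illustrative assumptions.
    """
    center = x0.copy()
    for _ in range(outer):
        Fk = lambda x, c=center: F(x) + rho * (x - c)   # strongly monotone surrogate
        x = center.copy()
        for _ in range(inner):                           # inexact inner solve
            x_half = x - eta * Fk(x)
            x = x - eta * Fk(x_half)
        center = x                                       # periodically updated proximal center
    return center

# Usage: F(u, v) = (v, -u) from the bilinear saddle point min_u max_v u*v.
if __name__ == "__main__":
    F = lambda z: np.array([z[1], -z[0]])
    print(inexact_prox_point(F, np.array([1.0, 1.0])))   # approaches (0, 0)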
Dual extrapolation and its applications to solving variational inequalities and related problems
  • Y. Nesterov, Math. Program., 2007
TLDR
This paper shows that with an appropriate step-size strategy, the proposed method is optimal both for Lipschitz-continuous operators and for operators with bounded variation.
Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
We propose a prox-type method with efficiency estimate $O(\epsilon^{-1})$ for approximating saddle points of convex-concave C$^{1,1}$ functions and solutions of variational inequalities with monotone operators.
Stochastic (Approximate) Proximal Point Methods: Convergence, Optimality, and Adaptivity
We develop model-based methods for solving stochastic convex optimization problems, introducing the approximate-proximal point, or aProx, family, which includes stochastic subgradient, proximal point, and bundle methods.
Lower complexity bounds of first-order methods for convex-concave bilinear saddle-point problems
TLDR
It is proved that for strongly convex problems, $O(1/t^2)$ is the best possible convergence rate, while it is known that gradient methods can have linear convergence on unconstrained problems.
Stochastic model-based minimization of weakly convex functions
TLDR
This work shows that under weak-convexity and Lipschitz conditions, the algorithm drives the expected norm of the gradient of the Moreau envelope to zero at the rate of $O(k^{-1/4})$.
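For context, the stationarity measure referenced in this entry is the norm of the gradient of the Moreau envelope; the following is a brief recap of the standard definitions (the notation and parameter $\lambda$ are chosen here for illustration, not quoted from the entry):

$$\varphi_{\lambda}(x) = \min_{y}\Big\{\varphi(y) + \tfrac{1}{2\lambda}\,\lVert y - x\rVert^{2}\Big\}, \qquad \operatorname{prox}_{\lambda\varphi}(x) = \operatorname*{arg\,min}_{y}\Big\{\varphi(y) + \tfrac{1}{2\lambda}\,\lVert y - x\rVert^{2}\Big\},$$
$$\nabla\varphi_{\lambda}(x) = \tfrac{1}{\lambda}\big(x - \operatorname{prox}_{\lambda\varphi}(x)\big),$$

so driving $\lVert\nabla\varphi_{\lambda}(x)\rVert$ to zero at rate $O(k^{-1/4})$ is the guarantee quoted above (well defined for $\rho$-weakly convex $\varphi$ and sufficiently small $\lambda$, e.g. $\lambda < 1/\rho$).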
Accelerated gradient methods for nonconvex nonlinear and stochastic programming
TLDR
The AG method is generalized to solve nonconvex and possibly stochastic optimization problems, and it is demonstrated that by properly specifying the stepsize policy, the AG method exhibits the best known rate of convergence for solving general nonconvex smooth optimization problems using first-order information, similarly to the gradient descent method.