Primal-dual extrapolation methods for monotone inclusions under local Lipschitz continuity with applications to variational inequality, conic constrained saddle point, and convex conic optimization problems

@article{Lu2022PrimaldualEM,
  title={Primal-dual extrapolation methods for monotone inclusions under local Lipschitz continuity with applications to variational inequality, conic constrained saddle point, and convex conic optimization problems},
  author={Zhaosong Lu and Sanyou Mei},
  journal={arXiv preprint arXiv:2206.00973},
  year={2022}
}
In this paper we consider a class of structured monotone inclusion (MI) problems that consist of finding a zero in the sum of two monotone operators, one of which is maximal monotone while the other is locally Lipschitz continuous. In particular, we first propose a primal-dual extrapolation (PDE) method for solving a structured strongly MI problem, obtained by modifying the classical forward-backward splitting method with a point and operator extrapolation technique, in which the parameters are…
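To give a feel for the kind of iteration the abstract describes, the following is a minimal sketch of forward-backward splitting with operator extrapolation for finding a zero of A + B, where A is maximal monotone (accessed through its resolvent) and B is Lipschitz continuous. The concrete instance below (B a linear monotone map, A the normal cone of a box, whose resolvent is a projection) and the fixed step size and extrapolation weight are illustrative assumptions, not the paper's actual method or parameter choices.

```python
import numpy as np

def fb_operator_extrapolation(B, resolvent_A, x0, step=0.1, theta=1.0, iters=500):
    """Forward-backward splitting with operator extrapolation (sketch).

    Iterates x_{k+1} = J_{step*A}(x_k - step * v_k), where
    v_k = B(x_k) + theta * (B(x_k) - B(x_{k-1})) extrapolates the
    operator values rather than the points themselves.
    """
    x, Bx_prev = x0.copy(), B(x0)
    for _ in range(iters):
        Bx = B(x)
        v = Bx + theta * (Bx - Bx_prev)   # operator extrapolation step
        x = resolvent_A(x - step * v)     # backward (resolvent) step
        Bx_prev = Bx
    return x

# Illustrative problem: B(x) = M x with M monotone (symmetric part PSD),
# A = normal cone of the box [0,1]^2, whose resolvent is the projection.
M = np.array([[2.0, 1.0], [-1.0, 2.0]])
B = lambda x: M @ x
proj_box = lambda z: np.clip(z, 0.0, 1.0)

x = fb_operator_extrapolation(B, proj_box, np.array([0.9, 0.9]))
# For this instance the unique zero of A + B is x* = (0, 0).
```

Extrapolating the operator values B(x_k) instead of the iterates x_k is what lets a single evaluation of B per iteration suffice, in contrast to extragradient-type methods that evaluate B twice.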

References

Showing 1–10 of 25 references

Simple and optimal methods for stochastic variational inequalities, I: operator extrapolation

The stochastic operator extrapolation (SOE) method achieves, for the first time in the literature, the optimal complexity for solving a fundamental problem: the stochastic smooth and strongly monotone VI.

A Unifying Framework of Accelerated First-Order Approach to Strongly Monotone Variational Inequalities

A unifying framework incorporating several momentum-related search directions for solving strongly monotone variational inequalities; a similar extra-point approach is shown to achieve the optimal iteration complexity bound of $O(\sqrt{\kappa}\ln(1/\epsilon))$ for this class of problems.

New First-Order Algorithms for Stochastic Variational Inequalities

In this paper, we propose two new solution schemes to solve the stochastic strongly monotone variational inequality problems: the stochastic extra-point solution scheme and the stochastic…

Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems

We propose a prox-type method with efficiency estimate $O(\epsilon^{-1})$ for approximating saddle points of convex-concave C$^{1,1}$ functions and solutions of variational inequalities with monotone…

Dual extrapolation and its applications to solving variational inequalities and related problems

  • Y. Nesterov
  • Mathematics, Computer Science
    Math. Program.
  • 2007
This paper shows that, with an appropriate step-size strategy, the method is optimal both for Lipschitz continuous operators and for operators with bounded variations.

A Modified Forward-Backward Splitting Method for Maximal Monotone Mappings

  • P. Tseng
  • Mathematics
    SIAM J. Control. Optim.
  • 2000
A modification of the forward-backward splitting method for finding a zero of the sum of two maximal monotone mappings is proposed, under which the method converges assuming only that the forward mapping is (Lipschitz) continuous on some closed convex subset of its domain.

On Accelerated Methods for Saddle-Point Problems with Composite Structure

This work considers strongly-convex-strongly-concave saddle-point problems with a general non-bilinear objective and different condition numbers with respect to the primal and dual variables, and proposes a variance-reduction algorithm with complexity estimates superior to the existing bounds in the literature.

Complexity of Variants of Tseng's Modified F-B Splitting and Korpelevich's Methods for Hemivariational Inequalities with Applications to Saddle-point and Convex Optimization Problems

This paper considers both a variant of Tseng's modified forward-backward splitting method and an extension of Korpelevich's method for solving hemivariational inequalities with Lipschitz continuous operators as special cases of the hybrid proximal extragradient method introduced by Solodov and Svaiter.

Primal-Dual First-Order Methods for Affinely Constrained Multi-Block Saddle Point Problems

A convenient notion of ε-saddle point is proposed, under which the convergence rates of several proposed algorithms are analyzed, and an in-depth comparison between EGMM (a fully primal-dual method) and ADMM (an approximate dual method) is made over multi-block optimization problems to illustrate the advantage of the EGMM.

A Forward-Backward Splitting Method for Monotone Inclusions Without Cocoercivity

This work proposes a simple modification of the forward-backward splitting method for finding a zero in the sum of two monotone operators that does not require cocoercivity of the single-valued operator.