An accelerated non-Euclidean hybrid proximal extragradient-type algorithm for convex–concave saddle-point problems

@article{Kolossoski2017AnAN,
  title={An accelerated non-Euclidean hybrid proximal extragradient-type algorithm for convex--concave saddle-point problems},
  author={O. Kolossoski and Renato D. C. Monteiro},
  journal={Optimization Methods and Software},
  year={2017},
  volume={32},
  pages={1244--1272}
}
This paper describes an accelerated HPE-type method based on general Bregman distances for solving convex–concave saddle-point (SP) problems. The algorithm is a special instance of a non-Euclidean hybrid proximal extragradient framework introduced by Solodov and Svaiter [An inexact hybrid generalized proximal point algorithm and some new results on the theory of Bregman functions, Math. Oper. Res. 25(2) (2000), pp. 214–230], where the prox sub-inclusions are solved using an accelerated gradient…
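To make the ingredients of the abstract concrete, the sketch below shows a plain Bregman extragradient (mirror-prox) pass for a bilinear saddle-point problem over probability simplices, using the entropy distance-generating function so each prox step is a multiplicative update. This is an illustrative toy, not the paper's accelerated HPE method, and all names in it are assumptions of this sketch.

```python
import numpy as np

def bregman_step(z, grad, step):
    """Entropy-Bregman prox step on the simplex:
    argmin_u <grad, u> + D_h(u, z)/step with h(u) = sum_i u_i log u_i."""
    w = z * np.exp(-step * grad)
    return w / w.sum()

def extragradient(A, iters=300, step=0.1):
    """Mirror-prox for min_x max_y <y, A x> over probability simplices."""
    n, m = A.shape
    x, y = np.ones(m) / m, np.ones(n) / n
    x_avg, y_avg = np.zeros(m), np.zeros(n)
    for _ in range(iters):
        # Predictor: evaluate the saddle-point operator at the current point.
        x_half = bregman_step(x, A.T @ y, step)    # grad_x <y, A x> = A^T y
        y_half = bregman_step(y, -(A @ x), step)   # ascent in y: negate grad
        # Corrector: re-evaluate the operator at the predicted point.
        x = bregman_step(x, A.T @ y_half, step)
        y = bregman_step(y, -(A @ x_half), step)
        x_avg += x_half                            # ergodic (averaged) iterates
        y_avg += y_half
    return x_avg / iters, y_avg / iters

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
x_bar, y_bar = extragradient(A)
# Duality gap of the averaged iterates: max_y <y, A x_bar> - min_x <y_bar, A x>.
gap = (A @ x_bar).max() - (A.T @ y_bar).min()
```

The predictor/corrector pair is the extragradient structure that the HPE framework relaxes: instead of solving each prox subproblem exactly, HPE-type methods accept an inexact solution within a relative error tolerance, and the paper's contribution is to produce that inexact solution with an accelerated gradient scheme.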
An efficient adaptive accelerated inexact proximal point method for solving linearly constrained nonconvex composite problems
TLDR
This paper proposes an efficient adaptive variant of the quadratic penalty accelerated inexact proximal point (QP-AIPP) method, which generates a sequence of proximal subproblems whose stepsizes are adaptively adjusted according to the responses obtained from calls to the accelerated composite gradient algorithm.
A Doubly Accelerated Inexact Proximal Point Method for Nonconvex Composite Optimization Problems
This paper describes and establishes the iteration-complexity of a doubly accelerated inexact proximal point (D-AIPP) method for solving the nonconvex composite minimization problem whose objective
Complexity of a Quadratic Penalty Accelerated Inexact Proximal Point Method for Solving Linearly Constrained Nonconvex Composite Programs
This paper analyzes the iteration-complexity of a quadratic penalty accelerated inexact proximal point method for solving linearly constrained nonconvex composite programs. More specifically, the
Halpern-Type Accelerated and Splitting Algorithms For Monotone Inclusions
TLDR
A new type of accelerated algorithm for solving some classes of maximally monotone equations as well as monotone inclusions, using a so-called Halpern-type fixed-point iteration to solve convex-concave minimax problems, together with a new accelerated DR scheme from which a new variant of the alternating direction method of multipliers (ADMM) is derived.
Accelerated Stochastic Algorithms for Convex-Concave Saddle-Point Problems
  • Renbo Zhao
  • Computer Science, Mathematics
    Mathematics of Operations Research
  • 2021
TLDR
The first stochastic restart scheme for a class of convex-concave saddle-point problems, based on the primal-dual hybrid gradient framework, achieves the state-of-the-art oracle complexity and may be of independent interest.
Optimal Algorithms for Stochastic Three-Composite Convex-Concave Saddle Point Problems
TLDR
This work designs an algorithm based on the primal-dual hybrid gradient framework that achieves the state-of-the-art oracle complexity, and develops a novel stochastic restart scheme whose oracle complexity is strictly better than any of the existing ones, even in the deterministic case.
Optimal Stochastic Algorithms for Convex-Concave Saddle-Point Problems
TLDR
Stochastic first-order primal-dual algorithms to solve a class of convex-concave saddle-point problems and achieves the state-of-the-art oracle complexity and may be of independent interest.
Improved Pointwise Iteration-Complexity of A Regularized ADMM and of a Regularized Non-Euclidean HPE Framework
This paper describes a regularized variant of the alternating direction method of multipliers (ADMM) for solving linearly constrained convex programs. It is shown that the pointwise
A FISTA-type accelerated gradient algorithm for solving smooth nonconvex composite optimization problems
In this paper, we describe and establish iteration-complexity of two accelerated composite gradient (ACG) variants to solve a smooth nonconvex composite optimization problem whose objective function
An Accelerated Inexact Proximal Point Method for Solving Nonconvex-Concave Min-Max Problems
This paper presents a quadratic-penalty type method for solving linearly-constrained composite nonconvex-concave min-max problems. The method consists of solving a sequence of penalty subproblems

References

SHOWING 1-10 OF 41 REFERENCES
An Accelerated HPE-Type Algorithm for a Class of Composite Convex-Concave Saddle-Point Problems
TLDR
Experimental results show that the new method outperforms Nesterov's smoothing technique, and that a suitable stepsize choice yields a method with the best known (accelerated inner) iteration complexity for the aforementioned class of saddle-point problems.
Accelerating Block-Decomposition First-Order Methods for Solving Composite Saddle-Point and Two-Player Nash Equilibrium Problems
TLDR
This article considers the (two-player) composite Nash equilibrium (CNE) problem with a separable nonsmooth part, which is known to include the composite saddle-point (CSP) problem as a special case and proposes a new instance of the BD-HPE framework that approximately solves them using an accelerated gradient method.
A Hybrid Approximate Extragradient – Proximal Point Algorithm Using the Enlargement of a Maximal Monotone Operator
TLDR
It is demonstrated that the modified forward-backward splitting algorithm of Tseng falls within the presented general framework and allows significant relaxation of tolerance requirements imposed on the solution of proximal point subproblems.
Accelerating block-decomposition first-order methods for solving generalized saddle-point and Nash equilibrium problems
TLDR
Two algorithms from the block-decomposition hybrid proximal-extragradient framework for solving monotone inclusion problems with a two-block structure are considered, one of which substantially outperforms the other both theoretically and computationally on many relevant GSP and GNE instances.
Optimal Primal-Dual Methods for a Class of Saddle Point Problems
TLDR
This work presents a novel accelerated primal-dual (APD) method for solving a class of deterministic and stochastic saddle point problems (SPPs) and demonstrates an optimal rate of convergence not only in terms of its dependence on the number of the iteration, but also on a variety of problem parameters.
On the convergence properties of non-Euclidean extragradient methods for variational inequalities with generalized monotone operators
TLDR
This paper presents non-Euclidean extragradient (N-EG) methods for computing approximate strong solutions of GMVI problems, and demonstrates how their iteration complexities depend on the global Lipschitz or Hölder continuity properties for their operators and the smoothness properties for the distance generating function used in the N-EG algorithms.
Accelerated schemes for a class of variational inequalities
TLDR
The main idea of the proposed algorithm is to incorporate a multi-step acceleration scheme into the stochastic mirror-prox method, which computes weak solutions with the optimal iteration complexity for SVIs.
Iteration-Complexity of Block-Decomposition Algorithms and the Alternating Direction Method of Multipliers
TLDR
A framework of block-decomposition prox-type algorithms for solving the monotone inclusion problem and shows that any method in this framework is also a special instance of the hybrid proximal extragradient (HPE) method introduced by Solodov and Svaiter is shown.
Complexity of Variants of Tseng's Modified F-B Splitting and Korpelevich's Methods for Hemivariational Inequalities with Applications to Saddle-point and Convex Optimization Problems
TLDR
This paper considers both a variant of Tseng's modified forward-backward splitting method and an extension of Korpelevich's method for solving hemivariational inequalities with Lipschitz continuous operators as special cases of the hybrid proximal extragradient method introduced by Solodov and Svaiter.
An Inexact Hybrid Generalized Proximal Point Algorithm and Some New Results on the Theory of Bregman Functions
TLDR
A new Bregman-function-based algorithm which is a modification of the generalized proximal point method for solving the variational inequality problem with a maximal monotone operator and eliminates the assumption of pseudomonotonicity, which was standard in proving convergence for paramonotone operators.