• Corpus ID: 55613424

A Primal-Dual Algorithm for General Convex-Concave Saddle Point Problems

@article{Hamedani2018APA,
  title={A Primal-Dual Algorithm for General Convex-Concave Saddle Point Problems},
  author={Erfan Yazdandoost Hamedani and Necdet Serhat Aybat},
  journal={arXiv: Optimization and Control},
  year={2018}
}
In this paper we propose a primal-dual algorithm with a momentum term that can be viewed as a generalization of the method proposed by Chambolle and Pock in 2016 to solve saddle point problems defined by a convex-concave function $\mathcal{L}(x,y)=f(x)+\Phi(x,y)-h(y)$ with a general coupling term $\Phi(x,y)$ that is not assumed to be bilinear. Given a saddle point $(x^*,y^*)$, assuming $\nabla_y\Phi(\cdot,\cdot)$ is Lipschitz and $\nabla_x\Phi(\cdot,y)$ is Lipschitz in $x$ for any fixed $y$, we… 
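
The snippet above is cut off, but the core iteration it describes can be illustrated concretely. Below is a minimal Python sketch, under simplifying assumptions, of a momentum primal-dual step for $\min_x \max_y f(x)+\Phi(x,y)-h(y)$: the function handles, the constant step sizes `tau`, `sigma`, and the momentum parameter `theta` are placeholders, and admissible values depend on the Lipschitz constants of $\nabla\Phi$ via the paper's step-size conditions.

```python
import numpy as np

def apd_momentum(x0, y0, grad_x_phi, grad_y_phi, prox_f, prox_h,
                 tau, sigma, theta, n_iters=1000):
    """Sketch of a momentum primal-dual scheme for
    min_x max_y f(x) + Phi(x, y) - h(y).

    prox_f(v, t) ~= argmin_u f(u) + ||u - v||^2 / (2 t); prox_h analogous.
    tau, sigma, theta are placeholder constants; valid choices are tied
    to the Lipschitz constants of grad Phi (see the paper's conditions).
    """
    x, y = np.asarray(x0, float).copy(), np.asarray(y0, float).copy()
    x_prev, y_prev = x.copy(), y.copy()
    for _ in range(n_iters):
        # momentum-corrected dual direction: (1+theta) g_k - theta g_{k-1}
        s = (1 + theta) * grad_y_phi(x, y) - theta * grad_y_phi(x_prev, y_prev)
        y_new = prox_h(y + sigma * s, sigma)                  # dual ascent step
        x_new = prox_f(x - tau * grad_x_phi(x, y_new), tau)   # primal descent step
        x_prev, y_prev = x, y
        x, y = x_new, y_new
    return x, y
```

When $\Phi$ is bilinear, $\Phi(x,y)=\langle Kx,y\rangle$, the momentum term reduces to $K\big((1+\theta)x_k-\theta x_{k-1}\big)$, i.e. the familiar Chambolle-Pock extrapolation.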

SAPD+: An Accelerated Stochastic Method for Nonconvex-Concave Minimax Problems

The efficiency of SAPD+ is demonstrated on a distributionally robust learning problem with a weakly convex cost and also on a multi-class classification problem in deep learning.

Efficient Algorithms for Smooth Minimax Optimization

A new algorithm combining Mirror-Prox and Nesterov's AGD is proposed, and it is shown to converge to the global optimum at a rate of $\tilde{O}(1/k^2)$, improving over the current state-of-the-art rate of $O(1/k)$.

A stochastic variance-reduced accelerated primal-dual method for finite-sum saddle-point problems

A variance-reduced primal-dual algorithm with Bregman distance functions is proposed for solving convex-concave saddle-point problems with finite-sum structure and a nonbilinear coupling function, and its convergence is established together with oracle-complexity guarantees.

An accelerated minimax algorithm for convex-concave saddle point problems with nonsmooth coupling function

A novel algorithm is proposed under the name of OGAProx, consisting of an optimistic gradient ascent step in the smooth variable coupled with a proximal step of the regulariser, alternated with a proximal step in the nonsmooth component of the coupling function.
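
For intuition, the "optimistic" gradient step mentioned above can be written generically as follows. This is the standard optimistic update for a monotone operator, not the full OGAProx method (which additionally interleaves the proximal steps described in the summary); the function names are illustrative.

```python
import numpy as np

def ogda(z0, F, step, n_iters=1000):
    """Optimistic gradient update for a monotone operator F, e.g.
    F(z) = (grad_x L(x, y), -grad_y L(x, y)) with z = (x, y) stacked:
        z_{k+1} = z_k - step * (2 F(z_k) - F(z_{k-1})).
    The term 2 F(z_k) - F(z_{k-1}) "optimistically" extrapolates the
    next gradient from the two most recent evaluations."""
    z = np.asarray(z0, float).copy()
    F_prev = F(z)
    for _ in range(n_iters):
        F_curr = F(z)
        z = z - step * (2 * F_curr - F_prev)
        F_prev = F_curr
    return z
```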

Near-Optimal Algorithms for Minimax Optimization

The first algorithm with $\tilde{O}(\sqrt{\kappa_{\mathbf{x}}\kappa_{\mathbf{y}}})$ gradient complexity is presented, matching the lower bound up to logarithmic factors.

Accelerated Primal-Dual Algorithms for a Class of Convex-Concave Saddle-Point Problems with Non-Bilinear Coupling Term

We develop two new primal-dual algorithms to solve a class of convex-concave saddle-point problems involving a non-bilinear coupling function, which covers many existing and brand-new applications as special cases.

A Doubly-Randomized Block-Coordinate Primal-Dual Method for Large-scale Saddle Point Problems

In this paper, we consider a large-scale convex-concave saddle point problem in a finite-sum form that arises in machine learning problems involving empirical risk minimization, e.g., robust…

On Iteration Complexity of a First-Order Primal-Dual Method for Nonlinear Convex Cone Programming

  • Lei Zhao, Daoli Zhu
  • Computer Science, Mathematics
    Journal of the Operations Research Society of China
  • 2021
This paper introduces a flexible first-order primal-dual algorithm, called the variant auxiliary problem principle (VAPP), for solving nonlinear convex cone programming (NCCP) problems when the objective function and constraints are convex but may be nonsmooth.

Optimal Algorithms for Stochastic Three-Composite Convex-Concave Saddle Point Problems

This work designs an algorithm based on the primal-dual hybrid gradient framework that achieves the state-of-the-art oracle complexity, and develops a novel stochastic restart scheme whose oracle complexity is strictly better than any of the existing ones, even in the deterministic case.

Augmented Lagrangian based first-order methods for convex and nonconvex programs: nonergodic convergence and iteration complexity

A nonergodic convergence rate is established for an augmented Lagrangian (AL) based first-order method (FOM) for convex problems with functional constraints, and a novel AL-based FOM is designed for problems with a nonconvex objective and convex constraint functions.
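
As context for this entry, the classical augmented Lagrangian outer loop that such FOMs build on can be sketched as follows for equality constraints $g(x)=0$. The subproblem solver `solve_subproblem` and the penalty `beta` are placeholders; the paper solves the subproblem inexactly with an inner first-order method.

```python
import numpy as np

def augmented_lagrangian(x0, lam0, solve_subproblem, g, beta, n_outer=50):
    """Classical AL outer loop for min f(x) s.t. g(x) = 0.

    solve_subproblem(x_start, lam, beta) ~= argmin_x of the AL function
        f(x) + lam^T g(x) + (beta/2) ||g(x)||^2,
    typically computed inexactly by an inner first-order method.
    """
    x, lam = np.asarray(x0, float).copy(), np.asarray(lam0, float).copy()
    for _ in range(n_outer):
        x = solve_subproblem(x, lam, beta)  # (inexact) primal subproblem
        lam = lam + beta * g(x)             # multiplier (dual) ascent step
    return x, lam
```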

Iteration Complexity of Randomized Primal-Dual Methods for Convex-Concave Saddle Point Problems

A class of randomized primal-dual methods is proposed to contend with large-scale saddle point problems defined by a convex-concave function, and the algorithmic framework is implemented and tested on a kernel matrix learning problem against other state-of-the-art solvers.
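
A rough sketch of the randomization idea, assuming a block-decomposable dual variable: each iteration refreshes one uniformly sampled dual block and then takes a full primal proximal-gradient step. The block oracles and step sizes below are illustrative, not the paper's exact sampling or step-size rules.

```python
import numpy as np

def randomized_dual_block_pd(x0, y_blocks, grad_x, grad_y_block, prox_f,
                             prox_h_block, tau, sigma, n_iters=1000, seed=0):
    """Sketch: per iteration, update one uniformly sampled dual block
    (cheap when y is high-dimensional), then take a full primal step.
    Oracles (assumed): grad_y_block(i, x, y) -> partial gradient for
    dual block i; prox_h_block(i, v, s) -> prox of h restricted to i."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float).copy()
    y = [np.asarray(b, float).copy() for b in y_blocks]
    for _ in range(n_iters):
        i = rng.integers(len(y))  # sample a dual block uniformly at random
        y[i] = prox_h_block(i, y[i] + sigma * grad_y_block(i, x, y), sigma)
        x = prox_f(x - tau * grad_x(x, y), tau)
    return x, y
```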

Linear Convergence of the Primal-Dual Gradient Method for Convex-Concave Saddle Point Problems without Strong Convexity

It is proved that if the coupling matrix $A$ has full column rank, the vanilla primal-dual gradient method achieves linear convergence even if $f$ is not strongly convex, generalizing previous work that either requires $f$ and $g$ to be quadratic or requires proximal mappings for both $f$ and $g$.
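
The setting of this result is easy to state in code: for $\mathcal{L}(x,y)=f(x)+y^\top A x-g(y)$, the vanilla primal-dual gradient method takes simultaneous gradient descent/ascent steps. A minimal sketch follows; the step sizes `tau`, `sigma` are placeholders subject to the paper's conditions.

```python
import numpy as np

def primal_dual_gradient(x0, y0, grad_f, grad_g, A, tau, sigma, n_iters=1000):
    """Vanilla primal-dual gradient method for
    L(x, y) = f(x) + y^T A x - g(y), with simultaneous updates.
    The linear-rate result above concerns this scheme when A has
    full column rank, even without strong convexity of f."""
    x, y = np.asarray(x0, float).copy(), np.asarray(y0, float).copy()
    for _ in range(n_iters):
        gx = grad_f(x) + A.T @ y   # nabla_x L at the current pair
        gy = A @ x - grad_g(y)     # nabla_y L at the current pair
        x, y = x - tau * gx, y + sigma * gy  # descend in x, ascend in y
    return x, y
```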

Mirror Prox algorithm for multi-term composite minimization and semi-separable problems

In the paper, we develop a composite version of the Mirror Prox algorithm for solving convex–concave saddle point problems and monotone variational inequalities of special structure, allowing us to cover…
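
In the Euclidean prox-setup, Mirror Prox reduces to the classical extragradient method, which gives a compact illustration of the base scheme that the composite version builds on; the operator `F` and projection `proj` below are assumptions for the sketch.

```python
import numpy as np

def extragradient(z0, F, proj, eta, n_iters=1000):
    """Extragradient = Mirror Prox with the Euclidean prox-setup, for a
    monotone operator F (e.g. the saddle-point gradient field) and a
    projection `proj` onto the feasible set."""
    z = np.asarray(z0, float).copy()
    for _ in range(n_iters):
        w = proj(z - eta * F(z))   # extrapolation (look-ahead) step
        z = proj(z - eta * F(w))   # update using the gradient at w
    return z
```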

A Primal-Dual Parallel Method with $O(1/\epsilon)$ Convergence for Constrained Composite Convex Programs

A new primal-dual type algorithm with $O(1/\epsilon)$ convergence is proposed for general constrained convex programs; it can be implemented in parallel with low complexity even when the original problem is composite and non-separable.

Randomized First-Order Methods for Saddle Point Optimization

It is shown that when applied to linearly constrained problems, randomized primal-dual methods (RPDs) are equivalent to certain randomized variants of the alternating direction method of multipliers (ADMM), while a direct extension of ADMM does not necessarily converge when the number of blocks exceeds two.

An accelerated non-Euclidean hybrid proximal extragradient-type algorithm for convex–concave saddle-point problems

An accelerated hybrid proximal extragradient (HPE) type method based on general Bregman distances is proposed for solving convex–concave saddle-point (SP) problems; it is superior to Nesterov's smoothing scheme and works for any constant choice of proximal stepsize.

Optimal Primal-Dual Methods for a Class of Saddle Point Problems

This work presents a novel accelerated primal-dual (APD) method for solving a class of deterministic and stochastic saddle point problems (SPPs) and demonstrates an optimal rate of convergence, not only in terms of its dependence on the number of iterations but also on a variety of problem parameters.

An Accelerated HPE-Type Algorithm for a Class of Composite Convex-Concave Saddle-Point Problems

Experimental results show that the new method outperforms Nesterov's smoothing technique, and a suitable choice of the proximal stepsize yields a method with the best known (accelerated inner) iteration complexity for this class of saddle-point problems.

Stochastic Variance Reduction Methods for Saddle-Point Problems

Convex-concave saddle-point problems where the objective functions may be split into many components are considered, and recent stochastic variance reduction methods are extended to provide the first large-scale linearly convergent algorithms.
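
A sketch of the idea, assuming the saddle function splits as $L(z)=\frac{1}{n}\sum_i L_i(z)$ with $z=(x,y)$: SVRG-style variance reduction replaces the full operator with an unbiased estimate anchored at a periodically refreshed snapshot. This simplified version takes plain forward steps; the paper combines the estimator with proximal steps and non-uniform sampling.

```python
import numpy as np

def svrg_saddle(z0, component_op, n, step, n_epochs=30, epoch_len=None, seed=0):
    """SVRG-style estimator for F(z) = (1/n) sum_i F_i(z), where
    F_i(z) = (grad_x L_i, -grad_y L_i) and z = (x, y) stacked.
    component_op(i, z) evaluates F_i(z) (assumed oracle)."""
    rng = np.random.default_rng(seed)
    epoch_len = epoch_len or n
    z = np.asarray(z0, float).copy()
    for _ in range(n_epochs):
        z_ref = z.copy()  # snapshot point
        F_ref = sum(component_op(i, z_ref) for i in range(n)) / n  # full pass
        for _ in range(epoch_len):
            i = rng.integers(n)
            # unbiased, variance-reduced estimate of F(z)
            g = component_op(i, z) - component_op(i, z_ref) + F_ref
            z = z - step * g  # simplified forward step
    return z
```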