Corpus ID: 238531641

Nonconvex-Nonconcave Min-Max Optimization with a Small Maximization Domain

Dmitrii Ostrovskii, Babak Barazandeh, Meisam Razaviyayn
We study the problem of finding approximate first-order stationary points in optimization problems of the form min_{x∈X} max_{y∈Y} f(x, y), where the sets X, Y are convex and Y is compact. The objective function f is smooth, but assumed neither convex in x nor concave in y. Our approach relies upon replacing the function f(x, ⋅) with its k-th order Taylor approximation (in y) and finding a near-stationary point in the resulting surrogate problem. To guarantee its success, we establish the following…
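The surrogate idea above can be made concrete for k = 1: the first-order Taylor model of f(x, ⋅) around a point y₀, maximized over a small ball, has a closed form. The sketch below uses a toy objective of my own choosing (it is not from the paper) purely to illustrate the model error on a small maximization domain.

```python
import numpy as np

# Toy nonconvex-nonconcave objective (an illustrative assumption, not the
# paper's example): f(x, y) = sin(x) * y - 0.5 * x * y**2
def f(x, y):
    return np.sin(x) * y - 0.5 * x * y**2

def grad_y(x, y):
    return np.sin(x) - x * y

def surrogate_max(x, y0, r):
    """Maximum of the k = 1 Taylor model of f(x, .) around y0 over |y - y0| <= r.

    For a first-order model the maximum is attained on the boundary and
    equals f(x, y0) + r * |grad_y f(x, y0)|.
    """
    return f(x, y0) + r * abs(grad_y(x, y0))

# On a small ball the surrogate tracks the true maximum up to O(r^2) error.
x, y0, r = 1.0, 0.2, 0.05
ys = y0 + r * np.linspace(-1.0, 1.0, 201)
true_max = max(f(x, y) for y in ys)
model_max = surrogate_max(x, y0, r)
```

Here f(x, ⋅) happens to be concave in y for x > 0, so the linear model upper-bounds the true maximum, and the gap shrinks quadratically with the radius r.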

Faster Single-loop Algorithms for Minimax Optimization without Strong Concavity

New convergence results are established for two alternative single-loop algorithms, alternating GDA and smoothed GDA, under the mild assumption that the objective satisfies the Polyak-Łojasiewicz (PL) condition with respect to one variable.
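Alternating GDA, one of the two single-loop schemes named above, differs from simultaneous GDA only in that the ascent step uses the freshly updated x. A minimal sketch, assuming a toy objective f(x, y) = x·y − y²/2 (strongly concave, hence PL, in y) and an illustrative step size:

```python
# Alternating gradient descent-ascent on f(x, y) = x*y - y**2/2.
# grad_x f = y, grad_y f = x - y; the unique stationary point is (0, 0).
def alternating_gda(x0, y0, eta=0.1, iters=500):
    x, y = x0, y0
    for _ in range(iters):
        # Descent step in x using the current y ...
        x = x - eta * y
        # ... then ascent step in y using the *updated* x (the alternating part).
        y = y + eta * (x - y)
    return x, y

x, y = alternating_gda(1.0, 1.0)
```

On this toy problem the iterates contract toward the stationary point (0, 0); the PL condition in y is what convergence analyses of this kind lean on when strong concavity is absent.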

What is a Good Metric to Study Generalization of Minimax Learners?

This paper shows that the primal risk, a universal metric for studying generalization in minimax problems, fails in simple examples, and proposes a new metric, the primal gap, defined as the difference between the primal risk and its minimum over all models, to circumvent these issues.

An Approach for Non-Convex Uniformly Concave Structured Saddle Point Problem

Moscow Institute of Physics and Technology, Russia, 141701, Moscow Region, Dolgoprudny, Institutskiy Pereulok 9; National Research University Higher School of Economics, Russia, 101000

Fairness-aware Regression Robust to Adversarial Attacks

Numerical results illustrate that the proposed adversarially robust fair models have better performance on poisoned datasets than other fair machine learning models in both prediction accuracy and group-based fairness measure.

Efficient Methods for Structured Nonconvex-Nonconcave Min-Max Optimization

A new class of structured nonconvex-nonconcave min-max optimization problems is introduced, along with a generalization of the extragradient algorithm that provably converges to a stationary point; its iteration complexity and sample complexity bounds either match or improve the best known bounds.
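The paper's method generalizes the classical extragradient scheme: take a gradient step to an extrapolated point, then update using the gradient evaluated there. A hedged sketch on the standard bilinear toy problem f(x, y) = x·y (my choice of example, where plain simultaneous GDA famously diverges):

```python
# Extragradient on f(x, y) = x*y: grad_x f = y, grad_y f = x.
def extragradient(x0, y0, eta=0.1, iters=2000):
    x, y = x0, y0
    for _ in range(iters):
        # Extrapolation (half) step from the current point.
        xh = x - eta * y
        yh = y + eta * x
        # Update step using the gradients at the extrapolated point.
        x = x - eta * yh
        y = y + eta * xh
    return x, y

x, y = extragradient(1.0, 1.0)
```

The extrapolated gradient introduces just enough rotation damping that the iterates spiral into the saddle point (0, 0) instead of spiraling outward as GDA would.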

The complexity of constrained min-max optimization

This result is the first to show an exponential separation between these two fundamental optimization problems in the oracle model, and comes in sharp contrast to minimization problems, where finding approximate local minima in the same setting can be done with Projected Gradient Descent using O(L/ε) queries.

A Convergent and Dimension-Independent First-Order Algorithm for Min-Max Optimization

Motivated by the recent work of [33], we propose a variant of the min-max optimization framework where the max-player is constrained to update the maximization variable in a greedy manner until it…

Greedy adversarial equilibrium: an efficient alternative to nonconvex-nonconcave min-max optimization

An algorithm is introduced that converges from any starting point to an ε-greedy adversarial equilibrium in a number of evaluations of the function f, the max-player's gradient ∇_y f(x, y), and its Hessian that is polynomial in the dimension d and in 1/ε.

On well-structured convex–concave saddle point problems and variational inequalities with monotone operators

For those acquainted with CVX (aka disciplined convex programming) of M. Grant and S. Boyd, the motivation of this work is the desire to extend the scope of CVX beyond convex minimization -- to…

Semi-Proximal Mirror-Prox for Nonsmooth Composite Minimization

The theoretical convergence rate of Semi-Proximal Mirror-Prox is established, exhibiting the optimal complexity bound of O(1/ε²) for the number of calls to the linear minimization oracle.

Weakly-convex–concave min–max optimization: provable algorithms and applications in machine learning

This paper studies a family of non-convex min–max problems whose objective function is weakly convex in the minimization variables and concave in the maximization variable.

Solving Non-Convex Non-Differentiable Min-Max Games Using Proximal Gradient Method

It is shown that a simple multi-step proximal gradient descent-ascent algorithm converges to an ε-first-order Nash equilibrium of the min-max game with a number of gradient evaluations polynomial in 1/ε.

Solving a Class of Non-Convex Min-Max Games Using Iterative First Order Methods

This paper proposes a multi-step gradient descent-ascent algorithm that finds an ε-first-order stationary point of the game in Õ(ε^(-3.5)) iterations, which is the best known rate in the literature.
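The multi-step structure in both of the preceding papers is the same at a high level: run several ascent steps in y to approximately solve the inner maximization, then take a single descent step in x. A minimal sketch on a toy objective of my own (f(x, y) = x² + 2xy − y², concave in y; step sizes and loop counts are illustrative, not the papers' tuned values):

```python
# Multi-step gradient descent-ascent on f(x, y) = x**2 + 2*x*y - y**2.
# grad_x f = 2x + 2y, grad_y f = 2x - 2y; inner max is attained at y = x,
# and the resulting outer problem g(x) = 2x**2 is minimized at x = 0.
def multistep_gda(x0, y0, eta_x=0.05, eta_y=0.2, inner=10, iters=200):
    x, y = x0, y0
    for _ in range(iters):
        # Several ascent steps approximately solve max_y f(x, y).
        for _ in range(inner):
            y = y + eta_y * (2 * x - 2 * y)
        # One descent step in x against the (approximate) inner maximizer.
        x = x - eta_x * (2 * x + 2 * y)
    return x, y

x, y = multistep_gda(1.0, 1.0)
```

The inner loop drives y close to the best response y = x before each outer step, which is exactly what lets the analysis treat the descent step as acting on the max-function g(x) = max_y f(x, y).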

Solving variational inequalities with monotone operators on domains given by Linear Minimization Oracles

The techniques discussed can be viewed as a substantial extension of the method of nonsmooth convex minimization over an LMO-represented domain proposed in Cox et al. (Math Program Ser B 148(1–2):143–180, 2014).
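A linear minimization oracle, the domain representation used above, answers argmin over the domain of a linear function; for the probability simplex it simply returns the best vertex. The sketch below pairs such an LMO with one standard conditional-gradient (Frank-Wolfe) loop on an illustrative quadratic objective (the objective and step schedule are my assumptions, not the paper's method):

```python
import numpy as np

def lmo_simplex(g):
    """LMO over the probability simplex: argmin_{u in simplex} <g, u> is a vertex."""
    u = np.zeros_like(g, dtype=float)
    u[np.argmin(g)] = 1.0
    return u

def frank_wolfe(grad, x0, iters=200):
    """Conditional-gradient minimization over the simplex using only the LMO."""
    x = np.asarray(x0, dtype=float)
    for t in range(iters):
        s = lmo_simplex(grad(x))        # direction from the oracle
        gamma = 2.0 / (t + 2.0)         # standard step-size schedule
        x = (1 - gamma) * x + gamma * s # convex combination stays in the simplex
    return x

# Minimize f(x) = ||x - c||^2 / 2 over the simplex; c lies inside the simplex,
# so the minimizer is c itself.
c = np.array([0.2, 0.5, 0.3])
x = frank_wolfe(lambda x: x - c, np.array([1.0, 0.0, 0.0]))
```

Every iterate is a convex combination of simplex vertices, so feasibility is automatic; that projection-free property is what makes LMO-based methods attractive on domains where projection is expensive.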