Gradient Descent Ascent in Min-Max Stackelberg Games

@article{Goktas2022GradientDA,
  title={Gradient Descent Ascent in Min-Max Stackelberg Games},
  author={Denizalp Goktas and Amy Greenwald},
  journal={ArXiv},
  year={2022},
  volume={abs/2208.09690}
}
Min-max optimization problems (i.e., min-max games) have attracted a great deal of attention recently as their applicability to a wide range of machine learning problems has become evident. In this paper, we study min-max games with dependent strategy sets, where the strategy of the first player constrains the behavior of the second. Such games are best understood as sequential, i.e., Stackelberg, games, for which the relevant solution concept is Stackelberg equilibrium, a generalization of…
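
For concreteness, the problem class described in the abstract can be written as a constrained min-max (Stackelberg) program; the rendering below is a standard one, with a generic coupling constraint g:

    \min_{x \in X} \max_{y \in Y : g(x, y) \geq 0} f(x, y)

The leader's choice x shapes the follower's feasible set through g, which is what makes the game sequential rather than simultaneous.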

References

Showing 1-10 of 68 references

Convex-Concave Min-Max Stackelberg Games

This work introduces two methods that solve a large class of convex-concave min-max Stackelberg games, shows that the methods converge in polynomial time, and demonstrates the efficacy and efficiency of the algorithms in practice by computing competitive equilibria in Fisher markets with varying utility structures.

Solving a Class of Non-Convex Min-Max Games Using Iterative First Order Methods

This paper proposes a multi-step gradient descent-ascent algorithm that finds an \varepsilon-first-order stationary point of the game in \widetilde{O}(\varepsilon^{-3.5}) iterations, which is the best known rate in the literature.
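
To make the multi-step idea concrete, here is a minimal NumPy sketch: several ascent steps on the inner variable per descent step on the outer one. The toy objective, step sizes, and iteration counts are illustrative assumptions, not the paper's exact algorithm:

import numpy as np

def multi_step_gda(grad_x, grad_y, x, y, eta_x=0.01, eta_y=0.1,
                   ascent_steps=10, iters=1000):
    """Per descent step on x, run several ascent steps on y."""
    for _ in range(iters):
        for _ in range(ascent_steps):       # approximately solve max over y
            y = y + eta_y * grad_y(x, y)
        x = x - eta_x * grad_x(x, y)        # one descent step on x
    return x, y

# Toy saddle problem f(x, y) = x*y + 0.5*x**2 - 0.5*y**2, saddle at (0, 0).
x, y = multi_step_gda(lambda x, y: y + x, lambda x, y: x - y, 1.0, 1.0)
print(x, y)  # both approach 0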

The Limit Points of (Optimistic) Gradient Descent in Min-Max Optimization

This work characterizes the limit points of two basic first-order methods, namely Gradient Descent/Ascent (GDA) and Optimistic Gradient Descent Ascent (OGDA), and shows that both dynamics avoid unstable critical points for almost all initializations.
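
The two dynamics being compared are easy to state. Below is a sketch using the standard update rules (simultaneous GDA, and OGDA with the usual past-gradient correction) on the bilinear game f(x, y) = xy, where GDA is known to spiral away from the saddle point at the origin while OGDA converges to it; the step size is an illustrative choice:

import numpy as np

def gda_step(x, y, gx, gy, eta=0.1):
    # simultaneous gradient descent on x, ascent on y
    return x - eta * gx(x, y), y + eta * gy(x, y)

def ogda_step(x, y, xp, yp, gx, gy, eta=0.1):
    # optimistic GDA: double the current gradient, subtract the previous one
    xn = x - 2 * eta * gx(x, y) + eta * gx(xp, yp)
    yn = y + 2 * eta * gy(x, y) - eta * gy(xp, yp)
    return xn, yn

gx = lambda x, y: y  # df/dx for f(x, y) = x * y
gy = lambda x, y: x  # df/dy
x, y = xp, yp = 1.0, 1.0
for _ in range(200):
    x, y, xp, yp = *ogda_step(x, y, xp, yp, gx, gy), x, y
print(x, y)  # approaches (0, 0); plain GDA instead spirals outward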

The complexity of constrained min-max optimization

This result is the first to show an exponential separation between these two fundamental optimization problems in the oracle model, and comes in sharp contrast to minimization problems, where finding approximate local minima in the same setting can be done with Projected Gradient Descent using O(L/\varepsilon) many queries.
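
For contrast with the hard constrained min-max case, the minimization baseline mentioned here, Projected Gradient Descent, is just a gradient step followed by a projection onto the feasible set. A generic sketch, with an illustrative projection onto the unit ball:

import numpy as np

def projected_gradient_descent(grad, project, x0, eta=0.1, iters=500):
    """Gradient step, then project back onto the feasible set."""
    x = x0
    for _ in range(iters):
        x = project(x - eta * grad(x))
    return x

# Example: minimize ||x - c||^2 over the unit Euclidean ball.
c = np.array([2.0, 0.0])
proj_ball = lambda v: v / max(1.0, np.linalg.norm(v))
x_star = projected_gradient_descent(lambda x: 2 * (x - c), proj_ball,
                                    np.zeros(2))
print(x_star)  # ~ [1, 0], the projection of c onto the ball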

Solving Large Extensive-Form Games with Strategy Constraints

This work introduces a generalized form of Counterfactual Regret Minimization that provably finds optimal strategies under any feasible set of convex constraints, and demonstrates the algorithm's effectiveness for finding strategies that mitigate risk in security games and for opponent modeling in poker games when given only partial observations of private information.

What is Local Optimality in Nonconvex-Nonconcave Minimax Optimization?

A proper mathematical definition of local optimality for this sequential setting, called local minimax, is proposed, and its properties and existence results are presented.
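
Informally (a loose rendering, not the paper's verbatim definition): a point (x^*, y^*) is local minimax if y^* is a local maximum of f(x^*, \cdot), and there is a function h(\delta) \to 0 as \delta \to 0 such that, for all sufficiently small \delta, x^* minimizes \max_{\|y - y^*\| \le h(\delta)} f(x, y) over \|x - x^*\| \le \delta; the shrinking radius h(\delta) captures the follower's local best response.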

Non-Convex Min-Max Optimization: Provable Algorithms and Applications in Machine Learning

This paper proposes a proximally guided stochastic subgradient method and a proximally guided stochastic variance-reduced method for expected and finite-sum saddle-point problems, respectively, and establishes the computational complexity of both methods for finding a nearly stationary point of the corresponding minimization problem.

Efficient Search of First-Order Nash Equilibria in Nonconvex-Concave Smooth Min-Max Problems

The proposed algorithm outperforms or matches the performance of several recently proposed schemes while, arguably, being more transparent, easier to implement, and converging with respect to a stronger criterion.

First-order methods for large-scale market equilibrium computation

It is shown that proximal gradient (a generalization of projected gradient) with a practical version of linesearch achieves linear convergence under the Proximal-PL condition, and numerical experiments show that proportional response is highly efficient for computing an approximate solution, while projected gradient with linesearch can be much faster when higher accuracy is required.
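
A minimal sketch of the proximal gradient with backtracking linesearch template (generic; the market-equilibrium objectives in the paper would supply their own smooth part, gradient, and prox, so the lasso-style example below is just an illustrative stand-in):

import numpy as np

def prox_grad_linesearch(f, grad, prox, x0, L0=1.0, iters=100):
    """f, grad: smooth part of the objective; prox(v, t): proximal
    operator of the nonsmooth part with step t. Backtracks on L until
    the standard sufficient-decrease test holds."""
    x, L = x0, L0
    for _ in range(iters):
        g = grad(x)
        while True:
            z = prox(x - g / L, 1.0 / L)
            d = z - x
            if f(z) <= f(x) + g @ d + 0.5 * L * (d @ d):  # decrease test
                break
            L *= 2.0
        x, L = z, max(L0, L / 2.0)  # accept, then let the step grow again
    return x

# Illustrative composite problem: 0.5*||Ax - b||^2 + lam*||x||_1
rng = np.random.default_rng(0)
A, b, lam = rng.normal(size=(20, 5)), rng.normal(size=20), 0.1
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
x = prox_grad_linesearch(lambda x: 0.5 * np.sum((A @ x - b) ** 2),
                         lambda x: A.T @ (A @ x - b),
                         lambda v, t: soft(v, lam * t),
                         np.zeros(5))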

A new computational method for Stackelberg and min-max problems by use of a penalty method

It is proved that a sequence of approximate solutions converges to the correct Stackelberg solution, or the min-max solution; the approximations form a series of nonlinear programming problems obtained from the original two-level problem by applying a penalty method to the constrained parametric problem in the lower level.
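
One common quadratic-penalty rendering of this construction (illustrative; the paper's exact penalty may differ): the constrained lower-level problem

    \max_{y : g(x, y) \geq 0} f(x, y)

is replaced by the unconstrained

    \max_{y} \; f(x, y) - \rho \sum_i \big( \min\{0, g_i(x, y)\} \big)^2,

and the penalty weight \rho is driven to infinity along the sequence, so each approximation is an ordinary nonlinear program.
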
...