# Gradient Descent Ascent in Min-Max Stackelberg Games

```bibtex
@article{Goktas2022GradientDA,
  title   = {Gradient Descent Ascent in Min-Max Stackelberg Games},
  author  = {Denizalp Goktas and Amy Greenwald},
  journal = {ArXiv},
  year    = {2022},
  volume  = {abs/2208.09690}
}
```
• Published 20 August 2022
• Computer Science
• ArXiv
Min-max optimization problems (i.e., min-max games) have attracted a great deal of attention recently as their applicability to a wide range of machine learning problems has become evident. In this paper, we study min-max games with dependent strategy sets, where the strategy of the first player constrains the behavior of the second. Such games are best understood as sequential, i.e., Stackelberg, games, for which the relevant solution concept is Stackelberg equilibrium, a generalization of…
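In the simpler setting with independent strategy sets, the basic first-order method for min-max games is simultaneous gradient descent ascent (GDA): the min player descends its gradient while the max player ascends. A minimal sketch on a toy strongly-convex-strongly-concave objective (the objective, step size, and iteration count are illustrative assumptions, not taken from the paper, whose setting further lets the first player's strategy constrain the second player's feasible set):

```python
# Simultaneous gradient descent ascent (GDA) on the toy objective
#   f(x, y) = x^2 - y^2 + x*y,
# which is strongly convex in x and strongly concave in y, with
# unique saddle point (0, 0). eta and steps are illustrative choices.
def gda(x, y, eta=0.1, steps=200):
    for _ in range(steps):
        gx = 2 * x + y        # df/dx: the min player descends
        gy = x - 2 * y        # df/dy: the max player ascends
        x, y = x - eta * gx, y + eta * gy
    return x, y
```

On this objective the simultaneous iterates contract toward the saddle point; on merely bilinear objectives, plain GDA is known to cycle or diverge instead.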

## References

Showing 1–10 of 68 references

• Computer Science
NeurIPS
• 2021
This work introduces two methods that solve a large class of convex-concave min-max Stackelberg games, shows that the methods converge in polynomial time, and demonstrates the efficacy and efficiency of the algorithms in practice by computing competitive equilibria in Fisher markets with varying utility structures.
• Computer Science
NeurIPS
• 2019
This paper proposes a multi-step gradient descent-ascent algorithm that finds an $\varepsilon$-first-order stationary point of the game in $\widetilde{O}(\varepsilon^{-3.5})$ iterations, which is the best known rate in the literature.
• Computer Science
NeurIPS
• 2018
This work characterizes the limit points of two basic first-order methods, namely Gradient Descent Ascent (GDA) and Optimistic Gradient Descent Ascent (OGDA), and shows that both dynamics avoid unstable critical points for almost all initializations.
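The GDA/OGDA distinction can be illustrated on the bilinear objective f(x, y) = x·y, where plain simultaneous GDA spirals outward but OGDA, which extrapolates each step using the previous gradient, converges to the saddle point (0, 0). A minimal sketch (step size and iteration count are illustrative assumptions, not taken from the reference):

```python
# Optimistic gradient descent ascent (OGDA) on f(x, y) = x*y, whose
# unique saddle point is (0, 0). Each player steps along
# 2*(current gradient) - (previous gradient), an extrapolation that
# damps the rotation responsible for GDA's cycling on bilinear games.
def ogda(x, y, eta=0.1, steps=4000):
    gx_prev, gy_prev = y, x            # gradients at the initial point
    for _ in range(steps):
        gx, gy = y, x                  # df/dx = y, df/dy = x
        x = x - eta * (2 * gx - gx_prev)   # extrapolated descent step
        y = y + eta * (2 * gy - gy_prev)   # extrapolated ascent step
        gx_prev, gy_prev = gx, gy
    return x, y
```

Replacing the extrapolated gradients with the plain current gradients recovers simultaneous GDA, whose iterates on this objective grow in norm by a factor of sqrt(1 + eta^2) per step.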
• Computer Science
STOC
• 2021
This result is the first to show an exponential separation between these two fundamental optimization problems in the oracle model, and comes in sharp contrast to minimization problems, where finding approximate local minima in the same setting can be done with Projected Gradient Descent using O(L/ε) many queries.
• Computer Science, Economics
AAAI
• 2019
This work introduces a generalized form of Counterfactual Regret Minimization that provably finds optimal strategies under any feasible set of convex constraints and demonstrates the effectiveness of the algorithm for finding strategies that mitigate risk in security games, and for opponent modeling in poker games when given only partial observations of private information.
• Computer Science
ArXiv
• 2019
This work generalises Nesterov's argument -- used in single-objective optimisation to derive a lower bound for a class of first-order black box optimisation algorithms -- to games and proposes a definition of the condition number arising from the lower bound analysis that matches the conditioning observed in upper bounds.
• Computer Science
ICML
• 2020
A proper mathematical definition of local optimality for this sequential setting, termed local minimax, is proposed, and its properties and existence results are presented.
• Computer Science
ArXiv
• 2018
This paper proposes a proximally guided stochastic subgradient method and a proximally guided stochastic variance-reduced method for expected and finite-sum saddle-point problems, respectively, and establishes the computational complexity of both methods for finding a nearly stationary point of the corresponding minimization problem.
• Computer Science
SIAM J. Optim.
• 2021
The proposed algorithm outperforms or matches the performance of several recently proposed schemes while, arguably, being more transparent, easier to implement, and converging with respect to a stronger criterion.
• Computer Science, Economics
NeurIPS
• 2020
It is shown that proximal gradient (a generalization of projected gradient) with a practical version of linesearch achieves linear convergence under the Proximal-PL condition, and numerical experiments show that proportional response is highly efficient for computing an approximate solution, while projected gradient with linesearch can be much faster when higher accuracy is required.
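A projected gradient step with backtracking (Armijo) linesearch, the special case of proximal gradient summarized above where the prox operator is a projection, can be sketched as follows. The problem instance and parameters are illustrative assumptions, not the reference's Fisher-market setting:

```python
import numpy as np

# Projected gradient descent with backtracking linesearch for
#   minimize f(x) = 0.5 * ||A x - b||^2  subject to  x >= 0.
# Projection onto the nonnegative orthant is the prox of the
# indicator of {x >= 0}. beta and c are illustrative linesearch
# constants (shrink factor and sufficient-decrease parameter).
def projected_gradient(A, b, x0, steps=200, beta=0.5, c=1e-4):
    x = x0
    for _ in range(steps):
        g = A.T @ (A @ x - b)                    # gradient of f
        f_x = 0.5 * np.sum((A @ x - b) ** 2)
        t = 1.0
        while True:                              # backtrack until the
            x_new = np.maximum(x - t * g, 0.0)   # Armijo-type condition
            f_new = 0.5 * np.sum((A @ x_new - b) ** 2)
            if f_new <= f_x + c * (g @ (x_new - x)) or t < 1e-10:
                break
            t *= beta
        x = x_new
    return x
```

The linesearch starts each iteration from a unit step and halves it until sufficient decrease holds, so no global Lipschitz constant needs to be known in advance.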