Corpus ID: 226299673

Stability of Gradient Learning Dynamics in Continuous Games: Vector Action Spaces

Benjamin J. Chasnov, Daniel J. Calderone, Behçet Açikmese, Samuel A. Burden, Lillian J. Ratliff
Towards characterizing the optimization landscape of games, this paper analyzes the stability and spectrum of gradient-based dynamics near fixed points of two-player continuous games. We introduce the quadratic numerical range as a method to bound the spectrum of game dynamics linearized about local equilibria. We also analyze the stability of differential Nash equilibria and their robustness to variation in agents' learning rates. Our results show that by decomposing the game Jacobian into… 
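The dynamics described in the abstract can be illustrated concretely. Below is a minimal sketch, not taken from the paper, of a hypothetical two-player quadratic game: each player follows their individual gradient, and stability of the learning dynamics near the fixed point is checked via the spectrum of the learning-rate-scaled game Jacobian. The cost functions, coefficients, and learning rates are all illustrative assumptions.

```python
import numpy as np

# Hypothetical two-player quadratic game (illustrative only):
#   f1(x, y) = 0.5*a*x^2 + b*x*y
#   f2(x, y) = 0.5*c*y^2 - b*x*y
a, b, c = 1.0, 2.0, 1.0

def grad1(x, y):
    # Player 1's individual gradient, D_x f1
    return a * x + b * y

def grad2(x, y):
    # Player 2's individual gradient, D_y f2
    return c * y - b * x

# Game Jacobian of the combined gradient field g = (D_x f1, D_y f2),
# linearized about the fixed point (0, 0):
J = np.array([[a,  b],
              [-b, c]])

# Per-player learning rates scale the rows of J; stability of the
# continuous-time gradient dynamics xdot = -Lam @ g(x) is governed
# by the spectrum of Lam @ J: all eigenvalues in the open right
# half-plane implies the fixed point is locally stable.
gamma1, gamma2 = 0.1, 0.01
Lam = np.diag([gamma1, gamma2])
eigs = np.linalg.eigvals(Lam @ J)
stable = bool(np.all(eigs.real > 0))
print(stable)  # True for these coefficients and learning rates
```

Varying `gamma1` and `gamma2` in this sketch shifts the spectrum of `Lam @ J`, which is the kind of learning-rate robustness question the paper studies.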



Stability of Gradient Learning Dynamics in Continuous Games: Scalar Action Spaces
A natural model of learning based on individual gradients in two-player continuous games is studied, and it is found that equilibria that are both stable and Nash are robust to variations in learning rates.
Learning in games with continuous action sets and unknown payoff functions
This paper focuses on learning via "dual averaging", a widely used class of no-regret learning schemes in which players take small steps along their individual payoff gradients and then "mirror" the output back to their action sets; it also introduces the notion of variational stability.
Implicit Learning Dynamics in Stackelberg Games: Equilibria Characterization, Convergence Analysis, and Empirical Study
This work provides insights into the optimization landscape of zero-sum games by establishing connections between Nash and Stackelberg equilibria along with the limit points of simultaneous gradient descent, and derives novel gradient-based learning dynamics emulating the natural structure of a Stackelberg game using the implicit function theorem.
On Gradient-Based Learning in Continuous Games
A general framework for competitive gradient-based learning is introduced that allows a wide breadth of learning algorithms to be analyzed, including policy gradient reinforcement learning, gradient-based bandits, and certain online convex optimization algorithms.
Convergence Analysis of Gradient-Based Learning in Continuous Games
Considering a class of gradient-based multiagent learning algorithms in non-cooperative settings, convergence guarantees to a neighborhood of a stable Nash equilibrium are provided.
Characterization and computation of local Nash equilibria in continuous games
Drawing on this analogy, an iterative steepest descent algorithm is proposed for numerical approximation of local Nash equilibria and a sufficient condition ensuring local convergence of the algorithm is provided.
From Darwin to Poincaré and von Neumann: Recurrence and Cycles in Evolutionary and Algorithmic Game Theory
It is proved that the dynamics exhibit Poincaré recurrence (almost all orbits come arbitrarily close to their initial conditions infinitely often) if and only if the system has an interior Nash equilibrium, and that two degrees of freedom are sufficient to prove periodicity.
Genericity and structural stability of non-degenerate differential Nash equilibria
It is demonstrated that equilibria that are computable using decoupled myopic approximate best-response persist under perturbations to the cost functions of individual players, implying that second-order conditions suffice to characterize local Nash equilibria in an open-dense set of games where player costs are smooth functions.
On the Characterization of Local Nash Equilibria in Continuous Games
A unified framework for characterizing local Nash equilibria in continuous games on either infinite-dimensional or finite-dimensional non-convex strategy spaces is presented, along with a sufficient condition (non-degeneracy) guaranteeing that differential Nash equilibria are isolated and a proof that such equilibria are structurally stable.
Global Convergence of Policy Gradient for Sequential Zero-Sum Linear Quadratic Dynamic Games
These policy gradient based algorithms are akin to the Stackelberg leadership model and can be extended to model-free settings; it is shown that if the leader performs natural gradient descent/ascent, the proposed algorithm converges globally to the Nash equilibrium at a sublinear rate.