Lyapunov Exponents for Diversity in Differentiable Games

@inproceedings{Lorraine2022LyapunovEF,
  title={Lyapunov Exponents for Diversity in Differentiable Games},
  author={Jonathan Lorraine and Paul Vicol and Jack Parker-Holder and Tal Kachman and Luke Metz and Jakob N. Foerster},
  booktitle={AAMAS},
  year={2022}
}
Ridge Rider (RR) is an algorithm for finding diverse solutions to optimization problems by following eigenvectors of the Hessian (“ridges”). RR is designed for conservative gradient systems (i.e., settings involving a single loss function), where it branches at saddles, which are easy-to-find bifurcation points. We generalize this idea to nonconservative, multi-agent gradient systems by proposing a method, denoted Generalized Ridge Rider (GRR), for finding arbitrary bifurcation points. We give… 
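The branching idea is easiest to see in code. The sketch below is a minimal illustration for a two-player game with losses f1 and f2, not the authors' implementation: it stacks the players' gradients into a joint vector field, computes the game Jacobian at a critical point, and branches along the real parts of its eigenvectors. The function names and the step size alpha are illustrative placeholders.

import jax
import jax.numpy as jnp

def joint_gradient(x, y, f1, f2):
    # Stack each player's gradient of its own loss: the game's vector field.
    g1 = jax.grad(f1, argnums=0)(x, y)
    g2 = jax.grad(f2, argnums=1)(x, y)
    return jnp.concatenate([g1, g2])

def branch_points(x, y, f1, f2, alpha=0.1):
    # Branch from a critical point along eigenvectors of the game Jacobian.
    n = x.size
    v0 = jnp.concatenate([x, y])

    def field(v):
        return joint_gradient(v[:n], v[n:], f1, f2)

    J = jax.jacobian(field)(v0)              # non-symmetric in general
    eigvals, eigvecs = jnp.linalg.eig(J)     # eigenvalues may be complex
    # Step along the real part of each eigenvector; complex pairs signal rotation.
    return [v0 + alpha * jnp.real(eigvecs[:, i]) for i in range(v0.size)]

# Toy bilinear zero-sum game: f1(x, y) = x.y, f2 = -f1; (0, 0) is a critical point.
f1 = lambda x, y: jnp.dot(x, y)
f2 = lambda x, y: -jnp.dot(x, y)
starts = branch_points(jnp.zeros(2), jnp.zeros(2), f1, f2)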
1 Citation
Domain Adversarial Training: A Game Perspective
TLDR
This paper shows that gradient descent in domain-adversarial training can violate the optimizer's asymptotic convergence guarantees, often hindering transfer performance, and derives a new family of optimizers that is significantly more stable, allows more aggressive learning rates, and yields large performance gains when used as a drop-in replacement for standard optimizers.

References

SHOWING 1-10 OF 121 REFERENCES
Using Bifurcations for Diversity in Differentiable Games
TLDR
This work generalizes Ridge Rider to non-conservative, multi-agent gradient systems by identifying new types of bifurcation points and proposing a method to follow eigenvectors with complex eigenvalues.
Stable Opponent Shaping in Differentiable Games
TLDR
Stable Opponent Shaping (SOS) is presented, a new method that interpolates between LOLA and a stable variant named LookAhead; SOS converges locally to equilibria and avoids strict saddles in all differentiable games.
The Mechanics of n-Player Differentiable Games
TLDR
The key result is to decompose the second-order dynamics into two components: the first is related to potential games, which reduce to gradient descent on an implicit function; the second relates to Hamiltonian games, a new class of games that obeys a conservation law akin to conservation laws in classical mechanical systems.
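The decomposition in question is a generalized Helmholtz split of the game Jacobian into symmetric and antisymmetric parts; a minimal numeric sketch, using the Jacobian of the bilinear game from the earlier sketch:

import jax.numpy as jnp

def decompose_game_jacobian(J):
    # J = S + A: the symmetric part S drives potential-game (gradient-like) dynamics,
    # the antisymmetric part A drives Hamiltonian (rotational, conservative) dynamics.
    S = 0.5 * (J + J.T)
    A = 0.5 * (J - J.T)
    return S, A

# For the bilinear zero-sum game above, J = [[0, 1], [-1, 0]] is purely antisymmetric,
# so its dynamics are entirely Hamiltonian (rotation around the equilibrium).
S, A = decompose_game_jacobian(jnp.array([[0.0, 1.0], [-1.0, 0.0]]))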
Average Case Performance of Replicator Dynamics in Potential Games via Computing Regions of Attraction
TLDR
This average-case analysis is shown to offer novel insights into classic game-theoretic challenges, including quantifying risk dominance in stag-hunt games and allowing for more nuanced performance analysis in networked coordination and congestion games with large gaps between the price of stability and the price of anarchy.
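The dynamics analyzed there are replicator dynamics; below is a small sketch of a discretized single-population version on an illustrative stag-hunt payoff matrix (the matrix, step size, and horizon are placeholders, not the paper's setup). Regions of attraction can be mapped by integrating from many initial mixtures.

import jax.numpy as jnp

def replicator_step(x, A, dt=0.01):
    # Euler step of replicator dynamics x_i' = x_i * ((A x)_i - x^T A x).
    fitness = A @ x
    average_fitness = x @ fitness
    return x + dt * x * (fitness - average_fitness)

# Illustrative stag-hunt payoffs: both pure strategies are equilibria; which one
# a trajectory reaches depends on the initial mixture (its region of attraction).
A = jnp.array([[5.0, 0.0],
               [4.0, 2.0]])
x = jnp.array([0.5, 0.5])
for _ in range(2000):
    x = replicator_step(x, A)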
Vortices Instead of Equilibria in MinMax Optimization: Chaos and Butterfly Effects of Online Learning in Zero-Sum Games
TLDR
It is proved that no meaningful prediction can be made about the day-to-day behavior of online learning dynamics in zero-sum games; this chaos is robust to all affine variants of zero-sum games, to network variants with arbitrarily many agents, and even to competitive settings beyond these.
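A crude way to see such sensitivity is to run a learning dynamic like Multiplicative Weights Update from two nearby initial conditions and watch the trajectories separate. The sketch below uses an illustrative matching-pennies game and a finite-time gap as a stand-in for a Lyapunov exponent; it is not the paper's formal construction.

import jax.numpy as jnp

def mwu(x, payoff_vector, eta=0.1):
    # Multiplicative Weights Update on the probability simplex.
    w = x * jnp.exp(eta * payoff_vector)
    return w / jnp.sum(w)

def run(x, y, A, steps=500, eta=0.1):
    # Simultaneous MWU for both players of the zero-sum game with row payoffs A.
    for _ in range(steps):
        x, y = mwu(x, A @ y, eta), mwu(y, -A.T @ x, eta)
    return x, y

A = jnp.array([[1.0, -1.0], [-1.0, 1.0]])   # matching pennies (illustrative)
x0 = jnp.array([0.5, 0.5])
xa, ya = run(x0, jnp.array([0.51, 0.49]), A)
xb, yb = run(x0, jnp.array([0.5101, 0.4899]), A)
# Finite-time proxy for sensitivity to initial conditions.
gap = jnp.linalg.norm(jnp.concatenate([xa - xb, ya - yb]))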
Differentiable Game Mechanics
TLDR
New tools to understand and control the dynamics in n-player differentiable games are developed, and basic experiments show that SGA is competitive with recently proposed algorithms for finding stable fixed points in GANs while remaining applicable to, and having guarantees in, much more general cases.
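SGA adjusts the joint gradient using the antisymmetric part of its Jacobian. The sketch below assumes the published adjusted direction xi + lambda * A^T xi and applies it to the bilinear game; the step size and lambda are illustrative, and this is not the paper's code.

import jax
import jax.numpy as jnp

def sga_step(v, field, lr=0.01, lam=1.0):
    # Symplectic-Gradient-Adjustment-style update: descend xi + lam * A^T xi,
    # where xi is the joint gradient and A the antisymmetric part of its Jacobian.
    xi = field(v)
    J = jax.jacobian(field)(v)
    A = 0.5 * (J - J.T)
    return v - lr * (xi + lam * A.T @ xi)

# Joint gradient field of the bilinear game f1 = x*y, f2 = -x*y, with v = [x, y];
# plain simultaneous gradient steps cycle or diverge here, while the adjusted
# step spirals in toward the equilibrium at the origin.
bilinear_field = lambda v: jnp.stack([v[1], -v[0]])
v = jnp.array([1.0, 1.0])
for _ in range(200):
    v = sga_step(v, bilinear_field)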
A Tight and Unified Analysis of Gradient-Based Methods for a Whole Spectrum of Differentiable Games
TLDR
A tight analysis of the convergence rate of the extragradient method (EG) in games shows that, unlike in convex minimization, EG may be much faster than gradient descent, and EG is proved to achieve the optimal rate for a wide class of algorithms with any number of extrapolations.
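The extragradient step itself is simple to state: extrapolate with the current gradient, then update from the original point using the gradient at the extrapolated point. A minimal sketch on the same bilinear field, with an illustrative step size:

import jax.numpy as jnp

def extragradient_step(v, field, lr=0.1):
    # EG: look ahead with the current gradient, then update using the gradient
    # evaluated at the look-ahead point.
    v_half = v - lr * field(v)
    return v - lr * field(v_half)

# On the bilinear field [y, -x], simultaneous gradient steps diverge, while EG
# contracts toward the equilibrium at the origin.
field = lambda v: jnp.array([v[1], -v[0]])
v = jnp.array([1.0, 1.0])
for _ in range(100):
    v = extragradient_step(v, field)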
Optimization, Learning, and Games with Predictable Sequences
TLDR
It is proved that a version of Optimistic Mirror Descent can be used by two strongly-uncoupled players in a finite zero-sum matrix game to converge to the minimax equilibrium at the rate of O((log T)/T).
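One common Euclidean instantiation of Optimistic Mirror Descent is the optimistic gradient step, which uses the previous gradient as the predictable sequence. The sketch below shows that instantiation on the bilinear field rather than the paper's matrix-game setup; the step size is illustrative.

import jax.numpy as jnp

def optimistic_step(v, g_prev, field, lr=0.1):
    # Optimistic gradient: v <- v - lr * (2 * g_t - g_{t-1}).
    g = field(v)
    return v - lr * (2.0 * g - g_prev), g

field = lambda v: jnp.array([v[1], -v[0]])   # bilinear zero-sum game, v = [x, y]
v, g_prev = jnp.array([1.0, 1.0]), jnp.zeros(2)
for _ in range(200):
    v, g_prev = optimistic_step(v, g_prev, field)   # iterates approach the minimax point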
Ridge Rider: Finding Diverse Solutions by Following Eigenvectors of the Hessian
TLDR
By iteratively following and branching amongst the ridges of the Hessian, Ridge Rider effectively spans the loss surface to find qualitatively different solutions, offering a promising direction for a variety of challenging problems.
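For a single loss, the branching rule amounts to following Hessian eigenvectors with negative eigenvalues away from a saddle. A short sketch, with the selection rule and step size alpha as illustrative placeholders rather than the paper's full algorithm:

import jax
import jax.numpy as jnp

def ridges_at_saddle(theta, loss, alpha=0.1):
    # Branch along Hessian eigenvectors with negative eigenvalues, in both directions.
    H = jax.hessian(loss)(theta)
    eigvals, eigvecs = jnp.linalg.eigh(H)   # Hessian is symmetric
    branches = []
    for i in range(theta.size):
        if eigvals[i] < 0:                  # descent directions off the saddle
            e = eigvecs[:, i]
            branches += [theta + alpha * e, theta - alpha * e]
    return branches

# Toy saddle: loss(t) = t[0]**2 - t[1]**2 has a saddle at the origin, and the
# negative-curvature direction there is the t[1] axis.
loss = lambda t: t[0] ** 2 - t[1] ** 2
branches = ridges_at_saddle(jnp.zeros(2), loss)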
Chaos of Learning Beyond Zero-sum and Coordination via Game Decompositions
TLDR
A notion of “matrix domination” and an accompanying linear program are proposed and used to characterize bimatrix games in which MWU is Lyapunov chaotic almost everywhere, indicating that chaos is a substantial issue for learning in games.