Publications
Mirror descent in saddle-point problems: Going the extra (gradient) mile
TLDR
This work analyzes the behavior of mirror descent in a class of non-monotone problems whose solutions coincide with those of a naturally associated variational inequality (a property called coherence), and shows that optimistic mirror descent (OMD) converges in all coherent problems.
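For concreteness, here is a minimal sketch of an extra-gradient-style optimistic update with a Euclidean mirror map, applied to the toy bilinear saddle-point problem min_x max_y x*y (my example, not the paper's implementation); plain simultaneous gradient descent-ascent spirals away from the solution (0, 0), while the extra look-ahead step pulls the iterates back toward it.

```python
# Toy sketch, not the paper's code: extra-gradient / optimistic update with a
# Euclidean mirror map on the bilinear problem min_x max_y x*y, solved by (0, 0).

def field(x, y):
    """Descent-ascent vector field for f(x, y) = x * y: (df/dx, -df/dy)."""
    return y, -x

def extra_gradient(x0=1.0, y0=1.0, eta=0.2, steps=500):
    x, y = x0, y0
    for _ in range(steps):
        gx, gy = field(x, y)
        x_lead, y_lead = x - eta * gx, y - eta * gy   # look-ahead (extrapolation) step
        gx, gy = field(x_lead, y_lead)
        x, y = x - eta * gx, y - eta * gy             # update from the base point
    return x, y

print(extra_gradient())  # close to (0.0, 0.0)
```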
The Unusual Effectiveness of Averaging in GAN Training
TLDR
It is shown that, in simple bilinear games, EMA converges to limit cycles around the equilibrium whose amplitude vanishes as the discount parameter approaches one, and that it also enhances the stability of general GAN training.
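A toy numerical sketch of the averaging effect (my simplified setup, not the paper's experiments): in the bilinear game min_x max_y x*y, alternating gradient descent-ascent cycles around the equilibrium (0, 0), while an exponential moving average (EMA) of the iterates stays close to it, with an amplitude that shrinks as the discount parameter beta approaches one (given a long enough run).

```python
# Toy sketch, not the paper's code: EMA of cycling iterates in min_x max_y x*y.
def ema_of_alternating_gda(x0=1.0, y0=1.0, eta=0.05, beta=0.999, steps=20000):
    x, y = x0, y0
    ema_x, ema_y = x, y
    for _ in range(steps):
        x = x - eta * y              # descent step for the minimizing player
        y = y + eta * x              # ascent step, using the freshly updated x
        ema_x = beta * ema_x + (1.0 - beta) * x
        ema_y = beta * ema_y + (1.0 - beta) * y
    return (x, y), (ema_x, ema_y)

last, averaged = ema_of_alternating_gda()
print(last)      # still orbiting far from (0, 0)
print(averaged)  # close to (0, 0)
```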
First-order Methods Almost Always Avoid Saddle Points
TLDR
It is established that first-order methods avoid saddle points for almost all initializations, and that neither access to second-order derivative information nor randomness beyond initialization is necessary to provably avoid saddle points.
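A minimal numerical sketch of this phenomenon (my toy example, not the paper's construction): gradient descent on the strict saddle f(x, y) = x^2 - y^2 gets stuck at the saddle only when started exactly on its stable manifold (y = 0), an event of measure zero under random initialization.

```python
# Toy sketch, not the paper's proof: escaping the strict saddle of f(x, y) = x**2 - y**2.
import numpy as np

def gradient_descent(x, y, eta=0.1, steps=100):
    # The gradient of f(x, y) = x**2 - y**2 is (2*x, -2*y); the origin is a strict saddle.
    for _ in range(steps):
        x, y = x - eta * 2 * x, y - eta * (-2 * y)
    return x, y

print(gradient_descent(1.0, 0.0))    # started on the stable manifold: converges to the saddle (0, 0)

rng = np.random.default_rng(0)
y0 = 1e-6 * rng.standard_normal()    # generic (random) initialization: y0 != 0 almost surely
print(gradient_descent(1.0, y0))     # the y-coordinate grows and the iterates escape the saddle
```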
Cycles in adversarial regularized learning
TLDR
It is shown that the system's behavior is Poincaré recurrent, implying that almost every trajectory revisits any (arbitrarily small) neighborhood of its starting point infinitely often.
First-order methods almost always avoid strict saddle points
TLDR
It is established that first-order methods avoid strict saddle points for almost all initializations, and that neither access to second-order derivative information nor randomness beyond initialization is necessary to provably avoid strict saddle points.
Multiplicative updates outperform generic no-regret learning in congestion games: extended abstract
TLDR
The results show that natural learning behavior can avoid the bad outcomes predicted by the price of anarchy in atomic congestion games such as the load-balancing game introduced by Koutsoupias and Papadimitriou, which has a super-constant price of anarchy and correlated equilibria that are exponentially worse than any mixed Nash equilibrium.
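As a rough illustration of the multiplicative-weights rule itself (a hypothetical toy instance, not a reproduction of the paper's analysis or its game instances): in a symmetric load-balancing game, each agent multiplicatively down-weights machines in proportion to their realized load.

```python
# Toy sketch, not the paper's experiment: MWU in a symmetric load-balancing game.
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_machines, eps, rounds = 20, 4, 0.1, 2000
weights = np.ones((n_agents, n_machines))

for _ in range(rounds):
    probs = weights / weights.sum(axis=1, keepdims=True)
    # Every agent samples a machine from its current mixed strategy.
    choices = np.array([rng.choice(n_machines, p=p) for p in probs])
    loads = np.bincount(choices, minlength=n_machines)
    # MWU step: the cost of a machine is its realized load, normalized to [0, 1].
    weights *= (1.0 - eps) ** (loads / n_agents)

print(loads)                  # realized machine loads in the final round
print(np.round(probs[0], 2))  # a representative agent's mixed strategy (typically roughly uniform)
```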
α-Rank: Multi-Agent Evaluation by Evolution
We introduce α-Rank, a principled evolutionary dynamics methodology for the evaluation and ranking of agents in large-scale multi-agent interactions, grounded in a novel dynamical game-theoretic …
Gradient Descent Only Converges to Minimizers: Non-Isolated Critical Points and Invariant Regions
TLDR
It is proved that the set of initial conditions from which gradient descent converges to saddle points where the Hessian of f has at least one strictly negative eigenvalue has (Lebesgue) measure zero, even for cost functions f with non-isolated critical points, answering an open question in [12].
Multiplicative Weights Update in Zero-Sum Games
TLDR
If equilibria are indeed predictive even for the benchmark class of zero-sum games, then agents in practice must deviate robustly from the axiomatic perspective of optimization-driven dynamics, as captured by MWU and its variants, and apply carefully tailored equilibrium-seeking behavioral dynamics.
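A small numerical sketch of the non-convergence phenomenon (a toy run under my own parameter choices, not the paper's proof): both players run MWU (Hedge) with a fixed step in Matching Pennies, whose only Nash equilibrium is the uniform mix; the day-to-day strategies drift away from it toward the boundary of the simplex rather than settling down.

```python
# Toy sketch, not the paper's analysis: MWU (Hedge) in Matching Pennies.
import numpy as np

A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])            # row player's payoff matrix; the column player gets -A
eps = 0.1
x = np.array([0.6, 0.4])               # row player's mixed strategy
y = np.array([0.5, 0.5])               # column player's mixed strategy

for _ in range(3000):
    u_row, u_col = A @ y, -(A.T @ x)             # expected payoff of each pure strategy
    x = x * np.exp(eps * u_row); x /= x.sum()    # simultaneous multiplicative updates
    y = y * np.exp(eps * u_col); y /= y.sum()

print(x, y)  # well away from the equilibrium (0.5, 0.5), near the simplex boundary
```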
Gradient Descent Converges to Minimizers: The Case of Non-Isolated Critical Points
We prove that the set of initial conditions from which gradient descent converges to strict saddle points has (Lebesgue) measure zero, even for non-isolated critical points, answering an open question …