Publications
Mirror descent in saddle-point problems: Going the extra (gradient) mile
TLDR
This work analyzes the behavior of mirror descent in a class of non-monotone problems whose solutions coincide with those of a naturally associated variational inequality, a property called coherence, and shows that optimistic mirror descent (OMD) converges in all coherent problems.
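To make the "extra (gradient)" step concrete, here is a minimal sketch of optimistic/extra-gradient mirror descent on an unconstrained bilinear saddle-point problem with the Euclidean mirror map; the function omd_bilinear, the matrix A, and the stepsize are illustrative assumptions, not the paper's own setup.

```python
import numpy as np

def omd_bilinear(A, steps=200, lr=0.1, seed=0):
    # Extra-gradient / optimistic mirror descent sketch for
    # min_x max_y x^T A y; with the Euclidean mirror map the
    # "mirror" step reduces to a plain gradient step.
    rng = np.random.default_rng(seed)
    x, y = rng.normal(size=A.shape[0]), rng.normal(size=A.shape[1])
    for _ in range(steps):
        # leading step: probe the operator ahead of the current point
        x_half, y_half = x - lr * (A @ y), y + lr * (A.T @ x)
        # update step: move from the *base* point, using the probed gradient
        x, y = x - lr * (A @ y_half), y + lr * (A.T @ x_half)
    return x, y

A = np.array([[1.0, -1.0], [-1.0, 2.0]])
x, y = omd_bilinear(A)
print(np.linalg.norm(x), np.linalg.norm(y))  # both shrink toward the saddle at (0, 0)
```

Dropping the leading step recovers plain gradient descent-ascent, which spirals away from the saddle on bilinear problems; the extra half-step is what restores convergence.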
Cycles in adversarial regularized learning
TLDR
It is shown that the system's behavior is Poincaré recurrent, implying that almost every trajectory revisits any (arbitrarily small) neighborhood of its starting point infinitely often.
Learning in games with continuous action sets and unknown payoff functions
TLDR
This paper focuses on learning via “dual averaging”, a widely used class of no-regret learning schemes where players take small steps along their individual payoff gradients and then “mirror” the output back to their action sets, and introduces the notion of variational stability.
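As a rough sketch of the scheme just described, with the entropic regularizer on the simplex so the mirror map is a softmax; the payoff function and the name dual_averaging are hypothetical illustrations.

```python
import numpy as np

def softmax(v):
    z = np.exp(v - v.max())
    return z / z.sum()

def dual_averaging(payoff_grad, n_actions, steps=2000, lr=0.05):
    # Dual averaging sketch on the simplex: gradients accumulate in a
    # dual "score" vector; the entropic mirror map (softmax) sends the
    # scores back to a mixed strategy.
    score = np.zeros(n_actions)
    x = softmax(score)
    for _ in range(steps):
        score += lr * payoff_grad(x)   # small step along the payoff gradient
        x = softmax(score)             # "mirror" the output back to the simplex
    return x

# hypothetical concave payoff u(x) = -||x - target||^2 with interior maximizer
target = np.array([0.2, 0.5, 0.3])
print(dual_averaging(lambda x: -2.0 * (x - target), 3))  # ≈ target
```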
Penalty-Regulated Dynamics and Robust Learning Procedures in Games
TLDR
A new class of continuous-time learning dynamics is designed, consisting of a replicator-like drift adjusted by a penalty term that renders the boundary of the game’s strategy space repelling, together with a discrete-time, payoff-based learning algorithm that retains these convergence properties and only requires players to observe their in-game payoffs.
Learning in Games via Reinforcement and Regularization
TLDR
This paper extends several properties of exponential learning, including the elimination of dominated strategies, the asymptotic stability of strict Nash equilibria, and the convergence of time-averaged trajectories in zero-sum games with an interior Nash equilibrium.
On the convergence of single-call stochastic extra-gradient methods
TLDR
A synthetic view of Extra-Gradient algorithms is developed, and it is shown that they retain an $\mathcal{O}(1/t)$ ergodic convergence rate in smooth, deterministic problems.
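The "single-call" idea admits a short sketch: reuse the previous iteration's gradient for the leading step, so each iteration queries the oracle once; the operator, names, and stepsize below are illustrative assumptions.

```python
import numpy as np

def past_extragradient(op, x0, steps=300, lr=0.1):
    # Single-call ("past") extra-gradient sketch: the leading step reuses
    # the gradient from the previous iteration, so each iteration makes
    # one oracle call instead of extra-gradient's two.
    x, g_prev = x0.copy(), op(x0)      # one warm-up oracle call
    avg = np.zeros_like(x)
    for t in range(1, steps + 1):
        x_lead = x - lr * g_prev       # leading step with the *past* gradient
        g_prev = op(x_lead)            # the single oracle call of this iteration
        x = x - lr * g_prev            # update step
        avg += (x - avg) / t           # ergodic average (the O(1/t) iterate)
    return x, avg

# toy monotone operator: a rotation plus a mild shrink, with zero at the origin
M = np.array([[0.1, 1.0], [-1.0, 0.1]])
x_last, x_avg = past_extragradient(lambda z: M @ z, np.array([1.0, 1.0]))
print(np.linalg.norm(x_last), np.linalg.norm(x_avg))  # both decay toward 0
```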
Stochastic Mirror Descent in Variationally Coherent Optimization Problems
TLDR
This paper focuses on the widely used stochastic mirror descent (SMD) family of algorithms, and shows that the last iterate of SMD converges to the problem’s solution set with probability 1, contributing to the landscape of non-convex stochastic optimization by clarifying that neither pseudo-/quasiconvexity nor star-convexity is essential for (almost sure) global convergence.
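For concreteness, a sketch of one SMD member: exponentiated gradient on the simplex with noisy gradients and square-summable-but-not-summable stepsizes, keeping the last iterate rather than an average; the toy objective and all names are assumptions for illustration.

```python
import numpy as np

def smd_simplex(grad, x0, steps=5000, noise=0.1, seed=1):
    # Stochastic mirror descent sketch with the entropic mirror map
    # (exponentiated gradient); stepsizes gamma_t = t^(-0.6) are
    # square-summable but not summable, and we keep the *last* iterate.
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for t in range(1, steps + 1):
        g_hat = grad(x) + noise * rng.normal(size=x.size)  # noisy gradient oracle
        x = x * np.exp(-g_hat / t ** 0.6)                  # entropic mirror step
        x /= x.sum()
    return x

# toy convex (hence coherent) problem: min ||x - target||^2 over the simplex
target = np.array([0.6, 0.3, 0.1])
print(smd_simplex(lambda x: 2.0 * (x - target), np.full(3, 1 / 3)))  # ≈ target
```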
Higher order game dynamics
TLDR
This paper shows that strictly dominated strategies become extinct in n-th order payoff-monotonic dynamics n orders faster than in the corresponding first-order dynamics; furthermore, in stark contrast to the first-order case, weakly dominated strategies also become extinct for n ⩾ 2.
Explore Aggressively, Update Conservatively: Stochastic Extragradient Methods with Variable Stepsize Scaling
TLDR
This paper investigates a double stepsize extragradient algorithm in which the exploration step evolves at a more aggressive time-scale than the update step, shows that this modification allows the method to converge even with stochastic gradients, and derives sharp convergence rates under an error bound condition.
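A sketch of the two-stepsize idea: an aggressive exploration stepsize gamma_t paired with a conservative update stepsize eta_t, with eta_t/gamma_t vanishing; the operator, exponents, and names are illustrative choices rather than the paper's.

```python
import numpy as np

def two_stepsize_seg(op, x0, steps=5000, noise=0.1, seed=2):
    # Stochastic extra-gradient sketch with *two* stepsizes: exploration
    # uses an aggressive gamma_t, the update a conservative eta_t, and
    # eta_t / gamma_t -> 0 as t grows.
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for t in range(1, steps + 1):
        gamma, eta = 0.5 / t ** 0.5, 0.5 / t ** 0.9
        g = lambda z: op(z) + noise * rng.normal(size=z.size)  # stochastic oracle
        x_lead = x - gamma * g(x)      # explore aggressively
        x = x - eta * g(x_lead)        # update conservatively
    return x

# toy monotone operator with its solution at the origin (illustrative)
M = np.array([[0.2, 1.0], [-1.0, 0.2]])
print(np.linalg.norm(two_stepsize_seg(lambda z: M @ z, np.array([2.0, -1.0]))))
```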
A Resource Allocation Framework for Network Slicing
TLDR
This paper proposes a novel optimization framework which allows fine-grained resource allocation for slices both in terms of network bandwidth and cloud processing, and demonstrates the method's fast convergence in a wide range of quasi-stationary and dynamic settings.