Corpus ID: 238531271

Nash Convergence of Mean-Based Learning Algorithms in First Price Auctions

@article{Deng2021NashCO,
  title={Nash Convergence of Mean-Based Learning Algorithms in First Price Auctions},
  author={Xiaotie Deng and Xinyan Hu and Tao Lin and Weiqiang Zheng},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.03906}
}
We consider repeated first price auctions where each bidder, having a deterministic type, learns to bid using a mean-based learning algorithm. We completely characterize the Nash convergence property of the bidding dynamics in two senses: (1) time-average: the fraction of rounds where bidders play a Nash equilibrium approaches 1 in the limit; (2) last-iterate: the mixed strategy profile of bidders approaches a Nash equilibrium in the limit. Specifically, the results depend on the number…
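
As a concrete, entirely hypothetical illustration of such dynamics, the sketch below simulates two bidders running ε-greedy over a discretized bid grid with full-information counterfactual feedback; ε-greedy over cumulative historical utilities is one simple instance of a mean-based rule. The grid, values, horizon, and exploration rate are arbitrary demo choices, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

grid = np.linspace(0.0, 1.0, 21)   # discretized bid space (assumption)
values = [1.0, 1.0]                # two bidders with the same deterministic value
T, eps = 20000, 0.05               # horizon and exploration rate (arbitrary)

cum = [np.zeros(len(grid)) for _ in values]   # cumulative counterfactual utility per bid
play_counts = np.zeros((2, len(grid)))

def fp_utility(v, my_bid, other_bid):
    """First-price utility: the winner pays its own bid; ties split the surplus."""
    if my_bid > other_bid:
        return v - my_bid
    if my_bid == other_bid:
        return 0.5 * (v - my_bid)
    return 0.0

for t in range(T):
    # Explore with probability eps, otherwise play the bid with the best history.
    ks = [rng.integers(len(grid)) if rng.random() < eps else int(np.argmax(c)) for c in cum]
    for i in range(2):
        other = grid[ks[1 - i]]
        # Full-information feedback: update every bid's counterfactual utility.
        cum[i] += np.array([fp_utility(values[i], b, other) for b in grid])
        play_counts[i, ks[i]] += 1

for i in range(2):
    top = np.argsort(play_counts[i])[-3:][::-1]
    print(f"bidder {i} most-played bids:",
          [(round(grid[k], 2), int(play_counts[i, k])) for k in top])
```

The printout shows where play accumulates; the paper characterizes exactly when dynamics of this kind concentrate on a Nash equilibrium of the one-shot auction.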


References

Showing 1-10 of 45 references
Convergence Analysis of No-Regret Bidding Algorithms in Repeated Auctions
TLDR
The convergence of no-regret bidding algorithms in auctions is studied; it is shown that if bidders use any mean-based learning rule, then with high probability they converge to the truthful pure Nash equilibrium in a second price auction and in a VCG auction in the multi-slot setting, and to the Bayesian Nash equilibrium in a first price auction.
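
To illustrate the second-price claim, here is a hedged variation of the same ε-greedy mean-based dynamic with a second-price payment rule; the values, grid, and rates are again arbitrary. Each bidder's best historical bid should drift toward its true value, the truthful equilibrium.

```python
import numpy as np

rng = np.random.default_rng(1)

grid = np.linspace(0.0, 1.0, 21)
values = [0.9, 0.6]             # deterministic values lying on the grid (assumption)
T, eps = 10000, 0.05
cum = [np.zeros(len(grid)) for _ in values]

def sp_utility(v, my_bid, other_bid):
    """Second-price utility: the winner pays the other bid; ties split the surplus."""
    if my_bid > other_bid:
        return v - other_bid
    if my_bid == other_bid:
        return 0.5 * (v - other_bid)
    return 0.0

for t in range(T):
    ks = [rng.integers(len(grid)) if rng.random() < eps else int(np.argmax(c)) for c in cum]
    for i in range(2):
        other = grid[ks[1 - i]]
        cum[i] += np.array([sp_utility(values[i], b, other) for b in grid])

# Each bidder's best historical bid should approach its true value (truthful NE).
print("best bids:", [round(grid[int(np.argmax(c))], 2) for c in cum])
```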
Selling to a No-Regret Buyer
TLDR
This work provides a fairly complete characterization of optimal auctions for the seller in this domain and suggests the seller's optimal achievable revenue is characterized by a linear program, and can be unboundedly better than the best truthful auction yet simultaneously unboundingly worse than the expected welfare.
On Learning Algorithms for Nash Equilibria
TLDR
This work revisits a 3×3 game defined by Shapley in the 1950s to establish that fictitious play does not converge in general games, and shows via a potential function argument that in a variety of settings the multiplicative updates algorithm fails to find the unique Nash equilibrium.
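
Shapley's game is small enough that the non-convergence of fictitious play is easy to observe numerically. The sketch below uses one standard presentation of the payoff matrices; the unique Nash equilibrium is uniform play by both players, yet the empirical frequencies keep cycling rather than settling there.

```python
import numpy as np

# One standard presentation of Shapley's 3x3 bimatrix game.
A = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]])  # row player's payoffs
B = np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]])  # column player's payoffs

T = 100000
row_counts = np.zeros(3)
col_counts = np.zeros(3)
row_counts[0] = col_counts[0] = 1   # arbitrary initial play

for t in range(T):
    # Each player best-responds to the opponent's empirical mixed strategy.
    r = int(np.argmax(A @ (col_counts / col_counts.sum())))
    c = int(np.argmax((row_counts / row_counts.sum()) @ B))
    row_counts[r] += 1
    col_counts[c] += 1

# The unique Nash equilibrium is (1/3, 1/3, 1/3) for both players, but the
# empirical frequencies cycle and do not converge to it.
print("row frequencies:", np.round(row_counts / row_counts.sum(), 3))
print("col frequencies:", np.round(col_counts / col_counts.sum(), 3))
```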
Prediction, learning, and games
TLDR
This chapter discusses prediction with expert advice, efficient forecasters for large classes of experts, and randomized prediction for specific losses.
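
The book's central object, the exponentially weighted average forecaster (Hedge), fits in a few lines. The following is a minimal sketch with the classic tuned learning rate η = sqrt(8 ln N / T), run on synthetic losses; the loss data and parameters are made up for the demo.

```python
import numpy as np

def hedge(loss_matrix, eta=None):
    """Exponentially weighted average forecaster over T rounds.

    loss_matrix: T x N array, loss_matrix[t, i] = loss of expert i at round t (in [0, 1]).
    Returns the learner's per-round expected losses.
    """
    T, N = loss_matrix.shape
    if eta is None:
        eta = np.sqrt(8 * np.log(N) / T)          # classic tuned learning rate
    weights = np.ones(N)
    learner_losses = []
    for t in range(T):
        p = weights / weights.sum()               # play the normalized weights
        learner_losses.append(p @ loss_matrix[t]) # expected loss this round
        weights *= np.exp(-eta * loss_matrix[t])  # downweight experts that lost
    return np.array(learner_losses)

# Toy check: regret to the best expert grows like O(sqrt(T log N)).
rng = np.random.default_rng(0)
losses = rng.random((5000, 10))
losses[:, 3] -= 0.2                 # make expert 3 consistently better
losses = np.clip(losses, 0.0, 1.0)
regret = hedge(losses).sum() - losses.sum(axis=0).min()
print("total regret:", round(float(regret), 2))
```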
The Theory of Learning in Games
In economics, most noncooperative game theory has focused on equilibrium in games, especially Nash equilibrium and its refinements. The traditional explanation for when and why equilibrium arises is that it results from analysis and introspection by the players, in a situation where the rules of the game, the rationality of the players, and the players' payoff functions are all common knowledge.
Auctions Between Regret-Minimizing Agents
TLDR
This work analyzes a scenario in which software agents implemented as regret-minimizing algorithms engage in a repeated auction on behalf of their users and shows that, surprisingly, in second price auctions the players have incentives to misreport their true valuations to their own learning agents.
Learning New Auction Format by Bidders in Internet Display Ad Auctions
TLDR
This work constitutes one of the first field studies of bidders' responses to auction format changes, providing an important complement to theoretical model predictions and valuable information to auction designers considering the implementation of different formats.
Learning equilibria in symmetric auction games using artificial neural networks
TLDR
The method follows the simultaneous gradient of the game and uses a smoothing technique to circumvent discontinuities in the ex-post utility functions, allowing it to provably learn local equilibria in symmetric auction games.
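
A heavily simplified stand-in for the idea, with linear strategies in place of the paper's neural networks: two bidders with uniform values play a first price auction, a sigmoid replaces the discontinuous win indicator, and both strategy slopes follow the simultaneous gradient. The smoothing temperature, learning rate, and starting point are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

tau = 0.05                      # smoothing temperature (assumption)
lr = 0.05                       # learning rate (assumption)
theta = np.array([0.8, 0.2])    # slopes of linear bid strategies b_i(v) = theta_i * v

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(2000):
    v = rng.random((10000, 2))              # fresh uniform value samples each step
    b = theta * v                           # bids under the current strategies
    grads = np.zeros(2)
    for i in range(2):
        z = (b[:, i] - b[:, 1 - i]) / tau
        s = sigmoid(z)                      # smoothed win probability
        # Gradient of E[(v_i - theta_i * v_i) * sigmoid(z)] with respect to theta_i.
        grads[i] = np.mean((v[:, i] - b[:, i]) * s * (1 - s) * v[:, i] / tau
                           - v[:, i] * s)
    theta += lr * grads                     # simultaneous gradient ascent

# The known symmetric equilibrium with two U[0,1] bidders is b(v) = v/2, so both
# slopes should end up near 0.5.
print("learned slopes:", np.round(theta, 3))
```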
Learning to Bid in Contextual First Price Auctions
TLDR
A lower bound result is provided such that any bidding policy in a broad class must achieve regret at least Ω(√T), even when the learner receives full-information feedback and F is known.
Linear Last-iterate Convergence in Constrained Saddle-point Optimization
TLDR
This work significantly expands the understanding of last-iterate convergence for OGDA and OMWU in the constrained setting, and introduces a sufficient condition under which OGDA exhibits concrete last-iterate convergence rates with a constant learning rate, a condition that holds for strongly-convex-strongly-concave functions.
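
A minimal sketch of projected OGDA on a constrained bilinear saddle-point problem, using a toy matrix game over probability simplices as a stand-in for the paper's more general setting; the step size and starting points are arbitrary.

```python
import numpy as np

def proj_simplex(v):
    """Euclidean projection onto the probability simplex (standard sort-based method)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / np.arange(1, len(v) + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1), 0.0)

# Bilinear game min_x max_y x^T A y over two simplices; rock-paper-scissors
# payoffs, so the unique equilibrium is uniform play by both players.
A = np.array([[0.0, 1.0, -1.0],
              [-1.0, 0.0, 1.0],
              [1.0, -1.0, 0.0]])
eta = 0.05                              # constant step size (assumption)
x = np.array([0.8, 0.1, 0.1])
y = np.array([0.1, 0.1, 0.8])
gx_prev, gy_prev = A @ y, A.T @ x

for t in range(1, 5001):
    gx, gy = A @ y, A.T @ x                           # gradients at the current iterate
    x = proj_simplex(x - eta * (2 * gx - gx_prev))    # optimistic descent step
    y = proj_simplex(y + eta * (2 * gy - gy_prev))    # optimistic ascent step
    gx_prev, gy_prev = gx, gy
    if t % 1000 == 0:
        gap = np.max(A.T @ x) - np.min(A @ y)         # duality gap; 0 at equilibrium
        print(f"iter {t}: duality gap {gap:.2e}")
```

The printed duality gap should shrink toward 0 along the trajectory itself, illustrating last-iterate (rather than merely time-average) convergence.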