Superhuman AI for multiplayer poker

@article{Brown2019SuperhumanAF,
  title={Superhuman AI for multiplayer poker},
  author={Noam Brown and Tuomas Sandholm},
  journal={Science},
  year={2019},
  volume={365},
  pages={885--890}
}
AI now masters six-player poker

Computer programs have shown superiority over humans in two-player games such as chess, Go, and heads-up, no-limit Texas hold'em poker. However, poker games usually include six players—a much trickier challenge for artificial intelligence than the two-player variant. Brown and Sandholm developed a program, dubbed Pluribus, that learned how to play six-player no-limit Texas hold'em by playing against five copies of itself (see the Perspective by Blair and…)
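The self-play recipe described above (an agent improving by playing against copies of itself) can be illustrated on a toy zero-sum game. The sketch below uses fictitious play on rock-paper-scissors; the game, payoffs, and update rule are illustrative assumptions only, not Pluribus's actual algorithm, which combines self-play training with real-time search.

```python
# Fictitious play: each player best-responds to the opponent's empirical
# action frequencies. In two-player zero-sum games the empirical (average)
# strategies converge to a Nash equilibrium -- for rock-paper-scissors,
# the uniform mix over all three actions.

# Row player's payoff (rows: rock, paper, scissors; columns likewise).
PAYOFF = [
    [0, -1, 1],
    [1, 0, -1],
    [-1, 1, 0],
]

def best_response(opp_counts):
    """Pure action maximizing expected payoff against the opponent's
    empirical mix (ties broken toward the lowest action index)."""
    total = sum(opp_counts)
    values = [
        sum(PAYOFF[a][b] * opp_counts[b] / total for b in range(3))
        for a in range(3)
    ]
    return max(range(3), key=lambda a: values[a])

def fictitious_play(iterations=20000):
    # Start each player's empirical counts from a uniform prior.
    counts = [[1, 1, 1], [1, 1, 1]]
    for _ in range(iterations):
        a1 = best_response(counts[1])  # player 1 best-responds to player 2
        a2 = best_response(counts[0])  # player 2 best-responds to player 1
        counts[0][a1] += 1
        counts[1][a2] += 1
    total = sum(counts[0])
    return [c / total for c in counts[0]]

avg = fictitious_play()
print(avg)  # each frequency approaches 1/3
```

This converges slowly (the best-response runs grow longer over time), which is one reason practical systems use regret-based methods instead.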
TLDR: A new computer player called Pluribus exceeds human performance for six-player Texas hold'em poker, and may lead to algorithms with wider applicability.

Citations
Exploiting Opponents Under Utility Constraints in Sequential Games
TLDR: This paper addresses the problem of designing artificial agents that learn to effectively exploit unknown human opponents while playing repeatedly against them online, and formalizes a set of linear inequalities encoding the conditions that the agent's strategy must satisfy at each iteration so as not to violate the given bounds on the human's expected utility.
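The linear-inequality constraints mentioned in the summary can be pictured in a one-shot setting: the agent's mixed strategy must keep the opponent's expected utility above a bound, and that requirement is linear in the agent's strategy. The matrix, strategies, and bound below are made-up illustrative numbers; the paper's actual formulation covers sequential games.

```python
# One-shot sketch of a utility-constrained exploitation check.
# B[i][j] is the OPPONENT's utility when the agent plays i and the
# opponent plays j; x is the agent's mixed strategy, y the estimated
# opponent strategy. The constraint x^T B y >= lower_bound is linear in x.

def expected_utility(x, B, y):
    """Opponent's expected utility x^T B y for mixed strategies x, y."""
    return sum(
        x[i] * B[i][j] * y[j]
        for i in range(len(x))
        for j in range(len(y))
    )

def satisfies_bound(x, B, y, lower_bound):
    """True if the agent's strategy x keeps the opponent's expected
    utility at or above lower_bound."""
    return expected_utility(x, B, y) >= lower_bound

B = [[2.0, 0.0],
     [1.0, 3.0]]   # hypothetical opponent payoffs
y = [0.5, 0.5]     # estimated opponent strategy

print(satisfies_bound([0.5, 0.5], B, y, 1.0))  # True: expected utility 1.5
print(satisfies_bound([1.0, 0.0], B, y, 1.5))  # False: expected utility 1.0
```

In the sequential setting the same idea yields one linear inequality per constraint, which is what makes the agent's exploitation problem tractable to solve online.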
From Chess and Atari to StarCraft and Beyond: How Game AI is Driving the World of AI
TLDR: The algorithms and methods that have paved the way for these breakthroughs are reviewed, noting that advances in Game AI are starting to be extended to areas outside of games, such as robotics or the synthesis of chemicals.
AI in Games: Techniques, Challenges and Opportunities
TLDR: A survey of recent successful game AI systems, covering board game AIs, card game AIs, first-person shooting game AIs and real-time strategy game AIs, comparing the main difficulties among different kinds of games for the intelligent decision-making field.
Creating Pro-Level AI for Real-Time Fighting Game with Deep Reinforcement Learning
TLDR: A practical reinforcement learning method is presented that includes a novel self-play curriculum and data-skipping techniques that can increase data efficiency and facilitate exploration in vast spaces.
Your Buddy, the Grandmaster: Repurposing the Game-Playing AI Surplus for Inclusivity
TLDR: It is claimed that utilizing GPAI agents to help players overcome barriers is a productive way of repurposing the capabilities of these agents and contributes to the creation of more inclusive games.
A Survey of Planning and Learning in Games
TLDR: This paper presents a survey of the multiple methodologies proposed to integrate planning and learning in the context of games, covering both their theoretical foundations and applications, and also presents learning and planning techniques commonly used in games.
No-Press Diplomacy from Scratch
TLDR: An algorithm for action exploration and equilibrium approximation in games with combinatorial action spaces is presented, along with evidence that the resulting agent plays a strategy incompatible with human-data-bootstrapped agents, suggesting that self-play alone may be insufficient for achieving superhuman performance in Diplomacy.
ScrofaZero: Mastering Trick-taking Poker Game Gongzhu by Deep Reinforcement Learning
TLDR: This work trains a strong Gongzhu AI, ScrofaZero, from tabula rasa by deep reinforcement learning, whereas few previous efforts on solving trick-taking poker games utilize the representational power of neural networks.

References

(showing 1-10 of 33 references)
Superhuman AI for heads-up no-limit poker: Libratus beats top professionals
TLDR: Libratus, an AI that, in a 120,000-hand competition, defeated four top human specialist professionals in heads-up no-limit Texas hold'em, the leading benchmark and long-standing challenge problem in imperfect-information game solving, is presented.
DeepStack: Expert-level artificial intelligence in heads-up no-limit poker
TLDR: DeepStack is introduced, an algorithm for imperfect-information settings that combines recursive reasoning to handle information asymmetry, decomposition to focus computation on the relevant decision, and a form of intuition that is automatically learned from self-play using deep learning.
Heads-up limit hold'em poker is solved
TLDR: It is announced that heads-up limit Texas hold'em is now essentially weakly solved, and this computation formally proves the common wisdom that the dealer in the game holds a substantial advantage.
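The solvers behind these heads-up results are built on counterfactual regret minimization (CFR), whose per-decision building block is regret matching: accumulate each action's regret against the strategy actually played, then play actions in proportion to positive regret, so that the average strategy approaches equilibrium. The sketch below runs deterministic regret matching in self-play on rock-paper-scissors as a standalone toy; hold'em-scale CFR+ adds game-tree traversal, regret flooring, and weighted averaging.

```python
# Regret matching in self-play on rock-paper-scissors.
# Each player tracks cumulative regret per action; the current strategy
# is proportional to positive regret, and the *average* strategy over
# all iterations converges to the uniform Nash equilibrium.

PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # row player's payoff

def strategy_from_regrets(regrets):
    """Play actions in proportion to positive cumulative regret."""
    positive = [max(r, 0.0) for r in regrets]
    norm = sum(positive)
    return [p / norm for p in positive] if norm > 0 else [1 / 3] * 3

def train(iterations=10000):
    # Small asymmetric seed so play is not trivially uniform from step one.
    regrets = [[1.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
    strategy_sum = [[0.0] * 3, [0.0] * 3]
    for _ in range(iterations):
        strats = [strategy_from_regrets(r) for r in regrets]
        for p in range(2):
            opp = strats[1 - p]
            # Expected payoff of each pure action vs the opponent's mix.
            action_u = [
                sum(PAYOFF[a][b] * opp[b] for b in range(3)) for a in range(3)
            ]
            node_u = sum(strats[p][a] * action_u[a] for a in range(3))
            for a in range(3):
                regrets[p][a] += action_u[a] - node_u
                strategy_sum[p][a] += strats[p][a]
    total = sum(strategy_sum[0])
    return [s / total for s in strategy_sum[0]]

avg_strategy = train()
print(avg_strategy)  # each probability approaches 1/3
```

The current strategy can oscillate indefinitely; only the time-averaged strategy carries the equilibrium guarantee, which is why CFR implementations always report the average.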
DeepStack: Expert-Level Artificial Intelligence in No-Limit Poker
TLDR: DeepStack becomes the first computer program to beat professional poker players in heads-up no-limit Texas hold'em and dramatically reduces worst-case exploitability compared to the abstraction paradigm that has been favored for over a decade.
Mastering the game of Go without human knowledge
TLDR: An algorithm based solely on reinforcement learning is introduced, without human data, guidance or domain knowledge beyond game rules, that achieves superhuman performance, winning 100-0 against the previously published, champion-defeating AlphaGo.
Robust Strategies and Counter-Strategies: From Superhuman to Optimal Play
TLDR: The papers presented in this thesis encompass the complete end-to-end task of creating strong agents for extremely large games using the Abstraction-Solving-Translation procedure, and present a body of research that has contributed to each step of this task.
The challenge of poker
Solving imperfect-information games
TLDR: A strategy for two-player limit Texas hold'em poker is computed that is so close to optimal that, at the pace a human plays poker, it cannot be beaten with statistical significance in a lifetime.
One jump ahead: challenging human supremacy in checkers
TLDR: This extraordinary book tells the story of the creation of the world-champion checkers computer program, Chinook, from its beginnings in 1988 to the final match against the then world champion, Marion Tinsley, in 1992.
Mastering the game of Go with deep neural networks and tree search
TLDR: Using this search algorithm, the program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0, the first time that a computer program has defeated a human professional player in the full-sized game of Go.