- Michael H. Bowling, Manuela M. Veloso
- Artif. Intell.
- 2002

Learning to act in a multiagent environment is a difficult problem since the normal definition of an optimal policy no longer applies. The optimal policy at any moment depends on the policies of the other agents. This creates a situation of learning a moving target. Previous learning algorithms have one of two shortcomings depending on their approach. They…

- Marc G. Bellemare, Yavar Naddaf, Joel Veness, Michael H. Bowling
- J. Artif. Intell. Res.
- 2013

In this article we introduce the Arcade Learning Environment (ALE): both a challenge problem and a platform and methodology for evaluating the development of general, domain-independent AI technology. ALE provides an interface to hundreds of Atari 2600 game environments, each one different, interesting, and designed to be a challenge for human players. ALE…

- Michael H. Bowling
- NIPS
- 2004

Learning in a multiagent system is a challenging problem due to two key factors. First, if other agents are simultaneously learning then the environment is no longer stationary, thus undermining convergence guarantees. Second, learning is often susceptible to deception, where the other agents may be able to exploit a learner’s particular dynamics. In the…

- Michael H. Bowling, Manuela M. Veloso
- IJCAI
- 2001

This paper investigates the problem of policy learning in multiagent environments using the stochastic game framework, which we briefly overview. We introduce two properties as desirable for a learning agent when in the presence of other learning agents, namely rationality and convergence. We examine existing reinforcement learning algorithms according to…

- Umar Syed, Michael H. Bowling, Robert E. Schapire
- ICML
- 2008

In apprenticeship learning, the goal is to learn a policy in a Markov decision process that is at least as good as a policy demonstrated by an expert. The difficulty arises in that the MDP's true reward function is assumed to be unknown. We show how to frame apprenticeship learning as a linear programming problem, and show that using an off-the-shelf LP…

- Darse Billings, Aaron Davidson, +5 authors Duane Szafron
- Computers and Games
- 2004

Building a high-performance poker-playing program is a challenging project. The best program to date, PsOpti, uses game theory to solve a simplified version of the game. Although the program plays reasonably well, it is oblivious to the opponent’s weaknesses and biases. Modeling the opponent to exploit predictability is critical to success at poker. This…

We consider the problem of efficiently learning optimal control policies and value functions over large state spaces in an online setting in which estimates must be available after each interaction with the world. This paper develops an explicitly model-based approach extending the Dyna architecture to linear function approximation. Dyna-style planning…
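The Dyna-style planning with linear function approximation described in that abstract can be illustrated with a toy sketch. This is not the paper's algorithm; the names (`LinearDyna`, `F`, `b`, `w`, `phi`) and the LMS model-learning rule are illustrative assumptions. A linear model of feature dynamics (`F`) and rewards (`b`) is fit from real transitions, and imagined transitions from that model drive extra TD updates to the value weights `w`:

```python
# Illustrative sketch only: a Dyna-style loop with linear function
# approximation. All names are hypothetical, not from the paper.

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

class LinearDyna:
    def __init__(self, n_features, alpha=0.1, gamma=0.9):
        self.n, self.alpha, self.gamma = n_features, alpha, gamma
        self.w = [0.0] * n_features                  # linear value weights
        self.b = [0.0] * n_features                  # expected-reward model
        self.F = [[0.0] * n_features                 # expected next-feature model
                  for _ in range(n_features)]

    def learn(self, phi, r, phi_next):
        # Direct TD(0) update from a real transition.
        delta = r + self.gamma * dot(self.w, phi_next) - dot(self.w, phi)
        self.w = [w + self.alpha * delta * p for w, p in zip(self.w, phi)]
        # LMS model updates so that F @ phi ~ phi_next and b . phi ~ r.
        for i in range(self.n):
            err = phi_next[i] - dot(self.F[i], phi)
            self.F[i] = [f + self.alpha * err * p
                         for f, p in zip(self.F[i], phi)]
        err_r = r - dot(self.b, phi)
        self.b = [b + self.alpha * err_r * p for b, p in zip(self.b, phi)]

    def plan(self, phi):
        # Dyna-style planning step: imagine the model's transition from
        # phi and apply the same TD(0) update to w.
        phi_next = [dot(row, phi) for row in self.F]
        r = dot(self.b, phi)
        delta = r + self.gamma * dot(self.w, phi_next) - dot(self.w, phi)
        self.w = [w + self.alpha * delta * p for w, p in zip(self.w, phi)]

# Tiny two-state cycle: A -> B (reward 0), B -> A (reward 1),
# with one-hot features for the two states.
agent = LinearDyna(2)
phi_a, phi_b = [1.0, 0.0], [0.0, 1.0]
for _ in range(300):
    agent.learn(phi_a, 0.0, phi_b)
    agent.learn(phi_b, 1.0, phi_a)
    agent.plan(phi_a)
    agent.plan(phi_b)
```

With one-hot features this reduces to tabular TD(0) plus model-based replay; the value weight for state B ends up above that for state A, since B is one step closer to the reward.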

We present an efficient "sparse sampling" technique for approximating Bayes optimal decision making in reinforcement learning, addressing the well known exploration versus exploitation tradeoff. Our approach combines sparse sampling with Bayesian exploration to achieve improved decision making while controlling computational cost. The idea is to grow a…

Sequential decision-making with multiple agents and imperfect information is commonly modeled as an extensive game. One efficient method for computing Nash equilibria in large, zero-sum, imperfect information games is counterfactual regret minimization (CFR). In the domain of poker, CFR has proven effective, particularly when using a domain-specific…
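The per-information-set core of CFR is regret matching: play each action in proportion to its positive cumulative regret, and the time-averaged strategy approaches equilibrium. A minimal sketch, shown in self-play on rock–paper–scissors rather than on a poker game tree; the payoff matrix, seed regrets, and function names are illustrative assumptions, not the paper's implementation:

```python
# Illustrative regret-matching self-play on rock-paper-scissors.
# Full CFR applies this update at every information set of an
# extensive game; this toy has a single decision point per player.

def regret_matching(regrets):
    # Play actions in proportion to positive cumulative regret;
    # fall back to uniform when no regret is positive.
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    n = len(pos)
    return [p / total for p in pos] if total > 0 else [1.0 / n] * n

def rps_self_play(iterations=20000):
    # u[a][b]: payoff to the mover playing a against b
    # (0 = rock, 1 = paper, 2 = scissors); the game is symmetric.
    u = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]
    # Asymmetric seed regrets (arbitrary) so play starts off-equilibrium.
    regrets = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
    strat_sum = [[0.0] * 3, [0.0] * 3]
    for _ in range(iterations):
        strats = [regret_matching(r) for r in regrets]
        for p in range(2):
            me, opp = strats[p], strats[1 - p]
            # Expected utility of each pure action vs the opponent mix.
            action_u = [sum(u[a][b] * opp[b] for b in range(3))
                        for a in range(3)]
            value = sum(me[a] * action_u[a] for a in range(3))
            for a in range(3):
                regrets[p][a] += action_u[a] - value
                strat_sum[p][a] += me[a]
    # The average strategy, not the final one, converges to equilibrium.
    return [[s / iterations for s in strat_sum[p]] for p in range(2)]

avg = rps_self_play()
```

The current strategies can cycle indefinitely; it is the average strategy that approaches the unique uniform equilibrium, which is why CFR implementations track a cumulative strategy sum alongside the regrets.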

In an adversarial multi-robot task, such as playing robot soccer, decisions for team and single robot behavior must be made quickly to take advantage of short-term fortuitous events when they occur. When no such opportunities exist, the team must execute sequences of coordinated action across team members that increase the likelihood of future…