
While many multiagent algorithms are designed for homogeneous systems (i.e. all agents are identical), there are important applications which require an agent to coordinate its actions without knowing a priori how the other agents behave. One method to make this problem feasible is to assume that the other agents draw their latent policy (or type) from a…

This paper is concerned with evaluating different multiagent learning (MAL) algorithms in problems where individual agents may be heterogeneous, in the sense of utilizing different learning strategies, without the opportunity for prior agreements or information regarding coordination. Such a situation arises in ad hoc team problems, a model of many practical…

The ad hoc coordination problem is to design an autonomous agent which is able to achieve optimal flexibility and efficiency in a multi-agent system with no mechanisms for prior coordination. We conceptualise this problem formally using a game-theoretic model, called the stochastic Bayesian game, in which the behaviour of a player is determined by its…

The ad hoc coordination problem is to design an ad hoc agent which is able to achieve optimal flexibility and efficiency in a multiagent system that admits no prior coordination between the ad hoc agent and the other agents. We conceptualise this problem formally as a stochastic Bayesian game in which the behaviour of a player is determined by its type.…

The key for effective interaction in many multiagent applications is to reason explicitly about the behaviour of other agents, in the form of a hypothesised behaviour. While there exist several methods for the construction of a behavioural hypothesis, there is currently no universal theory which would allow an agent to contemplate the correctness of a…

There is a long history in game theory on the topic of Bayesian or "rational" learning, in which each player maintains beliefs over a set of alternative behaviours, or types, for the other players. This idea has gained increasing interest in the artificial intelligence (AI) community, where it is used as a method to control a single agent in a system…

Many multiagent applications require an agent to learn quickly how to interact with previously unknown other agents. To address this problem, researchers have studied learning algorithms which compute posterior beliefs over a hypothesised set of policies, based on the observed actions of the other agents. The posterior belief is complemented by the prior…
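The belief update described in this abstract is a standard Bayesian posterior over a hypothesised set of policy types. A minimal sketch, assuming illustrative type names and likelihood values (none of which come from the paper itself):

```python
# Hedged sketch: Bayesian posterior over hypothesised policy types for another
# agent, updated from one observed action. Type names and probabilities below
# are illustrative assumptions, not values from the cited work.

def update_posterior(prior, likelihoods):
    """Return P(type | observed action) via Bayes' rule.

    prior:       dict mapping type -> P(type)
    likelihoods: dict mapping type -> P(observed action | type)
    """
    unnormalised = {t: prior[t] * likelihoods[t] for t in prior}
    z = sum(unnormalised.values())  # normalising constant P(observed action)
    return {t: p / z for t, p in unnormalised.items()}

# Two hypothesised types for the other agent, with a uniform prior.
prior = {"cooperative": 0.5, "defecting": 0.5}
# Probability each hypothesised type assigns to the action we just observed.
likelihoods = {"cooperative": 0.9, "defecting": 0.2}
posterior = update_posterior(prior, likelihoods)
```

In repeated interaction, the posterior from one step becomes the prior for the next, so evidence accumulates across the observed action history.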

This thesis is concerned with the ad hoc coordination problem, in which the goal is to design an autonomous agent which is able to achieve optimal flexibility and efficiency in a multiagent system with no mechanisms for prior behavioural coordination. The thesis is primarily motivated by human-machine interaction problems, which can often be formulated in…

Past research has studied two approaches to utilise pre-defined policy sets in repeated interactions: as experts, to dictate our own actions, and as types, to characterise the behaviour of other agents. In this work, we bring these complementary views together in the form of a novel meta-algorithm, called Expert-HBA (E-HBA), which can be applied to any…

This document is an appendix to [1]. It contains a listing of all parameter settings used in the study as well as plots for each experiment.