Interaction-driven Markov games for decentralized multiagent planning under uncertainty

@inproceedings{Spaan2008InteractiondrivenMG,
  title={Interaction-driven Markov games for decentralized multiagent planning under uncertainty},
  author={Matthijs T. J. Spaan and Francisco S. Melo},
  booktitle={AAMAS},
  year={2008}
}
In this paper we propose interaction-driven Markov games (IDMGs), a new model for multiagent decision making under uncertainty. IDMGs aim at describing multiagent decision problems in which interaction among agents is a local phenomenon. To this purpose, we explicitly distinguish between situations in which agents should interact and situations in which they can afford to act independently. The agents are coupled through the joint rewards and joint transitions in the states in which they…
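The coupling structure sketched in the abstract can be made concrete with a minimal data-structure sketch. The Python below is illustrative only, with hypothetical names (AgentMDP, IDMG, interaction_states); it is not the paper's formal definition, but it shows the key idea: each agent carries its own transition and reward model, while a designated set of interaction states switches the system to joint dynamics and joint rewards.

```python
from dataclasses import dataclass

# Illustrative sketch only; all names are hypothetical, not from the paper.

@dataclass
class AgentMDP:
    """Individual model an agent uses when acting independently."""
    states: set
    actions: set
    transition: dict  # (s, a) -> {s_next: probability}
    reward: dict      # (s, a) -> float

@dataclass
class IDMG:
    """Interaction-driven Markov game: agents are coupled only in a
    designated set of states, via joint transitions and joint rewards."""
    agents: list              # one AgentMDP per agent
    interaction_states: set   # joint states in which agents must coordinate
    joint_transition: dict    # (joint_s, joint_a) -> {joint_s_next: probability}
    joint_reward: dict        # (joint_s, joint_a) -> float

    def requires_interaction(self, joint_state: tuple) -> bool:
        # Outside interaction_states each agent can plan with its own
        # AgentMDP; inside them, the joint model governs the dynamics.
        return joint_state in self.interaction_states
```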
Exploring Information Interactions in Decentralized Multiagent Coordination under Uncertainty
TLDR
This work focuses on problems in which the interactions between agents are spatio-temporally discrete, and proposes a practical decision model for decentralized multiagent coordination, DLI-MDPs, that supports decision making both with no information interaction and with restricted information interaction.
A POMDP-based Model for Optimizing Communication in Multiagent Systems
In this paper we address the problem of planning in multiagent systems in which the interaction between the different agents is sparse and mediated by communication. We include the process of…
Collective Decision under Partial Observability - A Dynamic Local Interaction Model
TLDR
DyLIM deals with local interactions amongst the agents and builds the collective behavior from individual ones; it is shown how this approach derives near-optimal policies for problems involving a large number of agents.
Approximate planning for decentralized MDPs with sparse interactions
TLDR
This work situates this class of problems within different multiagent models, such as MMDPs and transition-independent Dec-MDPs, and contributes a new algorithm for efficient planning in this class of problems.
Decentralized MDPs with sparse interactions
TLDR
A new decision-theoretic model for decentralized sparse-interaction multiagent systems, Dec-SIMDPs, is contributed; it explicitly distinguishes the situations in which the agents in the team must coordinate from those in which they can act independently.
Heuristic Planning for Decentralized MDPs with Sparse Interactions
TLDR
This work explores how local interactions can simplify the process of decision-making in multiagent systems, particularly in multirobot problems, and contributes a new general approach that leverages the particular structure of Dec-SIMDPs to efficiently plan in this class of problems.
Local Multiagent Coordination in Decentralized MDPs with Sparse Interactions
Creating coordinated multiagent policies in environments with uncertainty is a challenging problem, which can be greatly simplified if the coordination needs are known to be limited to specific parts…
Collective Multiagent Sequential Decision Making Under Uncertainty
TLDR
This work develops a collective decentralized MDP model where policies can be computed based on counts of agents in different states, and develops a sampling-based framework that can compute open- and closed-loop policies (a small count-based sketch follows this list).
Agent interactions in decentralized environments
TLDR
This thesis unifies a range of existing work, extends the analysis to establish novel complexity results for some popular restricted-interaction models, and identifies new analytical measures that apply to all Dec-POMDPs, whatever their structure.
Exploiting Sparse Interactions for Optimizing Communication in Dec-MDPs
TLDR
The experimental results show that the approach successfully exploits sparse interactions: it can effectively identify the situations in which it is beneficial to communicate, and trade off the cost of communication against overall task performance.
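The count-based representation in "Collective Multiagent Sequential Decision Making Under Uncertainty" above is easy to illustrate. The sketch below is a hypothetical illustration, not the paper's algorithm, assuming homogeneous agents: a policy then need not observe the full joint state, since the vector of per-state agent counts is a sufficient statistic.

```python
from collections import Counter

def state_counts(agent_states):
    """Collapse a joint state (one local state per agent) into
    permutation-invariant count form, e.g. (("s1", 2), ("s2", 1))."""
    return tuple(sorted(Counter(agent_states).items()))

# A count-based policy maps count vectors to a local action rule
# (hypothetical states and actions, for illustration only).
policy = {
    state_counts(["s1", "s1", "s2"]): "move_right",
}

# Two joint states with the same counts receive the same decision.
print(policy[state_counts(["s1", "s2", "s1"])])  # -> move_right
```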

References

Showing 1–10 of 21 references
Solving Transition Independent Decentralized Markov Decision Processes
TLDR
This work presents a novel algorithm for solving a specific class of decentralized MDPs in which the agents' transitions are independent, and lays the foundation for further work in this area on both exact and approximate algorithms.
Planning, Learning and Coordination in Multiagent Decision Processes
TLDR
The extent to which methods from single-agent planning and learning can be applied in multiagent settings is investigated, as is the decomposition of sequential decision processes so that coordination can be learned locally, at the level of individual states.
Approximate solutions for partially observable stochastic games with common payoffs
TLDR
This work proposes an algorithm that approximates POSGs as a series of smaller, related Bayesian games, using heuristics such as QMDP to provide the future discounted value of actions, and results in policies that are locally optimal with respect to the selected heuristic.
Exploiting locality of interaction in factored Dec-POMDPs
TLDR
Together, the results allow exploiting both problem structure and heuristics in a single framework based on collaborative graphical Bayesian games (CGBGs); a preliminary experiment shows a speedup of two orders of magnitude.
Exploiting factored representations for decentralized execution in multiagent teams
TLDR
This paper explores how factored representations of state can be used to generate factored policies that can, with minimal communication, be executed in a distributed fashion by a multiagent team.
Decentralized Communication Strategies for Coordinated Multi-Agent Policies
TLDR
This work presents a novel approach for using centralized “single-agent” policies in decentralized multi-agent systems by maintaining and reasoning over the possible joint beliefs of the team, reducing communication while improving the performance of distributed execution.
A Framework for Sequential Planning in Multi-Agent Settings
TLDR
This paper extends the framework of partially observable Markov decision processes (POMDPs) to multi-agent settings by incorporating the notion of agent models into the state space and expresses the agents' autonomy by postulating that their models are not directly manipulable or observable by other agents.
Utile Coordination: Learning Interdependencies Among Cooperative Agents
TLDR
Utile Coordination, an algorithm that allows a multiagent system to learn where and how to coordinate, is described; it applies within the framework of coordination graphs, in which value rules represent the coordination dependencies between the agents for a specific context.
Networked Distributed POMDPs: A Synergy of Distributed Constraint Optimization and POMDPs
TLDR
Exploiting network structure enables us to present two novel algorithms for ND-POMDPs: a distributed policy generation algorithm that performs local search, and a systematic policy search that is guaranteed to reach the global optimum.
The Communicative Multiagent Team Decision Problem: Analyzing Teamwork Theories and Models
TLDR
A unified framework for multiagent teamwork, the COMmunicative Multiagent Team Decision Problem (COM-MTDP), is presented; it combines and extends existing multiagent theories and provides a basis for the development of novel team coordination algorithms.