Formal models and algorithms for decentralized decision making under uncertainty

@article{Seuken2007FormalMA,
  title={Formal models and algorithms for decentralized decision making under uncertainty},
  author={Sven Seuken and S. Zilberstein},
  journal={Autonomous Agents and Multi-Agent Systems},
  year={2007},
  volume={17},
  pages={190--250}
}
Over the last five years, the AI community has shown considerable interest in decentralized control of multiple decision makers or “agents” under uncertainty. This problem arises in many application domains, such as multi-robot coordination, manufacturing, information gathering, and load balancing. Such problems must be treated as decentralized decision problems because each agent may have different partial information about the other agents and about the state of the world. It has been shown that…
Agent interactions in decentralized environments
This thesis unifies a range of existing work, extends the analysis to establish novel complexity results for several popular restricted-interaction models, and identifies new analytical measures that apply to all Dec-POMDPs, whatever their structure.
Planning with Multiple Agents
This paper focuses on models of decentralized cooperative multi-agent systems. These models are used in settings where multiple agents work together, possibly on different tasks, to fulfill a…
Communication-Based Decomposition Mechanisms for Decentralized MDPs
This paper develops the notion of a communication-based mechanism that decomposes a decentralized MDP into multiple single-agent problems, and presents a polynomial-time algorithm for the case in which individual agents perform goal-oriented behaviors between communications.
Partially Synchronized DEC-MDPs in Dynamic Mechanism Design
This paper combines, for the first time, the methods of dynamic mechanism design with techniques from decentralized decision making under uncertainty, aligning incentives so that agents truthfully report local state when accessible and choose to follow the prescribed "emergency policies" of the center.
A Sufficient Statistic for Influence in Structured Multiagent Environments
Making decisions in complex environments is a key challenge in artificial intelligence (AI). Situations involving multiple decision makers are particularly complex, leading to computation…
Abstracting influences for efficient multiagent coordination under uncertainty
This dissertation introduces a transition-dependent decentralized POMDP model that efficiently decomposes into local decision models with shared state features, provably sufficient for optimal coordination, and develops influence-space search algorithms that, for problems with a low degree of influence, compute optimal solutions orders of magnitude faster than policy-space search.
Scalable, MDP-based planning for multiple, cooperating agents
Under a persistent search-and-track mission scenario with 3 agents, it is shown that while the resulting performance is nearly 20% lower than that of the centralized optimal solution, the problem size becomes linear in n, a very attractive feature when planning online for large multi-agent teams.
Increasing scalability in algorithms for centralized and decentralized partially observable Markov decision processes: efficient decision-making and coordination in uncertain environments
Several solution methods based on exploiting domain structure, memory-bounded representations, and sampling are developed to address some of the major bottlenecks for decision-making in real-world uncertain systems.
Decentralized MDPs with sparse interactions
This work contributes Dec-SIMDPs, a new decision-theoretic model for decentralized sparse-interaction multiagent systems, which explicitly distinguishes the situations in which the agents in the team must coordinate from those in which they can act independently.
Heuristic search for identical payoff Bayesian games
A branch-and-bound algorithm that optimally solves identical-payoff Bayesian games for coordinating teams of cooperative agents, showing a marked improvement over previous methods: speedups of up to 3 orders of magnitude for synthetic random games, and up to 10 orders of magnitude for games in a DEC-POMDP context.

References

Showing 1–10 of 54 references
Complexity analysis and optimal algorithms for decentralized decision making
Coordination of distributed entities is required for problems arising in many areas, including multi-robot systems, networking applications, e-commerce applications, and the control of autonomous…
Solving Transition Independent Decentralized Markov Decision Processes
This work presents a novel algorithm for solving a specific class of decentralized MDPs in which the agents' transitions are independent, and lays the foundation for further work in this area on both exact and approximate algorithms.
An Iterative Algorithm for Solving Constrained Decentralized Markov Decision Processes
This paper introduces the notion of Expected Opportunity Cost to better assess the influence of one agent's local decision on the others, and describes an iterative version of the algorithm that incrementally improves the agents' policies, leading to higher-quality solutions in some settings.
Optimizing information exchange in cooperative multi-agent systems
This paper develops a decision-theoretic solution to centralized control of a cooperative multi-agent system, treating both standard actions and communication as explicit choices that the decision maker must consider, and presents an analytical model to evaluate the trade-off between the cost of communication and the value of the information received.
Planning, Learning and Coordination in Multiagent Decision Processes
This work investigates the extent to which methods from single-agent planning and learning can be applied in multiagent settings, and proposes decomposing sequential decision processes so that coordination can be learned locally, at the level of individual states.
Agent interaction in distributed POMDPs and its implications on complexity
This work provides a condition that distinguishes between problems in NP and those strictly harder than NP, and shows that the class of problems satisfying this condition is NP-complete.
On the undecidability of probabilistic planning and related stochastic optimization problems
A complexity analysis of planning under uncertainty, showing that the "probabilistic classical planning" problem is formally undecidable, as is any problem in which the agent operates over an infinite or indefinite time horizon and has only probabilistic information about the system's state.
Decentralized Control of Cooperative Systems: Categorization and Complexity Analysis
This paper identifies classes of decentralized control problems whose complexity ranges between NEXP and P, and distinguishes three ways in which agents can exchange information: indirect communication, direct communication, and sharing state features that are not controlled by the agents.
Approximate solutions for partially observable stochastic games with common payoffs
This work proposes an algorithm that approximates POSGs as a series of smaller, related Bayesian games, using heuristics such as QMDP to estimate the future discounted value of actions; the result is policies that are locally optimal with respect to the selected heuristic.
A Framework for Sequential Planning in Multi-Agent Settings
This paper extends the framework of partially observable Markov decision processes (POMDPs) to multi-agent settings by incorporating the notion of agent models into the state space, and expresses the agents' autonomy by postulating that their models are not directly manipulable or observable by other agents.