Solving Transition Independent Decentralized Markov Decision Processes

@article{becker2004solving,
  title={Solving Transition Independent Decentralized Markov Decision Processes},
  author={Raphen Becker and Shlomo Zilberstein and Victor R. Lesser and Claudia V. Goldman},
  journal={J. Artif. Intell. Res.},
  year={2004}
}
Formal treatment of collaborative multi-agent systems has been lagging behind the rapid progress in sequential decision making by individual agents. Recent work in the area of decentralized Markov Decision Processes (MDPs) has contributed to closing this gap, but the computational complexity of these models remains a serious obstacle. To overcome this complexity barrier, we identify a specific class of decentralized MDPs in which the agents' transitions are independent. The class consists of… 
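The key structural assumption can be illustrated with a small sketch: in a transition-independent Dec-MDP, the joint transition probability factors into per-agent terms, while the joint reward may still couple the agents. This is a minimal, hypothetical illustration (the state space, transition function, and reward values below are invented for the example, not taken from the paper):

```python
def local_transition(s, a, s_next):
    """Toy per-agent transition: deterministic move along a 3-state chain.

    Illustrative only; any stochastic local model works the same way.
    """
    return 1.0 if s_next == min(s + a, 2) else 0.0

def joint_transition(state, action, next_state):
    """Transition independence: P(s'|s, a) = P1(s1'|s1, a1) * P2(s2'|s2, a2).

    Each agent's next state depends only on its own state and action.
    """
    (s1, s2), (a1, a2), (n1, n2) = state, action, next_state
    return local_transition(s1, a1, n1) * local_transition(s2, a2, n2)

def joint_reward(state):
    """The reward, by contrast, may depend on both agents: here a bonus
    is paid only when both reach the goal state 2, which is what couples
    the agents' otherwise independent decision problems."""
    s1, s2 = state
    return (s1 == 2) + (s2 == 2) + (5 if s1 == 2 and s2 == 2 else 0)
```

The factored transition is what makes this subclass tractable relative to general Dec-MDPs: each agent's dynamics can be reasoned about locally, and only the reward coupling requires joint optimization.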

A polynomial algorithm for decentralized Markov decision processes with temporal constraints

This paper presents a class of Decentralized MDPs, OC-DEC-MDP, that can handle temporal and precedence constraints, and introduces an opportunity cost to allow the agents to coordinate.

Decentralized planning under uncertainty for teams of communicating agents

This work explores iterative methods for approximately solving decentralized Markov decision processes and models communication as an integral part of the agent's reasoning, in which the meaning of a message is directly encoded in the policy of the communicating agent.

Mixed-integer linear programming for transition-independent decentralized MDPs

A class of problems studied in this paper is a subclass of Dec-MDP in which two or more cooperative agents are tied together through the rewards of completing joint tasks but the actions taken by one agent do not impact other agents' transitions.

Solving efficiently Decentralized MDPs with temporal and resource constraints

A new model is presented that allows large multi-agent decision problems with temporal and precedence constraints to be represented, and polynomial algorithms are proposed to efficiently solve problems formalized as OC-DEC-MDPs.

Relaxation for Constrained Decentralized Markov Decision Processes: (Extended Abstract)

  • J. Xu
  • Computer Science
  • 2016
This paper formulates the problem as an infinite time horizon Decentralized Markov Decision Process (DEC-MDP) with resource constraints and develops efficient approximate algorithms that allow decentralized computation of the agent policy based on Lagrangian relaxation.

Planning in stochastic domains for multiple agents with individual continuous resource state-spaces

An approximation method is proposed that solves a class of Decentralized hybrid Markov Decision Processes (DEC-HMDPs). These DEC-HMDPs have both discrete and continuous state variables and represent…

Producing efficient error-bounded solutions for transition independent decentralized MDPs

This paper presents the first approach for solving transition independent decentralized Markov decision processes (Dec-MDPs) that inherits error bounds and fast convergence rates, and provides the foundation for the first algorithm for solving infinite-horizon transition-independent decentralized MDPs.

Decentralized control of partially observable Markov decision processes

This paper surveys recent work on decentralized control of MDPs in which each agent's control depends on a partial view of the world, represented as a decentralized partially observable MDP (Dec-POMDP), and considers a number of subclasses with different assumptions about uncertainty and agent independence.

Communication-Based Decomposition Mechanisms for Decentralized MDPs

This paper develops the notion of communication-based mechanism that allows us to decompose a decentralized MDP into multiple single-agent problems, and presents a polynomial-time algorithm for the case in which individual agents perform goal-oriented behaviors between communications.

Formal models and algorithms for decentralized decision making under uncertainty

Five different formal frameworks, three different optimal algorithms, as well as a series of approximation techniques are analyzed to provide interesting insights into the structure of decentralized problems, the expressiveness of the various models, and the relative advantages and limitations of the different solution techniques.



Transition-independent decentralized Markov decision processes

A novel algorithm is presented that is the first effective technique to solve optimally a class of transition-independent decentralized MDPs and lays the foundation for further work in this area on both exact and approximate solutions.

Decentralized Markov decision processes with event-driven interactions

This paper presents a class of DEC-MDPs that restricts the interactions between the agents to a structured, event-driven dependency, which can model locking a shared resource or temporal enabling constraints, both of which arise frequently in practice.

Context-specific multiagent coordination and planning with factored MDPs

An algorithm for coordinated decision making in cooperative multiagent settings is presented, in which the agents' value function is represented as a sum of context-specific value rules and computed using an efficient linear programming algorithm.

Sequential Optimality and Coordination in Multiagent Systems

This work proposes an extension of value iteration in which the system's state space is augmented with the state of the coordination mechanism adopted, allowing agents to reason about the short and long term prospects for coordination, the long term consequences of (mis)coordination, and make decisions to engage or avoid coordination problems based on expected value.

A multiagent reinforcement learning algorithm by dynamically merging markov decision processes

A new learning algorithm called MAPLE (MultiAgent Policy LEarning) is presented that uses Q-learning and dynamic merging to efficiently construct global solutions to the overall multiagent problem from solutions to multiple Markov decision processes.

Optimizing information exchange in cooperative multi-agent systems

This paper develops a decision-theoretic solution to centralized control of a cooperative multi-agent system, treating both standard actions and communication as explicit choices that the decision maker must consider, and presents an analytical model to evaluate the trade-off between the cost of communication and the value of the information received.

Communication decisions in multi-agent cooperation: model and experiments

A multi-agent extension to Markov decision processes in which communication can be modeled as an explicit action that incurs a cost is presented, which provides a foundation for a quantified study of agent coordination policies and provides both motivation and insight to the design of heuristic approaches.

Decentralized control of finite state Markov processes

  • Kai Hsu, S. Marcus
  • Mathematics
    1980 19th IEEE Conference on Decision and Control including the Symposium on Adaptive Processes
  • 1980
We are concerned with the control of a particular class of dynamic systems: finite state Markov chains. The information pattern available is the so-called one step delay sharing information…

The Complexity of Decentralized Control of Markov Decision Processes

This work considers decentralized control of Markov decision processes and gives complexity bounds on the worst-case running time for algorithms that find optimal solutions and describes generalizations that allow for decentralized control.

The Communicative Multiagent Team Decision Problem: Analyzing Teamwork Theories and Models

A unified framework for multiagent teamwork, the COMmunicative Multiagent Team Decision Problem (COM-MTDP), which combines and extends existing multiagent theories, and provides a basis for the development of novel team coordination algorithms.