Intelligent Knowledge Distribution: Constrained-Action POMDPs for Resource-Aware Multi-Agent Communication

@article{Fowler2020IntelligentKD,
  title={Intelligent Knowledge Distribution: Constrained-Action POMDPs for Resource-Aware Multi-Agent Communication},
  author={Michael C. Fowler and Thomas Charles Clancy and Ryan K. Williams},
  journal={IEEE Transactions on Cybernetics},
  year={2020},
  volume={PP}
}
This article addresses a fundamental question of multiagent knowledge distribution: what information should be sent to whom, and when, given the limited resources available to each agent? Communication requirements for multiagent systems can be rather high when an accurate picture of the environment and the state of other agents must be maintained. To reduce the impact of multiagent coordination on networked systems (for example, power and bandwidth), this article introduces two concepts for the…

Citations
Distributed Task Assignment in Multi-Robot Systems based on Information Utility
TLDR: This paper models the usefulness of transferred state information by its information utility, uses that utility to control the distribution of local state information and to update the global state, and compares the resulting distributed, utility-based online task assignment with well-known centralized and auction-based methods.
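
To make the information-utility idea above concrete, here is a minimal, hypothetical sketch of utility-gated state sharing; the deviation-based utility measure, the threshold, and all names are our assumptions, not the paper's.

import numpy as np

class UtilityGatedSharer:
    """Hypothetical sketch: broadcast local state only when its information
    utility (here, deviation from the last shared value) exceeds a threshold."""

    def __init__(self, threshold: float):
        self.threshold = threshold
        self.last_shared = None  # state vector teammates currently hold

    def information_utility(self, state: np.ndarray) -> float:
        # Utility of sending `state`: how much it differs from what peers have.
        if self.last_shared is None:
            return float("inf")  # nothing shared yet, so anything is informative
        return float(np.linalg.norm(state - self.last_shared))

    def maybe_share(self, state: np.ndarray) -> bool:
        # Transmit (and record) the state only if it is worth the bandwidth.
        if self.information_utility(state) > self.threshold:
            self.last_shared = state.copy()
            return True
        return False

sharer = UtilityGatedSharer(threshold=0.5)
for pos in [np.array([0.0, 0.0]), np.array([0.1, 0.0]), np.array([1.0, 0.2])]:
    print(pos, "->", "send" if sharer.maybe_share(pos) else "suppress")
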
Distributed and Communication-Aware Coalition Formation and Task Assignment in Multi-Robot Systems
TLDR: This paper demonstrates the sensitivity of complex missions to failure-prone multirobot system (MRS) communication and provides robust, effective, and communication-aware methods for coalition formation and task assignment.
Information Distribution in Multi-Robot Systems: Adapting to Varying Communication Conditions
TLDR: This work introduces the adaptive goodput constraint, which smoothly adapts to varying communication conditions and is suitable for long-term communication planning, where rapid changes are undesirable.
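
The paper's exact constraint is not given in this summary, but one plausible way to make a goodput budget adapt smoothly is an exponential moving average of measured link goodput; the smoothing rule and all parameters below are our assumptions, not the paper's method.

class AdaptiveGoodputBudget:
    """Hedged sketch: smooth a communication budget so it tracks link quality
    without the rapid changes that long-term planners cannot handle."""

    def __init__(self, alpha: float = 0.1, initial_goodput: float = 1e6):
        self.alpha = alpha              # small alpha -> slow, planner-friendly drift
        self.goodput = initial_goodput  # smoothed estimate in bits per second

    def update(self, measured_goodput: float) -> float:
        # Fold a new link measurement into the smoothed estimate.
        self.goodput = (1 - self.alpha) * self.goodput + self.alpha * measured_goodput
        return self.goodput

budget = AdaptiveGoodputBudget()
for sample in [9e5, 4e5, 4.5e5, 1e6]:  # fluctuating goodput measurements
    print(f"planning budget: {budget.update(sample):,.0f} bit/s")
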
Information Distribution in Multi-Robot Systems: Generic, Utility-Aware Optimization Middleware
TLDR: This work addresses the problem of what information is worth sending in a multirobot system under generic constraints, e.g., limited throughput or energy, and introduces techniques that reduce the problem's decision space to further improve performance.

References

Showing 1-10 of 35 references
Constrained-Action POMDPs for Multi-Agent Intelligent Knowledge Distribution
TLDR: This paper introduces action-based constraints on partially observable Markov decision processes (POMDPs), rewards based on the value of information as driven by Kullback-Leibler divergence, and probabilistic constraint satisfaction through discrete optimization and Markov chain Monte Carlo analysis.
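
As a minimal sketch of the Kullback-Leibler value-of-information signal mentioned above (the exact reward definition is not given here, so the formulation below is our assumption): communicating is treated as valuable roughly in proportion to how far a teammate's belief is from the sender's, measured by D_KL(p || q).

import numpy as np

def kl_divergence(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
    # D_KL(p || q) for discrete distributions over the same support.
    p = np.clip(p, eps, None); p /= p.sum()
    q = np.clip(q, eps, None); q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

my_belief       = np.array([0.7, 0.2, 0.1])  # sender's posterior over a discrete state
teammate_belief = np.array([0.3, 0.4, 0.3])  # belief the teammate holds without the message

# Higher divergence -> the message would change the teammate's belief more,
# so (in this sketch) it is worth more of the communication budget.
print(f"value of communicating: {kl_divergence(my_belief, teammate_belief):.3f} nats")
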
Decentralized control of Partially Observable Markov Decision Processes using belief space macro-actions
TLDR: The proposed Dec-POSMDP formulation allows asynchronous decision-making by the robots, which is crucial in multirobot domains, and the accompanying algorithm for solving the Dec-POSMDP is much more scalable than previous methods because it can incorporate closed-loop belief-space macro-actions in planning.
Decision-theoretic planning under uncertainty with information rewards for active cooperative perception
TLDR: This work presents the POMDP with Information Rewards (POMDP-IR) modeling framework, which rewards an agent for reaching a certain level of belief regarding a state feature, and demonstrates its use in active cooperative perception scenarios.
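
The belief-threshold reward is simple enough to sketch directly; the threshold and bonus values below are illustrative assumptions, not the paper's parameters.

import numpy as np

def information_reward(belief: np.ndarray, feature_value: int,
                       threshold: float = 0.9, bonus: float = 1.0) -> float:
    # POMDP-IR style: pay a bonus once the agent is at least `threshold`
    # confident that the tracked feature takes the value `feature_value`.
    return bonus if belief[feature_value] >= threshold else 0.0

belief = np.array([0.05, 0.92, 0.03])               # posterior over a tracked feature
print(information_reward(belief, feature_value=1))  # -> 1.0
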
Reward shaping for valuing communications during multi-agent coordination
TLDR: This research presents a novel model of rational communication that uses reward shaping to value communications and employs this valuation in decentralized POMDP policy generation; an empirical evaluation of the benefits is presented in two domains.
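
One standard way to realize such a valuation is potential-based reward shaping, r' = r + gamma * Phi(s') - Phi(s), which is known to preserve optimal policies; whether the paper uses exactly this form is not stated here, and the potential below (teammates' belief error) is our stand-in.

GAMMA = 0.95

def phi(belief_error: float) -> float:
    # Hypothetical potential: higher when teammates' belief error is lower.
    return -belief_error

def shaped_reward(base_reward: float, err_before: float, err_after: float) -> float:
    # r' = r + gamma * Phi(s') - Phi(s): rewards transitions (such as sending
    # a message) that make the team better informed.
    return base_reward + GAMMA * phi(err_after) - phi(err_before)

# Sending a message shrinks teammates' belief error from 0.8 to 0.2:
print(shaped_reward(base_reward=0.0, err_before=0.8, err_after=0.2))  # -> 0.61
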
Finite-Horizon Markov Decision Processes with State Constraints
TLDR: This paper introduces a new approach for finding nonstationary randomized policies for finite-horizon CMDPs and proposes an efficient algorithm based on linear programming and duality theory, which characterizes the convex set of feasible policies and ensures that the expected total reward is above a computable lower bound.
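
The occupancy-measure linear program underlying finite-horizon CMDP solvers of this kind is compact enough to show end to end. The tiny model below (two states, two actions, three steps, all numbers invented) maximizes expected reward subject to an expected-cost budget; a nonstationary randomized policy is recovered by normalizing the occupancy measures. This is the generic textbook LP, not necessarily the paper's exact formulation.

import numpy as np
from scipy.optimize import linprog

S, A, T = 2, 2, 3
P = np.zeros((S, A, S))                  # P[s, a, s']: transition probabilities
P[0, 0] = [0.9, 0.1]; P[0, 1] = [0.2, 0.8]
P[1, 0] = [0.5, 0.5]; P[1, 1] = [0.1, 0.9]
r = np.array([[0.0, 1.0], [0.0, 2.0]])   # reward: action 1 pays more...
d = np.array([[0.0, 1.0], [0.0, 1.0]])   # ...but consumes the budgeted resource
mu = np.array([1.0, 0.0])                # initial state distribution
budget = 1.5                             # allowed expected total cost

n = T * S * A                            # variables y[t, s, a]: occupancy measures
def idx(t, s, a):
    return (t * S + s) * A + a

c = np.zeros(n)                          # linprog minimizes, so negate rewards
for t in range(T):
    for s in range(S):
        for a in range(A):
            c[idx(t, s, a)] = -r[s, a]

A_eq, b_eq = [], []
for s in range(S):                       # step 0 matches the initial distribution
    row = np.zeros(n)
    for a in range(A):
        row[idx(0, s, a)] = 1.0
    A_eq.append(row); b_eq.append(mu[s])
for t in range(1, T):                    # flow conservation between steps
    for s2 in range(S):
        row = np.zeros(n)
        for a in range(A):
            row[idx(t, s2, a)] += 1.0
        for s in range(S):
            for a in range(A):
                row[idx(t - 1, s, a)] -= P[s, a, s2]
        A_eq.append(row); b_eq.append(0.0)

A_ub = [[d[s, a] for t in range(T) for s in range(S) for a in range(A)]]
b_ub = [budget]                          # expected total cost <= budget

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
y = res.x.reshape(T, S, A)
pi = y / y.sum(axis=2, keepdims=True).clip(min=1e-12)  # pi[t, s, a]: randomized policy
print("expected total reward:", -res.fun)
print("expected total cost:  ", float((d * y.sum(axis=0)).sum()))
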
An Iterative Algorithm for Solving Constrained Decentralized Markov Decision Processes
TLDR: This paper introduces the notion of Expected Opportunity Cost to better assess the influence of an agent's local decision on the other agents, and describes an iterative version of the algorithm that incrementally improves the agents' policies, leading to higher-quality solutions in some settings.
An online algorithm for constrained POMDPs
A. Undurti and J. How. 2010 IEEE International Conference on Robotics and Automation, 2010.
TLDR: This work proposes a new online algorithm that explicitly ensures constraint feasibility while remaining computationally tractable, and demonstrates that the online algorithm generates policies comparable to those of an offline constrained POMDP algorithm.
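
The feasibility-preserving action selection at the heart of such online methods can be sketched as follows; the rollout-style estimates, the zero-cost fallback action, and all names are our assumptions rather than the paper's algorithm.

from dataclasses import dataclass

@dataclass
class ActionEstimate:
    name: str
    expected_reward: float
    expected_cost: float  # e.g., bandwidth or fuel consumed in expectation

def choose_action(estimates, remaining_budget: float) -> ActionEstimate:
    # Discard actions whose expected cost would break the remaining budget,
    # then pick the best of what is left. A zero-cost fallback (e.g., "wait")
    # should always be present, so the feasible set is never empty and
    # constraint feasibility is maintained at every step.
    feasible = [e for e in estimates if e.expected_cost <= remaining_budget]
    return max(feasible, key=lambda e: e.expected_reward)

estimates = [
    ActionEstimate("transmit_full_map", expected_reward=5.0, expected_cost=3.0),
    ActionEstimate("transmit_summary",  expected_reward=3.0, expected_cost=1.0),
    ActionEstimate("wait",              expected_reward=0.0, expected_cost=0.0),
]
print(choose_action(estimates, remaining_budget=2.0).name)  # -> transmit_summary
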
Concurrent Markov decision processes for robot team learning
TLDR: Through a heterogeneous team foraging case study, it is shown that the CMDP-based learning mechanisms reduce both simulation time and total agent learning effort.
Bounded Policy Iteration for Decentralized POMDPs
TLDR: A bounded policy iteration algorithm for infinite-horizon decentralized POMDPs is presented; it uses a fixed amount of memory, and each iteration is guaranteed to produce a controller with value at least as high as the previous one for all possible initial state distributions.
Decentralized multi-robot cooperation with auctioned POMDPs
TLDR: This paper proposes to decentralize multiagent partially observable Markov decision processes (POMDPs) while maintaining cooperation between robots by auctioning POMDP policies, and applies a decentralized data fusion method to efficiently maintain a joint belief state among the robots.
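
A single round of such a policy auction reduces, in caricature, to awarding a policy to the robot that values it most; the bid values and names below are invented for illustration and elide the belief-fusion machinery the paper describes.

def run_auction(bids: dict) -> str:
    # Award the auctioned policy to the robot whose expected value (bid) is highest.
    return max(bids, key=bids.get)

# Each robot bids the expected value of executing the auctioned POMDP policy:
bids = {"robot_a": 4.2, "robot_b": 5.7, "robot_c": 3.9}
print("winner:", run_auction(bids))  # -> robot_b
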