Toward Policy Explanations for Multi-Agent Reinforcement Learning

@inproceedings{boggess2022toward,
  title={Toward Policy Explanations for Multi-Agent Reinforcement Learning},
  author={Kayla Boggess and Sarit Kraus and Lu Feng},
  booktitle={Proceedings of the 31st International Joint Conference on Artificial Intelligence (IJCAI)},
  year={2022}
}
Advances in multi-agent reinforcement learning (MARL) enable sequential decision making for a range of exciting multi-agent applications such as cooperative AI and autonomous driving. Explaining agent decisions is crucial for improving system transparency, increasing user satisfaction, and facilitating human-agent collaboration. However, existing works on explainable reinforcement learning mostly focus on the single-agent setting and are not suitable for addressing challenges posed by multi… 

Metrics for Explainable AI: Challenges and Prospects
This paper discusses specific methods for evaluating the goodness of explanations, whether users are satisfied by explanations, how well users understand the AI systems, and how the human-XAI work system performs.
Explainable AI and Reinforcement Learning—A Systematic Review of Current Approaches and Trends
This review explores current approaches to and limitations of XAI in the area of Reinforcement Learning (RL), highlighting visualization, query-based explanations, policy summarization, human-in-the-loop collaboration, and verification as trends in this area.
Open Problems in Cooperative AI
This research integrates ongoing work on multi-agent systems, game theory and social choice, human-machine interaction and alignment, natural-language processing, and the construction of social tools and platforms into the field of Cooperative AI, premised on the productivity of conversations that span these and other areas.
Benchmarking Multi-Agent Deep Reinforcement Learning Algorithms on a Building Energy Demand Coordination Task
This work contributes an empirical comparison of three classes of MARL algorithms (independent learners, centralized critics with decentralized execution, and value factorization learners), evaluating them on an energy coordination task in CityLearn, an OpenAI Gym environment.
Benchmarking Multi-Agent Deep Reinforcement Learning Algorithms in Cooperative Tasks
This work provides a systematic evaluation and comparison of three different classes of MARL algorithms across a diverse range of cooperative multi-agent learning tasks, and open-sources EPyMARL, which extends the PyMARL codebase to include additional algorithms and allow flexible configuration of algorithm implementation details.
Shared Experience Actor-Critic for Multi-Agent Reinforcement Learning
This work proposes a general method for efficient exploration that shares experience among agents within an actor-critic framework, and finds that it consistently outperforms two baselines and two state-of-the-art algorithms, learning in fewer steps and converging to higher returns.
Explainable Reinforcement Learning: A Survey
It is found that a) the majority of XRL methods function by mimicking and simplifying a complex model instead of designing an inherently simple one, and b) XRL (and XAI) methods often neglect to consider the human side of the equation, not taking into account research from related fields like psychology or philosophy.
The Emerging Landscape of Explainable Automated Planning & Decision Making
A comprehensive outline of the different threads of work in Explainable AI Planning (XAIP) is provided, with the aim of guiding new researchers in automated planning toward the role of explanations in the effective design of human-in-the-loop systems.
MARLeME: A Multi-Agent Reinforcement Learning Model Extraction Library
This work introduces MARLeME: a MARL model extraction library, designed to improve explainability of MARL systems by approximating them with symbolic models.