Joint Mind Modeling for Explanation Generation in Complex Human-Robot Collaborative Tasks

  • Xiaofeng Gao, R. Gong, Yizhou Zhao, Shu Wang, Tianmin Shu, Song-Chun Zhu
  • Published 24 July 2020
  • Computer Science
  • 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)
Human collaborators can effectively communicate with their partners to finish a common task by inferring each other's mental states (e.g., goals, beliefs, and desires). Such mind-aware communication minimizes the discrepancy among collaborators' mental states, and is crucial to success in human ad-hoc teaming. We believe that robots collaborating with human users should demonstrate similar pedagogic behavior. Thus, in this paper, we propose a novel explainable AI (XAI) framework for…


A Mental-Model Centric Landscape of Human-AI Symbiosis
A significantly generalized version of the human-aware AI interaction scheme, called generalized human-aware interaction (GHAI), that talks about (mental) models of six types, which allows us to capture the various works done in the space of human-AI interaction and identify the fundamental behavioral patterns supported by these works.
Explainable Goal-Driven Agents and Robots - A Comprehensive Review and New Framework
Works on explainable goal-driven intelligent agents and robots are reviewed, focusing on techniques for explaining and communicating agents' perceptual functions and cognitive reasoning with humans in the loop.
Helping People Through Space and Time: Assistance as a Perspective on Human-Robot Interaction
As assistive robotics has expanded to many task domains, comparing assistive strategies among the varieties of research becomes increasingly difficult. To begin to unify the disparate domains into a…
Two Many Cooks: Understanding Dynamic Human-Agent Team Communication and Perception Using Overcooked 2
It is argued that the increased cognitive workload associated with increased task load will be negatively associated with team performance and communication quality, and that positive team perceptions will have a positive impact on the communication quality between a user and teammate in both the human and AI teammate conditions.
Explainable autonomous robots: a survey and perspective
The definition of “explainability” in the context of autonomous robots (i.e., explainable autonomous robots) is discussed by exploring the question “what is an explanation?” and a research survey is conducted based on this definition.
Building Mental Models through Preview of Autopilot Behaviors
This work introduces a framework, called AutoPreview, to enable humans to preview autopilot behaviors prior to direct interaction with the vehicle, to help users understand autopilot behavior and develop appropriate mental models.
Single-Turn Debate Does Not Help Humans Answer Hard Reading-Comprehension Questions
It is found that explanations in this set-up improve human accuracy, but a baseline condition shows that providing human-selected text snippets alone also improves accuracy.
Representation Learning of World Models and Estimation of World Model of Others Using Graph2vec
Applications of autonomous robots are advancing. Current autonomous robots are tools that assist humans in completing tasks by faithfully executing the commands they are given. However, for autonomous robots that make more advanced decisions, faithfully executing commands is not always the best policy. For such autonomous robots to earn users' trust and play an active role in society, they must explain the reasons behind their action decisions…
Explainable AI for B5G/6G: Technical Aspects, Use Cases, and Research Challenges
This survey paper highlights the need for XAI in every aspect of the upcoming 6G age, including 6G technologies and 6G use cases, summarises the lessons learned from recent attempts, and outlines important research challenges in applying XAI to building 6G systems.


Behavior Explanation as Intention Signaling in Human-Robot Teaming
  • Ze Gong, Yu Zhang
  • Computer Science
    2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)
  • 2018
This work proposes an approach to explaining robot behavior as intention signaling using natural language sentences and shows that intention signaling can help achieve better teaming by reducing criticism of robot behavior that may appear undesirable but is otherwise required, e.g., due to information asymmetry.
Goal Inference Improves Objective and Perceived Performance in Human-Robot Collaboration
A behavioral experiment indicates that the combination of goal inference and dynamic task planning significantly improves both objective and perceived performance of the human-robot team.
Planning with Verbal Communication for Human-Robot Collaboration
A formalism is proposed that enables a robot to decide optimally between taking a physical action toward task completion and issuing an utterance to the human teammate, which captures the information that the robot uses in its decision making.
Trust calibration within a human-robot team: Comparing automatically generated explanations
This work leverages existing agent algorithms to provide a domain-independent mechanism for robots to automatically generate explanations, and demonstrates that the added explanation capability led to improvement in transparency, trust, and team performance.
Plan explicability and predictability for robot task planning
The notions of plan explicability and predictability are introduced and can be used by agents to proactively choose or directly synthesize plans that are more explicable and predictable to humans.
Anticipating human actions for collaboration in the presence of task and sensor uncertainty
The inference model can robustly anticipate the actions of the human even in the presence of unreliable or noisy detections because of its integration of all its sensing information along with knowledge of task structure.
An implemented theory of mind to improve human-robot shared plans execution
  • S. Devin, R. Alami
  • Biology
    2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI)
  • 2016
A framework is developed which allows a robot to estimate other agents' mental states not only about the environment but also about the state of goals, plans, and actions, and to take them into account when executing human-robot shared plans.
Explanation-Based Reward Coaching to Improve Human Performance via Reinforcement Learning
A novel mechanism for enabling an autonomous system to detect model disparity between itself and a human collaborator, infer the source of the disagreement within the model, evaluate potential consequences of this error, and provide human-interpretable feedback to encourage model correction is proposed.
Autonomous Generation of Robust and Focused Explanations for Robot Policies
This work proposes a method for generating robust and focused explanations that express why a robot chose a particular action and examines the policy based on the state space in which an action was chosen and describes it in natural language.
Learning social affordance grammar from videos: Transferring human interactions to human-robot interactions
A general framework for learning social affordance grammar as a spatiotemporal AND-OR graph (ST-AOG) from RGB-D videos of human interactions is presented, and the grammar is transferred to humanoids to enable a real-time motion inference for human-robot interaction (HRI).