Not all users are the same: Providing personalized explanations for sequential decision making problems

Utkarsh Soni, Sarath Sreedharan, Subbarao Kambhampati
2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
There is a growing interest in designing robots that can work alongside humans. Such robots will undoubtedly be expected to explain their behavior and decisions. While generating explanations is an actively researched topic, most works tend to focus on methods that generate explanations that are one-size-fits-all; that is, the specifics of the user's model are completely ignored. The handful of works that look at tailoring their explanation to the user's background rely on having specific models of…


Explaining Preference-driven Schedules: the EXPRES Framework
This paper introduces the EXPRES framework, which can explain why a given preference was unsatisfied in a given optimal schedule, and shows that employees preferred the explanations generated by EXPRES over human-generated ones when considering workforce scheduling scenarios.
A Mental-Model Centric Landscape of Human-AI Symbiosis
A significantly general version of the human-aware AI interaction scheme, called generalized human-aware interaction (GHAI), that talks about (mental) models of six types, allowing us to capture the various works done in the space of human-AI interaction and identify the fundamental behavioral patterns supported by these works.
Model-Free Model Reconciliation
A simple and easy-to-learn labeling model that can help an explainer decide what information could help achieve model reconciliation between the user and the agent.
Hierarchical Expertise Level Modeling for User Specific Contrastive Explanations
This work reduces the problem of generating an explanation to a search over the space of abstract models and shows that while the complete problem is NP-hard, a greedy algorithm can provide good approximations of the optimal solution.
Online Explanation Generation for Human-Robot Teaming
It is argued that explanations, especially those of a complex nature, should be made in an online fashion during the execution, which helps spread out the information to be explained and thus reduce the mental workload of humans in highly cognitive demanding tasks.
A Decision-Theoretic Model of Assistance
The problem of intelligent assistance is formulated in a decision-theoretic framework, and it is shown that in all three domains the framework results in an assistant that substantially reduces user effort with only modest computation.
Plan Explanations as Model Reconciliation: Moving Beyond Explanation as Soliloquy
It is shown how explanation can be seen as a "model reconciliation problem" (MRP), where the AI system in effect suggests changes to the human's model, so as to make its plan be optimal with respect to that changed human model.
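The model reconciliation idea above can be illustrated with a minimal sketch: the robot reveals a smallest set of discrepancies between its model and the human's model so that its plan becomes no costlier, in the updated human model, than the human's preferred alternative. Everything here (the action-cost maps, the plans, the function names) is a hypothetical toy, not the paper's actual formulation, which operates over full planning models.

```python
from itertools import combinations

# Toy sketch of model reconciliation (all names and values hypothetical):
# a "model" is just a per-action cost map, and an explanation is a
# minimal set of cost corrections to the human's model under which
# the robot's plan is no costlier than the human's alternative plan.

robot_model = {"lift": 1, "push": 5, "walk": 2}
human_model = {"lift": 4, "push": 1, "walk": 2}  # human misjudges lift/push

robot_plan = ["lift", "walk"]   # cost 3 in the robot's model
human_plan = ["push", "walk"]   # looks cheaper in the human's model

def plan_cost(plan, model):
    return sum(model[a] for a in plan)

def minimal_explanation(robot_model, human_model, robot_plan, human_plan):
    # Action costs on which the two models disagree.
    diffs = [a for a in robot_model if robot_model[a] != human_model[a]]
    # Try smaller explanations first, so the first hit is minimal.
    for k in range(len(diffs) + 1):
        for subset in combinations(diffs, k):
            updated = dict(human_model)
            for a in subset:
                updated[a] = robot_model[a]
            if plan_cost(robot_plan, updated) <= plan_cost(human_plan, updated):
                return subset
    return None

print(minimal_explanation(robot_model, human_model, robot_plan, human_plan))
# → ('lift',): correcting the cost of "lift" alone suffices
```

Revealing only the "lift" discrepancy already reconciles the models enough for the robot's plan to be acceptable, which is the sense in which such explanations can be minimal.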
Efficient Model Learning from Joint-Action Demonstrations for Human-Robot Collaborative Tasks
Results indicate that learning human user models from joint-action demonstrations and encoding them in a MOMDP formalism can support effective teaming in human-robot collaborative tasks.
Handling Model Uncertainty and Multiplicity in Explanations via Model Reconciliation
This paper shows how the explanation process evolves in the presence of such model uncertainty or incompleteness by generating conformant explanations that are applicable to a set of possible models and demonstrates the trade-offs in the different forms of explanations.
Making Hybrid Plans More Clear to Human Users - A Formal Approach for Generating Sound Explanations
This paper presents a formal approach to plan explanation, in which information about plans is represented as first-order logic formulae and explanations are constructed as proofs in the resulting axiomatic system.