How to Support Users in Understanding Intelligent Systems? Structuring the Discussion
Malin Eiband, Daniel Buschek, Heinrich Hussmann
26th International Conference on Intelligent User Interfaces (IUI)
The opaque nature of many intelligent systems violates established usability principles and thus presents a challenge for human-computer interaction. Research in the field therefore highlights the need for transparency, scrutability, intelligibility, interpretability and explainability, among others. While all of these terms carry a vision of supporting users in understanding intelligent systems, the underlying notions and assumptions about users and their interaction with the system often… 


A Cognitive Framework for Delegation Between Error-Prone AI and Human Agents
The use of cognitively inspired models of behavior is investigated; the predicted behavior is used to delegate control between humans and AI agents through an intermediary entity, overcoming potential shortcomings of either humans or agents in the pursuit of a goal.
Designing for Continuous Interaction with Artificial Intelligence Systems
This SIG supports the exchange of cutting-edge research contributing to a better understanding and improved methods and tools to design continuous Human-AI interaction.
GANSlider: How Users Control Generative Models for Images using Multiple Sliders with and without Feedforward Information
Fundamental UI design factors and resulting interaction behavior in this context are quantified, revealing opportunities for improvement in the UI design for interactive applications of generative models.
Modeling Human Behavior Part I - Learning and Belief Approaches
The main objective of this paper is to provide a succinct yet systematic review of the most important approaches in two areas dealing with quantitative models of human behavior, and to directly model mechanisms of human reasoning, such as beliefs and bias, without necessarily learning via trial and error.
Modeling Human Behavior Part II - Cognitive approaches and Uncertainty
As we discussed in Part I of this topic [30], there is a clear desire to model and comprehend human behavior. Given the popular presupposition of human reasoning as the standard for learning and…
Understanding the Necessary Conditions of Multi-Source Trust Transfer in Artificial Intelligence
Trust transfer is a promising perspective on prevalent discussions about trust in AI-capable technologies. However, the convergence of AI with other technologies challenges existing theoretical…
Achieving Trustworthy Artificial Intelligence: Multi-Source Trust Transfer in Artificial Intelligence-capable Technology
A model focusing on multi-source trust transfer is developed within the theoretical framework of trust duality, providing a novel theoretical perspective on establishing trustworthy AI by validating the importance of the duality of trust.
Design Considerations for Usable Authentication in Smart Homes
The study identifies which devices users would choose and why, potential challenges regarding privacy and security, and potential solutions, and derives and reflects on a set of design implications for usable authentication mechanisms in smart homes.
Designing Creative AI Partners with COFI: A Framework for Modeling Interaction in Human-AI Co-Creative Systems
Human-AI co-creativity involves both humans and AI collaborating on a shared creative product as partners. In a creative collaboration, interaction dynamics, such as turn-taking, contribution type,…
The Who in Explainable AI: How AI Background Shapes Perceptions of AI Explanations
A mixed-methods study of how two different groups of whos—people with and without a background in AI—perceive different types of AI explanations, finding that both groups had unwarranted faith in numbers, to different extents and for different reasons.


Normative vs. Pragmatic: Two Perspectives on the Design of Explanations in Intelligent Systems
While the normative view ensures a minimal standard as a “right to explanation”, the pragmatic view is likely the more challenging perspective and will benefit the most from knowledge and research in HCI to ensure a usable integration of explanations into intelligent systems.
Steps to take before intelligent user interfaces become real
K. Höök, Interact. Comput., 2000
Why and why not explanations improve the intelligibility of context-aware intelligent systems
It is shown that explanations describing why the system behaved a certain way resulted in better understanding and stronger feelings of trust, suggesting that automatically providing explanations about a system's decision process can help improve the intelligibility of context-aware intelligent systems.
How it works: a field study of non-technical users interacting with an intelligent system
An investigation into how users come to understand an intelligent system as they use it in their daily work suggests an appropriate level of feedback for user interfaces of intelligent systems, provides a baseline level of complexity for user understanding, and highlights the challenges of making users aware of sensed inputs for such systems.
Are explanations always important?: a study of deployed, low-cost intelligent interactive systems
The results of two studies examining the comprehensibility of, and desire for, explanations in deployed, low-cost IIS reveal that comprehensibility does not always depend on explanations, and that the perceived cost of viewing explanations tends to outweigh the anticipated benefits.
Bringing Transparency Design into Practice
A stage-based participatory process for designing transparent interfaces, incorporating the perspectives of users, designers, and providers, is presented, advancing existing UI guidelines toward more transparency in complex real-world design scenarios involving multiple stakeholders.
Evaluating Visual Explanations for Similarity-Based Recommendations: User Perception and Performance
A participatory process of designing explanation interfaces with multiple explanatory goals for three similarity-based recommendation models is introduced and it is suggested that the user-preferred interface may not guarantee the same level of performance.
Power to the People: The Role of Humans in Interactive Machine Learning
It is argued that the design process for interactive machine learning systems should involve users at all stages: explorations that reveal human interaction patterns and inspire novel interaction methods, as well as refinement stages to tune details of the interface and choose among alternatives.
Tell me more?: the effects of mental model soundness on personalizing an intelligent agent
The results suggest that by helping end users understand a system's reasoning, intelligent agents may elicit more and better feedback, thus more closely aligning their output with each user's intentions.
Guidelines for Human-AI Interaction
This work proposes 18 generally applicable design guidelines for human-AI interaction that can serve as a resource to practitioners working on the design of applications and features that harness AI technologies, and to researchers interested in the further development of human-AI interaction design principles.