On the Influence of Explainable AI on Automation Bias

@article{Schemmer2022OnTI,
  title={On the Influence of Explainable AI on Automation Bias},
  author={Maximilian Schemmer and Niklas K{\"u}hl and Carina Benz and Gerhard Satzger},
  journal={ArXiv},
  year={2022},
  volume={abs/2204.08859}
}
Artificial intelligence (AI) is gaining momentum, and its importance for the future of work in many areas, such as medicine and banking, is continuously rising. However, insights on the effective collaboration of humans and AI are still rare. Typically, AI supports humans in decision-making by addressing human limitations. However, it may also evoke human bias, especially in the form of automation bias as an over-reliance on AI advice. We aim to shed light on the potential to influence… 

A Meta-Analysis of the Utility of Explainable Artificial Intelligence in Human-AI Decision-Making

A statistical meta-analysis synthesizing existing XAI studies to derive implications across the research finds a statistically significant positive impact of XAI on users' performance, and indicates that human-AI decision-making tends to yield better task performance on text data.

On the Effect of Information Asymmetry in Human-AI Teams

It is demonstrated that, because humans in many real-world situations have access to contextual information the AI lacks, they can use this information to adjust the AI's decision, resulting in complementary team performance (CTP).

An Empirical Evaluation of Predicted Outcomes as Explanations in Human-AI Decision-Making

In this work, we empirically examine human-AI decision-making in the presence of explanations based on estimated outcomes. This type of explanation provides a human decision-maker with expected…

Painting the black box white: experimental findings from applying XAI to an ECG reading setting

A questionnaire-based experiment involving 44 cardiology residents and specialists in an AI-supported ECG reading task investigates the relationship between users' characteristics and their perception of AI and XAI systems, contributing to the evaluation of AI-based support systems from a human-AI interaction perspective.

On Explanations, Fairness, and Appropriate Reliance in Human-AI Decision-Making

Explanations have been framed as an essential feature for better and fairer human-AI decision-making. In the context of fairness, this has not been appropriately studied, as prior works have mostly…

A Case Study in Engineering a Conversational Programming Assistant's Persona

The Programmer’s Assistant is an experimental prototype software development environment that integrates a chatbot with a code editor, establishing a conversational interaction pattern and a set of conventions…

Appropriate Reliance on AI Advice: Conceptualization and the Effect of Explanations

AI advice is becoming increasingly popular, e.g., in investment and medical treatment decisions. As this advice is typically imperfect, decision-makers have to exert discretion as to whether actually…

References

Showing 1–10 of 53 references

Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)

This survey provides an entry point for interested researchers and practitioners to learn key aspects of the young and rapidly growing body of research on XAI, reviews existing approaches to the topic, discusses surrounding trends, and presents major research trajectories.

Explainable Artificial Intelligence: Objectives, Stakeholders, and Future Research Opportunities

This research note describes exemplary risks of black-box AI, the consequent need for explainability, and previous research on Explainable AI (XAI) in information systems research.

Effect of confidence and explanation on accuracy and trust calibration in AI-assisted decision making

It is shown that confidence scores can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision-making, which may also depend on whether the human can bring in enough unique knowledge to complement the AI's errors.

Artificial intelligence, bias and clinical safety

This analysis is written with the dual aim of helping clinical safety professionals to critically appraise current medical AI research from a quality and safety perspective, and supporting research and development in AI by highlighting some of the clinical safety questions that must be considered if medical application of these exciting technologies is to be successful.

Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance

This work conducts mixed-method user studies on three datasets, where an AI with accuracy comparable to humans helps participants solve a task (explaining itself in some conditions), and observes complementary improvements from AI augmentation that were not increased by explanations.

Conceptualising Artificial Intelligence as a Digital Healthcare Innovation: An Introductory Review

Anmol Arora, Medical devices, 2020
This review uses established management literature to explore artificial intelligence as a digital healthcare innovation and highlight potential risks and opportunities.

Beyond Accuracy: The Role of Mental Models in Human-AI Team Performance

This work highlights two key properties of an AI’s error boundary, parsimony and stochasticity, and a property of the task, dimensionality, and shows experimentally how these properties affect humans’ mental models of AI capabilities and the resulting team performance.

Automation Use and Automation Bias

The availability of automation and automated decision aids feeds into a general human tendency to travel the road of least cognitive effort. A series of studies on “automation bias,” the tendency to…

Hybrid Intelligence

It is argued that the most likely paradigm for the division of labor between humans and machines in the next decades is Hybrid Intelligence, which aims at using the complementary strengths of human intelligence and AI, so that they can perform better than each of the two could separately.

Metrics for Explainable AI: Challenges and Prospects

This paper discusses specific methods for evaluating the goodness of explanations, whether users are satisfied by explanations, how well users understand the AI systems, and how the human-XAI work system performs.
...