Examining the effects of power status of an explainable artificial intelligence system on users’ perceptions

@article{Ha2020ExaminingTE,
  title={Examining the effects of power status of an explainable artificial intelligence system on users’ perceptions},
  author={Taehyun Ha and Young June Sah and Yuri Park and Sangwon Lee},
  journal={Behaviour \& Information Technology},
  year={2020},
  volume={41},
  pages={946--958},
  url={https://api.semanticscholar.org/CorpusID:229484089}
}
An investigation of how people attribute the perceived ability of XAI systems based on perceived attributional qualities, and of how the power status of the XAI system and anthropomorphism affect this attribution process, indicated that an XAI system with a higher power status led users to perceive its outputs as more controllable by intention, and that higher perceived stability and uncontrollability resulted in greater confidence in the system's ability.

Explainable artificial intelligence in information systems: A review of the status quo and future research directions

This work provides an overview of the most receptive outlets, the development of the academic discussion, and the most relevant underlying concepts and methodologies of XAI research in IS in general and electronic markets in particular using a structured literature review.

ARTIFICIAL INTELLIGENCE (AI) AND THE IMPACT OF ENHANCING THE CONSISTENCY AND INTERPRETATION OF FINANCIAL STATEMENT IN THE CLASSIFIED HOTELS IN AQABA, JORDAN

This study offers a deeper understanding of how artificial intelligence works creatively with accounting systems to help managers in hotel establishments produce high-quality accounting information by reducing information risks.

The Effect of Artificial Intelligence (AI) on the Quality and Interpretation of Financial Statements in the Hotels Classified in the AQABA Special Economic Zone (ASEZA)

The findings of a simple linear regression analysis of the impact of AI implemented in Jordanian hotels on the integration of accounting information systems, and of the association between AI and that integration, indicate that the constant (fixed limit) value amounted to (2.060), with the corresponding Beta value assessed by t-test.

Enhancing Generative AI Usage for Employees: Key Drivers and Barriers

Results suggest that employees' perceived Gen-AI intelligence and warmth positively impact their usage through the mediation of performance expectancy, and that the perceived severity of Gen-AI has a negative influence on employees’ usage.

eXplainable artificial intelligence (XAI) in business management research: a success/failure system perspective

This study collects and analyzes business management research related to XAI using common management keywords as the basis and utilizes a success/failure system to explore how this theory can be applied to artificial intelligence and business management research.

Roles of Attribution and Government Intervention in the Trust Repair Process During the COVID-19 Pandemic

This study, based on both attribution theory and trust repair theory, explores the effects of attribution and government intervention on the trust repair process and willingness to reconcile.

Explainable AI: Definition and attributes of a good explanation for health AI

To realise the potential of AI, it is critical to shed light on two fundamental questions of explanation for safety-critical AI such as health-AI that remain unanswered: what is an explanation in health-AI, and what are the attributes of a good explanation in health-AI?

Anthropomorphism in AI-enabled technology: A literature review

A descriptive literature review of 55 studies seeks to identify research trends, AIET types, theoretical foundations, and methods, and the proposed conceptual framework for exploring the interplay of anthropomorphism with its antecedents and consequences provides a nomological network for future research.

Supporting User Critiques of AI Systems via Training Dataset Explanations: Investigating Critique Properties and the Impact of Presentation Style

This work investigates how two presentation styles for training dataset explanations support users’ critique of an automated hiring system and shows that presentation style can impact critique emphasis, critique accuracy, and subjective impressions of explanation utility.

Why and why not explanations improve the intelligibility of context-aware intelligent systems

It is shown that explanations describing why the system behaved a certain way resulted in better understanding and stronger feelings of trust, and that automatically providing explanations about a system's decision process can help mitigate the intelligibility problem.

The Impact of Causal Attributions on System Evaluation in Usability Tests

Overall, the results suggest notable influences of users' attribution patterns on their evaluation of system quality, especially in situations of success.

Explainable Artificial Intelligence (XAI)

This research paper delves into the essence of XAI, unraveling its significance across diverse domains such as healthcare, finance, and criminal justice, and scrutinizes the delicate balance between interpretability and performance, shedding light on instances where the pursuit of accuracy may compromise explainability.

A Meta-Analysis of Relationships Linking Service Failure Attributions to Customer Outcomes

When they experience service failures, customers look for causes. They seek to understand whether the service firm could have prevented the failure (controllability attribution) and whether the cause is likely to recur (stability attribution).

The Ultimate Attribution Error: Extending Allport's Cognitive Analysis of Prejudice

Allport's The Nature of Prejudice is a social psychological classic. Its delineation of the components and principles of prejudice remains modern, especially its handling of cognitive factors.

On seeing human: a three-factor theory of anthropomorphism.

A theory to explain when people are likely to anthropomorphize and when they are not is described, focused on three psychological determinants--the accessibility and applicability of anthropocentric knowledge, the motivation to explain and understand the behavior of other agents, and the desire for social contact and affiliation.

Individual Differences in the Calibration of Trust in Automation

Attributing the cause of automation errors to factors external to the automation fosters an understanding of tasks and situations in which automation differs in reliability and may lead to more appropriate trust.

A theory of motivation for some classroom experiences.

It appears that a general theory of motivation is under development that has important implications for the understanding of classroom thought and behavior.

Evaluating Effects of User Experience and System Transparency on Trust in Automation

It is found that trust in its entirety evolves and eventually stabilizes as an operator repeatedly interacts with a technology, and that a higher level of automation transparency may mitigate the "cry wolf" effect.
...