Corpus ID: 211204699

Do you comply with AI? - Personalized explanations of learning algorithms and their impact on employees' compliance behavior

@article{Khl2020DoYC,
  title={Do you comply with AI? - Personalized explanations of learning algorithms and their impact on employees' compliance behavior},
  author={Niklas K{\"u}hl and Jodie Lobana and Christian Meske},
  journal={ArXiv},
  year={2020},
  volume={abs/2002.08777}
}
Machine learning algorithms are key technological enablers of artificial intelligence (AI). Due to their inherent complexity, these learning algorithms represent black boxes that are difficult to comprehend, which in turn influences compliance behavior. Hence, compliance with the recommendations of such artifacts, which can significantly impact employees' task performance, is still subject to research, and the personalization of AI explanations seems to be a promising concept in this regard. In our…

I Don't Get IT, but IT seems Valid! The Connection between Explainability and Comprehensibility in (X)AI Research

In an empirical study, 165 participants are asked about the perceived explainability of different machine learning models and an XAI augmentation; the results reveal high comprehensibility and problem-solving performance of the XAI augmentation compared to the tested machine learning models.

Intelligent Decision Assistance Versus Automated Decision-Making: Enhancing Knowledge Work Through Explainable Artificial Intelligence

This work conceptualizes a new class of decision support systems (DSS), namely Intelligent Decision Assistance (IDA), based on a literature review of two research streams, DSS and automation, and proposes to use techniques of Explainable AI (XAI) while withholding concrete AI recommendations.

Explainable Artificial Intelligence: Objectives, Stakeholders, and Future Research Opportunities

This research note describes exemplary risks of black-box AI, the consequent need for explainability, and previous research on Explainable AI (XAI) in information systems research.

Explainable AI for tailored electricity consumption feedback - An experimental evaluation of visualizations

This work investigated the application of XAI in an area where specific insights can have a significant effect on consumer behaviour, namely electricity use, and created five visualizations with ML and XAI methods from electricity consumption time series for highly personalized feedback, considering existing domain-specific design knowledge.

Should I Follow AI-based Advice? Measuring Appropriate Reliance in Human-AI Decision-Making

It is proposed to view appropriate reliance (AR) as a two-dimensional construct that measures the ability to discriminate advice quality and behave accordingly; the measurement concept is derived, its application illustrated, and potential future research outlined.

Reviewing the Need for Explainable Artificial Intelligence (xAI)

A systematic review of xAI literature on the topic identifies four thematic debates central to how xAI addresses the black-box problem and synthesizes the findings into a future research agenda to further the xAI body of knowledge.

Training Novices: The Role of Human-AI Collaboration and Knowledge Transfer

A framework on how human-AI collaboration (HAIC) can be utilized to train novices on particular tasks is proposed, the role of explicit and tacit knowledge in this training process via HAIC is illustrated, and a preliminary experiment design is outlined to assess the ability of AI systems in HAIC to act as a trainer that transfers TSEK to novices who do not possess prior TSEK.

Transparency and Trust in Human-AI-Interaction: The Role of Model-Agnostic Explanations in Computer Vision-Based Decision Support

The theoretical discussion highlights that XAI can support trust in computer vision systems, and AI systems in general, especially through increased understandability and predictability, while the empirical results show that the AI sometimes used questionable or irrelevant image features to detect malaria.

AI-Assisted and Explainable Hate Speech Detection for Social Media Moderators - A Design Science Approach

Results show that the instantiated design knowledge in form of a dashboard is perceived as valuable and that XAI features increase the perception of the artifact’s usefulness, ease of use, trustworthiness as well as the intention to use it.

References

Showing 1-10 of 29 references

Personalized Explanation for Machine Learning: a Conceptualization

This work derives a conceptualization of personalized explanation by defining and structuring the problem based on prior work on machine learning explanation, personalization (in machine learning) and concepts and techniques from other domains such as privacy and knowledge elicitation.

Tell me more?: the effects of mental model soundness on personalizing an intelligent agent

The results suggest that by helping end users understand a system's reasoning, intelligent agents may elicit more and better feedback, thus more closely aligning their output with each user's intentions.

Hybrid Intelligence

It is argued that the most likely paradigm for the division of labor between humans and machines in the next decades is Hybrid Intelligence, which aims at using the complementary strengths of human intelligence and AI, so that they can perform better than each of the two could separately.

When Will AI Exceed Human Performance? Evidence from AI Experts

The results from a large survey of machine learning researchers on their beliefs about progress in AI suggest there is a 50% chance of AI outperforming humans in all tasks in 45 years and of automating all human jobs in 120 years.

On cognitive preferences and the plausibility of rule-based models

It is argued that—all other things being equal—longer explanations may be more convincing than shorter ones, and that the predominant bias for shorter models may not be suitable when it comes to user acceptance of the learned models.

Explanations From Intelligent Systems: Theoretical Foundations and Implications for Practice

Empirical studies, mainly with knowledge-based systems, are reviewed and linked to a sound theoretical base, which combines a cognitive effort perspective, cognitive learning theory, and Toulmin's model of argumentation.

Improving Employees' Compliance Through Information Systems Security Training: An Action Research Study

This study proposes a training program based on two theories, the universal constructive instructional theory and the elaboration likelihood model, and validates it for IS security policy compliance training through an action research project.

Self-efficacy and mental models in learning to program

The results show that self-efficacy for programming is influenced by previous programming experience and increases as a student progresses through an introductory programming course, and that both the mental model and self-efficacy affect course performance.

Improving End-User Proficiency: Effects of Conceptual Training and Nature of Interaction

It is suggested that end-user performance is enhanced through training methods that provide good conceptual models, but only if users form and retain those conceptual mental models.

Artificial Intelligence, Jobs and the Future of Work: Racing with the Machines

Artificial intelligence is rapidly entering our daily lives in the form of driverless cars, automated online assistants and virtual reality experiences. In so doing, AI has already…