On the Relation of Trust and Explainability: Why to Engineer for Trustworthiness

@article{Kastner2021OnTR,
  title={On the Relation of Trust and Explainability: Why to Engineer for Trustworthiness},
  author={Lena Kästner and Markus Langer and Veronika Lazar and Astrid Schomäcker and Timo Speith and Sarah Sterz},
  journal={2021 IEEE 29th International Requirements Engineering Conference Workshops (REW)},
  year={2021},
  pages={169-175}
}
  • Published 11 August 2021
Recently, requirements for the explainability of software systems have gained prominence. One of the primary motivators for such requirements is that explainability is expected to facilitate stakeholders’ trust in a system. Although this seems intuitively appealing, recent psychological studies indicate that explanations do not necessarily facilitate trust. Thus, explainability requirements might not be suitable for promoting trust. One way to accommodate this finding is, we suggest, to focus on… 

Designing for Conversational System Trustworthiness: The Impact of Model Transparency on Trust and Task Performance

TLDR
This study investigated how varying model confidence and making confidence levels transparent to the user may influence perceptions of trust and performance in an information retrieval task assisted by a conversational system.

The Value of Measuring Trust in AI - A Socio-Technical System Perspective

TLDR
This work provides a starting point for researchers and designers to re-evaluate the current focus on trust in AI, improving the alignment between what empirical research paradigms may offer and the expectations of real-world human-AI interactions.

How to Evaluate Explainability? - A Case for Three Criteria

TLDR
This vision paper provides a multidisciplinary motivation for three such quality criteria concerning the information that systems should provide: comprehensibility, reliability, and assessability. It aims to fuel the discussion of these criteria so that adequate evaluation methods for them can be conceived.

“There Is Not Enough Information”: On the Effects of Explanations on Perceptions of Informational Fairness and Trustworthiness in Automated Decision-Making

Automated decision systems (ADS) are increasingly used for consequential decision-making. These systems often rely on sophisticated yet opaque machine learning models, which do not allow for…

A Means-End Account of Explainable Artificial Intelligence

Explainable artificial intelligence (XAI) seeks to produce explanations for those machine learning methods which are deemed opaque. However, there is considerable disagreement about what this means…

Post-Hoc Explanations Fail to Achieve their Purpose in Adversarial Contexts

TLDR
The authors show that post-hoc explanation algorithms are unsuitable for achieving the transparency objectives inherent in legal norms, and argue that the objectives underlying “explainability” obligations need to be discussed more explicitly, as they can often be better achieved through other mechanisms.

A Review of Taxonomies of Explainable Artificial Intelligence (XAI) Methods

TLDR
This paper reviews recent approaches to constructing taxonomies of XAI methods, discusses general challenges concerning them as well as their individual advantages and limitations, and proposes and discusses three possible solutions.

References

Showing 1–10 of 105 references

Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI

TLDR
This work discusses a model of trust inspired by, but not identical to, interpersonal trust as defined by sociologists, and incorporates a formalization of 'contractual trust', such that trust between a user and an AI model is trust that some implicit or explicit contract will hold.

Trust and Trustworthiness

In this chapter, we discuss when, how, and why trust and trustworthiness arise to support cooperation within and across organizations. To do so, we first define trust and trustworthiness, discuss how…

Trustworthiness

I argue that trustworthiness is an epistemic desideratum. It does not reduce to justified or reliable true belief, but figures in the reason why justified or reliable true beliefs are often…

Two Challenges for CI Trustworthiness and How to Address Them

We argue that, to be trustworthy, Computational Intelligence (CI) has to do what it is entrusted to do for permissible reasons and to be able to give rationalizing explanations of its behavior which…

Explanation and trust: what to tell the user in security and AI?

  • W. Pieters, Ethics and Information Technology, 2010
TLDR
This paper investigates the relation between explanation and trust in the context of computer science, applies the resulting conceptual framework to both AI and information security, and shows the benefit of the framework for both fields by means of examples.

Exploring Explainability: A Definition, a Model, and a Knowledge Catalogue

TLDR
Based on an interdisciplinary systematic literature review, this work proposes a definition, a model, and a knowledge catalogue for explainability that illustrate how explainability interacts with other quality aspects and how it may impact various quality dimensions of a system.

Trust in Automation: Designing for Appropriate Reliance

TLDR
This review considers trust from the organizational, sociological, interpersonal, psychological, and neurological perspectives, and examines how context, automation characteristics, and cognitive processes affect the appropriateness of trust.

How To Be Trustworthy

The book articulates and defends a core notion of trustworthiness as avoiding unfulfilled commitments. This is motivated via accounts of both trust and distrust in terms of perceived commitment.

How Much Information?: Effects of Transparency on Trust in an Algorithmic Interface

TLDR
This work focuses on how transparent design of algorithmic interfaces can promote awareness and foster trust, using an online field experiment to test three levels of system transparency in the high-stakes context of peer assessment.

Explainability as a non-functional requirement: challenges and recommendations

TLDR
This work assesses the relationship between explanations and transparency and its impact on software quality, offers recommendations for the elicitation and analysis of explainability, and discusses strategies for practice.
...