Designing for Responsible Trust in AI Systems: A Communication Perspective

@inproceedings{Liao2022DesigningFR,
  title={Designing for Responsible Trust in AI Systems: A Communication Perspective},
  author={Q. Vera Liao and S. Shyam Sundar},
  booktitle={Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency},
  year={2022}
}
Current literature and public discourse on “trust in AI” are often focused on the principles underlying trustworthy AI, with insufficient attention paid to how people develop trust. Given that AI systems differ in their level of trustworthiness, two open questions come to the fore: how should AI trustworthiness be responsibly communicated to ensure appropriate and equitable trust judgments by different users, and how can we protect users from deceptive attempts to earn their trust? We draw from… 


References

Showing 1-10 of 70 references

The Sanction of Authority: Promoting Public Trust in AI

It is argued that being accountable to the public in ways that earn their trust, through elaborating rules for AI and developing resources for enforcing these rules, is what will ultimately make AI trustworthy enough to be woven into the fabric of society.

Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI

This work discusses a model of trust inspired by, but not identical to, interpersonal trust as defined by sociologists, and incorporates a formalization of 'contractual trust', whereby a user's trust in an AI model is trust that some implicit or explicit contract will hold.

Trust in Automation: Designing for Appropriate Reliance

This review considers trust from the organizational, sociological, interpersonal, psychological, and neurological perspectives, and considers how the context, automation characteristics, and cognitive processes affect the appropriateness of trust.

The relationship between trust in AI and trustworthy machine learning technologies

This paper provides a systematic approach to relating considerations about trust from the social sciences to trustworthiness technologies proposed for AI-based services and products, and introduces the concept of a Chain of Trust.

Effect of confidence and explanation on accuracy and trust calibration in AI-assisted decision making

It is shown that confidence scores can help calibrate people's trust in an AI model, but that trust calibration alone is not sufficient to improve AI-assisted decision making, which may also depend on whether the human can bring enough unique knowledge to complement the AI's errors.

Are Explanations Helpful? A Comparative Study of the Effects of Explanations in AI-Assisted Decision-Making

This paper presents a comparison of the effects of a set of established XAI methods on AI-assisted decision making, and highlights three desirable properties that ideal AI explanations should satisfy: improving people's understanding of the AI model, helping people recognize model uncertainty, and supporting people's calibrated trust in the model.

Increasing Trust in AI Services through Supplier's Declarations of Conformity

This paper envisions a supplier's declaration of conformity (SDoC) for AI services containing purpose, performance, safety, security, and provenance information, to be completed and voluntarily released by AI service providers for examination by consumers.

Let Me Explain: Impact of Personal and Impersonal Explanations on Trust in Recommender Systems

It is suggested that recommender systems should provide richer explanations in order to increase their perceived recommendation quality and trustworthiness.

The Who in Explainable AI: How AI Background Shapes Perceptions of AI Explanations

A mixed-methods study of how two different groups of "whos" (people with and without a background in AI) perceive different types of AI explanations, finding that both groups had unwarranted faith in numbers, though to different extents and for different reasons.
...