The relationship between trust in AI and trustworthy machine learning technologies

@inproceedings{Toreini2020TheRB,
  title={The relationship between trust in AI and trustworthy machine learning technologies},
  author={Ehsan Toreini and M. Aitken and Kovila P. L. Coopamootoo and Karen Elliott and Carlos Vladimiro Gonzalez Zelaya and A. van Moorsel},
  booktitle={Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency},
  year={2020}
}
To design and develop AI-based systems that users and the larger public can justifiably trust, one needs to understand how machine learning technologies impact trust. To guide the design and implementation of trusted AI-based systems, this paper provides a systematic approach to relate considerations about trust from the social sciences to trustworthiness technologies proposed for AI-based services and products. We start from the ABI+ (Ability, Benevolence, Integrity, Predictability) framework…
Technologies for Trustworthy Machine Learning: A Survey in a Socio-Technical Context
TLDR
It is argued that four categories of system properties are instrumental in achieving the policy objectives, namely fairness, explainability, auditability, and safety & security (FEAS), and that these properties need to be considered across all stages of the machine learning life cycle, from data collection through run-time model inference.
The Sanction of Authority: Promoting Public Trust in AI
TLDR
It is argued that being accountable to the public in ways that earn their trust, through elaborating rules for AI and developing resources for enforcing these rules, is what will ultimately make AI trustworthy enough to be woven into the fabric of our society.
Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims
TLDR
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems and their associated development processes, with a focus on providing evidence about the safety, security, fairness, and privacy protection of AI systems.
Trustworthy AI
TLDR
The tutorial on “Trustworthy AI” is proposed to address six critical issues in enhancing user and public trust in AI systems, namely: bias and fairness, explainability, robust mitigation of adversarial attacks, improved privacy and security in model building, and being decent.
To Trust or Not to Trust a Regressor: Estimating and Explaining Trustworthiness of Regression Predictions
TLDR
RETRO-VIZ, a method for estimating and explaining the trustworthiness of regression predictions, is introduced, and it is found that RETRO scores negatively correlate with prediction error across 117 experimental settings, indicating that RETRO provides a useful measure to distinguish trustworthy predictions from untrustworthy ones.
On Mismatched Detection and Safe, Trustworthy Machine Learning
  • K. Varshney
  • Computer Science
  • 2020 54th Annual Conference on Information Sciences and Systems (CISS)
  • 2020
TLDR
This work draws on the theory of mismatched hypothesis testing from statistical signal processing, taking advantage of performance characterizations in that literature to better understand various machine learning issues.
AI-Blueprint for Deep Neural Networks
TLDR
This work considers methods and metrics at different AI development phases that can be used to achieve higher confidence that a developed system satisfies trustworthiness properties, supporting the development of trustworthy systems.
A survey on artificial intelligence assurance
TLDR
This manuscript provides a systematic review of research works relevant to AI assurance published between 1985 and 2021, and aims to provide a structured alternative to the landscape.
An Interpretable Graph-based Mapping of Trustworthy Machine Learning Research
TLDR
This paper builds a co-occurrence network of words using a web-scraped corpus of more than 7,000 recent peer-reviewed ML papers, consisting of papers both related and unrelated to TwML, to characterize the comprehension of TwML research.
Towards an Equitable Digital Society: Artificial Intelligence (AI) and Corporate Digital Responsibility (CDR)
TLDR
The paper seeks to harmonise and align approaches, illustrating the opportunities and threats of AI, while raising awareness of Corporate Digital Responsibility (CDR) as a potential collaborative mechanism to demystify governance complexity and to establish an equitable digital society.

References

Showing 1–10 of 107 references
Trusting Intelligent Machines: Deepening Trust Within Socio-Technical Systems
Intelligent machines have reached capabilities that go beyond a level that a human being can fully comprehend without sufficiently detailed understanding of the underlying mechanisms. The choice of…
Towards the Science of Security and Privacy in Machine Learning
TLDR
It is shown that there are (possibly unavoidable) tensions between model complexity, accuracy, and resilience that must be calibrated for the environments in which they will be used, and the opposing relationship between model accuracy and resilience to adversarial manipulation is formally explored.
Why do we trust new technology? A study of initial trust formation with organizational information systems
TLDR
Results indicate that subjective norm and the cognitive-reputation, calculative, and organizational situational normality base factors significantly influence initial trusting beliefs and other downstream trust constructs.
Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI
The rapid spread of artificial intelligence (AI) systems has precipitated a rise in ethical and human rights-based frameworks intended to guide the development and use of these technologies. Despite…
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
TLDR
LIME is proposed, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction.
SoK: Security and Privacy in Machine Learning
TLDR
It is apparent that constructing a theoretical understanding of the sensitivity of modern ML algorithms to the data they analyze, à la PAC theory, will foster a science of security and privacy in ML.
Measuring trust inside organisations
Purpose – The purpose of this paper is to examine the extent to which measures and operationalisations of intra‐organisational trust reflect the essential elements of the existing conceptualisation…
SafetyNets: Verifiable Execution of Deep Neural Networks on an Untrusted Cloud
TLDR
SafetyNets develops and implements a specialized interactive proof protocol for verifiable execution of a class of deep neural networks, i.e., those that can be represented as arithmetic circuits, and demonstrates that the run-time costs of this framework are low for both the client and server.
Security Evaluation of Support Vector Machines in Adversarial Environments
TLDR
A formal general framework for the empirical evaluation of the security of machine-learning systems is introduced, and the feasibility of evasion, poisoning, and privacy attacks against SVMs in real-world security problems is demonstrated.
Moving from trust to trustworthiness: Experiences of public engagement in the Scottish Health Informatics Programme
TLDR
This paper aims to move beyond simple descriptions of whether publics trust researchers, or in whom members of the public place their trust, and to explore more fully the bases of public trust in science, what trust implies, and equally what it means for research/researchers to be trustworthy.