Corpus ID: 215768885

Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims

@article{Brundage2020TowardTA,
  title={Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims},
  author={Miles Brundage and Shahar Avin and Jasmine Wang and Haydn Belfield and Gretchen Krueger and Gillian K. Hadfield and Heidy Khlaaf and Jingying Yang and Helen Toner and Ruth Fong and Tegan Maharaj and Pang Wei Koh and Sara Hooker and Jade Leung and Andrew Trask and Emma Bluemke and Jonathan Lebensold and Cullen O'Keefe and Mark Koren and Théo Ryffel and J. B. Rubinovitz and Tamay Besiroglu and Federica Carugati and Jack Clark and Peter Eckersley and Sarah de Haas and Maritza L. Johnson and Ben Laurie and Alex Ingerman and Igor Krawczuk and Amanda Askell and Rosario Cammarota and Andrew J. Lohn and David Krueger and Charlotte Stix and Peter Henderson and Logan Graham and Carina Prunkl and Bianca Martin and Elizabeth Seger and Noa Zilberman and Seán Ó hÉigeartaigh and Frens Kroeger and Girish Sastry and Rebecca Kagan and Adrian Weller and Brian Tse and Elizabeth Barnes and Allan Dafoe and Paul Scharre and Ariel Herbert-Voss and Martijn Rasser and Shagun Sodhani and Carrick Flynn and Thomas Krendl Gilbert and Lisa Dyer and Saif Khan and Yoshua Bengio and Markus Anderljung},
  journal={ArXiv},
  year={2020},
  volume={abs/2004.07213}
}
With the recent wave of progress in artificial intelligence (AI) has come a growing awareness of the large-scale impacts of AI systems, and recognition that existing regulations and norms in industry and academia are insufficient to ensure responsible AI development. In order for AI developers to earn trust from system users, customers, civil society, governments, and other stakeholders that they are building AI responsibly, they will need to make verifiable claims to which they can be held… 

Trustworthy AI: From Principles to Practices

This review provides AI practitioners with a comprehensive guide for building trustworthy AI systems and introduces a theoretical framework covering important aspects of AI trustworthiness, including robustness, generalization, explainability, transparency, reproducibility, fairness, privacy preservation, and accountability.

AI Certification: Advancing Ethical Practice by Reducing Information Asymmetries

Ongoing changes in AI technology suggest that AI certification regimes should be designed to emphasize governance criteria of enduring value, such as ethics training for AI developers, and to adjust technical criteria as the technology changes.

Situated Accountability: Ethical Principles, Certification Standards, and Explanation Methods in Applied AI

This study illustrates important flaws in the current enactment of accountability as an ethical and social value which, if left unchecked, risks undermining the pursuit of responsible AI.

Governing AI safety through independent audits

This Perspective proposes a pragmatic approach where independent audit of AI systems is central and would embody three AAA governance principles: prospective risk Assessments, operation Audit trails, and system Adherence to jurisdictional requirements.

Audit and Assurance of AI Algorithms: A framework to ensure ethical algorithmic practices in Artificial Intelligence

This paper reviews the critical areas required for the auditing and assurance of algorithms, in order to professionalize and industrialize AI, machine learning, and related algorithms, and to spark discussion in this novel field of study and practice.

Putting AI ethics to work: are the tools fit for purpose?

This paper assesses practical AI ethics frameworks through the lens of known best practices for impact assessment and technology audit, and identifies gaps in current AI ethics tools for auditing and risk assessment that should be addressed going forward.

A Decentralized Approach Towards Responsible AI in Social Ecosystems

This paper proposes computational human agency and regulation as the main mechanisms of intervention, and a decentralized computational infrastructure, or set of public utilities, as the computational means to bridge the gap between a technical system and the social system into which it will be deployed.

Trustworthy AI: A Computational Perspective

This survey offers a comprehensive appraisal of trustworthy AI from a computational perspective, helping readers understand the latest technologies for achieving trustworthy AI, and focuses on six of the most crucial dimensions.

Never trust, always verify: a roadmap for Trustworthy AI?

This paper examines trust in the context of AI-based systems to understand what it means for an AI system to be trustworthy, and to identify actions that need to be undertaken to ensure that AI systems are trustworthy.
...

References

SHOWING 1-10 OF 251 REFERENCES

Increasing Trust in AI Services through Supplier's Declarations of Conformity

This paper envisions a supplier's declaration of conformity (SDoC) for AI services containing purpose, performance, safety, security, and provenance information, to be completed and voluntarily released by AI service providers for examination by consumers.

Closing the AI accountability gap: defining an end-to-end framework for internal algorithmic auditing

The proposed auditing framework is intended to contribute to closing the accountability gap in the development and deployment of large-scale artificial intelligence systems by embedding a robust process to ensure audit integrity.

Principles alone cannot guarantee ethical AI

Brent Mittelstadt highlights significant differences between medical practice and AI development which suggest that a principled approach may not work in the case of AI.

The relationship between trust in AI and trustworthy machine learning technologies

This paper provides a systematic approach for relating considerations about trust from the social sciences to the trustworthiness technologies proposed for AI-based services and products, and introduces the concept of a Chain of Trust.

Algorithmic Accountability in the Administrative State

How will artificial intelligence (AI) transform government? Stemming from a major study commissioned by the Administrative Conference of the United States (ACUS), we highlight the promise and…

The Windfall Clause: Distributing the Benefits of AI for the Common Good

This paper offers the Windfall Clause, an ex ante commitment by AI firms to donate a significant amount of any eventual extremely large profits to benefit humanity broadly, with particular attention to mitigating any downsides from the deployment of windfall-generating AI.

The global landscape of AI ethics guidelines

A detailed analysis of 84 AI ethics reports around the world finds a convergence around core principles but substantial divergence on practical implementation, highlighting the importance of integrating guideline-development efforts with substantive ethical analysis and adequate implementation strategies.

Linking Artificial Intelligence Principles

This paper introduces LAIP, an effort and platform for linking and analyzing different Artificial Intelligence Principles, and argues for the necessity of incorporating various AI Principles into a comprehensive framework, focusing on how they can interact and complement each other.

The Politics of Verification

How to evaluate compliance is among the most difficult questions that arise during treaty negotiations and ratification debates. Arguments over verification principles and procedures are increasingly…

Accountable Algorithms

Many important decisions historically made by people are now made by computers. Algorithms count votes, approve loan and credit card applications, target citizens or neighborhoods for police…
...