Corpus ID: 215768885

Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims

@article{Brundage2020TowardTA,
  title={Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims},
  author={Miles Brundage and Shahar Avin and Jasmine Wang and Haydn Belfield and Gretchen Krueger and Gillian K. Hadfield and Heidy Khlaaf and Jingying Yang and Helen Toner and Ruth Fong and Tegan Maharaj and Pang Wei Koh and Sara Hooker and Jade Leung and Andrew Trask and Emma Bluemke and Jonathan Lebensold and Cullen O'Keefe and Mark Koren and Th{\'e}o Ryffel and J. B. Rubinovitz and Tamay Besiroglu and Federica Carugati and Jack Clark and Peter Eckersley and Sarah de Haas and Maritza L. Johnson and Ben Laurie and Alex Ingerman and Igor Krawczuk and Amanda Askell and Rosario Cammarota and Andrew J. Lohn and David Krueger and Charlotte Stix and Peter Henderson and Logan Graham and Carina E. A. Prunkl and Bianca Martin and Elizabeth Seger and Noa Zilberman and Se{\'a}n {\'O} h{\'E}igeartaigh and Frens Kroeger and Girish Sastry and Rebecca Kagan and Adrian Weller and Brian Tse and Elizabeth Barnes and Allan Dafoe and Paul Scharre and Ariel Herbert-Voss and Martijn Rasser and Shagun Sodhani and Carrick Flynn and Thomas Krendl Gilbert and Lisa Dyer and Saif Khan and Yoshua Bengio and Markus Anderljung},
  journal={ArXiv},
  year={2020},
  volume={abs/2004.07213}
}
With the recent wave of progress in artificial intelligence (AI) has come a growing awareness of the large-scale impacts of AI systems, and recognition that existing regulations and norms in industry and academia are insufficient to ensure responsible AI development. In order for AI developers to earn trust from system users, customers, civil society, governments, and other stakeholders that they are building AI responsibly, they will need to make verifiable claims to which they can be held… 
Trustworthy AI: From Principles to Practices
  • Bo Li, Peng Qi, +5 authors Bowen Zhou
  • Computer Science, ArXiv
  • 2021
TLDR
A systematic approach is proposed that considers the entire lifecycle of AI systems, ranging from data acquisition to model development and deployment, and finally to continuous monitoring and governance, and identifies key opportunities and challenges in the future development of trustworthy AI systems.
AI Certification: Advancing Ethical Practice by Reducing Information Asymmetries
TLDR
Ongoing changes in AI technology suggest that AI certification regimes should be designed to emphasize governance criteria of enduring value, such as ethics training for AI developers, and to adjust technical criteria as the technology changes.
Responsible AI: A Primer for the Legal Community
TLDR
The legal community should have a good understanding of the responsible development and deployment of artificial intelligence in order to inform, translate, and advise on the legal implications of AI systems.
Trustworthy AI Inference Systems: An Industry Research View
TLDR
An industry research view for approaching the design, deployment, and operation of trustworthy Artificial Intelligence (AI) inference systems, which highlights opportunities and challenges in AI systems using trusted execution environments combined with more recent advances in cryptographic techniques to protect data in use.
Situated Accountability: Ethical Principles, Certification Standards, and Explanation Methods in Applied AI
TLDR
This study illustrates important flaws in the current enactment of accountability as an ethical and social value which, if left unchecked, risks undermining the pursuit of responsible AI.
Audit and Assurance of AI Algorithms: A framework to ensure ethical algorithmic practices in Artificial Intelligence
TLDR
The critical areas required for auditing and assurance of algorithms, developed to professionalize and industrialize AI, machine learning, and related algorithms, are reviewed, with the aim of sparking discussion in this novel field of study and practice.
Putting AI ethics to work: are the tools fit for purpose?
TLDR
An assessment of these practical frameworks through the lens of known best practices for impact assessment and audit of technology is provided, identifying gaps in current AI ethics tools for auditing and risk assessment that should be considered going forward.
Ethical machines: The human-centric use of artificial intelligence
TLDR
The criticality and urgency of engaging multi-disciplinary teams of researchers, practitioners, policy makers, and citizens to co-develop and evaluate, in the real world, algorithmic decision-making processes designed to maximize fairness, accountability, and transparency while respecting privacy is highlighted.
A Decentralized Approach Towards Responsible AI in Social Ecosystems
TLDR
A framework that provides computational facilities for parties in a social ecosystem to produce the desired responsible AI behaviors and argues the case that a decentralized approach is the most promising path towards Responsible AI from both the computer science and social science perspectives.
Z-Inspection®: A Process to Assess Trustworthy AI
TLDR
This article outlines a novel process based on applied ethics, namely, Z-Inspection®, to assess if an AI system is trustworthy, and is the first process to assess trustworthy AI in practice.

References

SHOWING 1-10 OF 250 REFERENCES
Increasing Trust in AI Services through Supplier's Declarations of Conformity
TLDR
This paper envisions a Supplier's Declaration of Conformity (SDoC) for AI services containing purpose, performance, safety, security, and provenance information, to be completed and voluntarily released by AI service providers for examination by consumers.
Closing the AI accountability gap: defining an end-to-end framework for internal algorithmic auditing
TLDR
The proposed auditing framework is intended to contribute to closing the accountability gap in the development and deployment of large-scale artificial intelligence systems by embedding a robust process to ensure audit integrity.
Principles Alone Cannot Guarantee Ethical AI
Artificial intelligence (AI) ethics is now a global topic of discussion in academic and policy circles. At least 84 public–private initiatives have produced statements describing high-level…
The relationship between trust in AI and trustworthy machine learning technologies
TLDR
This paper provides a systematic approach to relate considerations about trust from the social sciences to trustworthiness technologies proposed for AI-based services and products and introduces the concept of Chain of Trust.
Algorithmic Accountability in the Administrative State
How will artificial intelligence (AI) transform government? Stemming from a major study commissioned by the Administrative Conference of the United States (ACUS), we highlight the promise and…
The Windfall Clause: Distributing the Benefits of AI for the Common Good
TLDR
The Windfall Clause is offered, which is an ex ante commitment by AI firms to donate a significant amount of any eventual extremely large profits to benefit humanity broadly, with particular attention towards mitigating any downsides from deployment of windfall-generating AI.
The global landscape of AI ethics guidelines
In the past five years, private companies, research institutions and public sector organizations have issued principles and guidelines for ethical artificial intelligence (AI). However, despite an…
Linking Artificial Intelligence Principles
TLDR
LAIP is introduced, an effort and platform for linking and analyzing different Artificial Intelligence Principles, and the paper argues for the necessity of incorporating various AI Principles into a comprehensive framework and focusing on how they can interact with and complement each other.
The Politics of Verification
How to evaluate compliance is among the most difficult questions that arise during treaty negotiations and ratification debates. Arguments over verification principles and procedures are increasingly…
Accountable Algorithms
Many important decisions historically made by people are now made by computers. Algorithms count votes, approve loan and credit card applications, target citizens or neighborhoods for police…