Corpus ID: 220042122

A Methodology for Creating AI FactSheets

@article{Richards2020AMF,
  title={A Methodology for Creating AI FactSheets},
  author={John T. Richards and David Piorkowski and Michael Hind and Stephanie Houde and Aleksandra Mojsilović},
  journal={ArXiv},
  year={2020},
  volume={abs/2006.13796}
}
As AI models and services are used in a growing number of high-stakes areas, a consensus is forming around the need for a clearer record of how these models and services are developed to increase trust. Several proposals for higher quality and more consistent AI documentation have emerged to address ethical and legal concerns and general social impacts of such systems. However, there is little published work on how to create this documentation. This is the first work to describe a methodology…

A Human-Centered Methodology for Creating AI FactSheets

A methodology for creating the form of AI documentation the authors call FactSheets is described, along with the issues to consider and the questions to explore with the relevant people in an organization who will be creating and consuming AI facts.

Aspirations and Practice of Model Documentation: Moving the Needle with Nudging and Traceability

A prototype tool named DocML is designed following a set of design guidelines that aim to support the documentation practice for machine learning models, including collocating the documentation environment with the coding environment, nudging the consideration of model card sections during model development, and deriving documentation from, and tracing it to, the source.

The Sanction of Authority: Promoting Public Trust in AI

It is argued that being accountable to the public in ways that earn their trust, through elaborating rules for AI and developing resources for enforcing these rules, is what will ultimately make AI trustworthy enough to be woven into the fabric of the authors' society.

Fiduciary Responsibility: Facilitating Public Trust in Automated Decision Making

Automated decision-making systems are increasingly deployed and affect the public in a multitude of positive and negative ways. Governmental and private institutions use these systems to…

Artificial Intelligence Measurement and Evaluation Workshop Summary

The panelists highlighted a challenge in the AI system measurement space: unknown-unknowns (the things we do not know that we do not know) in ML cause cascading problems in the models. Resolving…

Investigating Explainability of Generative AI for Code through Scenario-based Design

This work explores explainability needs for GenAI for code and demonstrates how human-centered approaches can drive the technical development of XAI in novel domains.

Characterizing and Detecting Mismatch in Machine-Learning-Enabled Systems

It is shown that how each role prioritizes the importance of relevant mismatches varies, potentially contributing to mismatched assumptions between roles, and that the identified mismatch categories can be specified as machine-readable descriptors, contributing to improved development of ML-enabled systems.

Fifty Shades of Grey: In Praise of a Nuanced Approach Towards Trustworthy Design

It is argued that certain features which are commonly seen to benefit trust appear applicable to promoting trust in and through Virtual Labs, but whether they promote trust is a function of how systematically designers consider various (potentially conflicting) stakeholder trust needs.

A Literature Review on Ethics for AI in Biomedical Research and Biobanking

The review revealed current ‘hot’ topics in AI ethics related to biomedical research, showed the need for an ethically mindful and balanced approach to AI in biomedical research, and highlighted the need to understand and resolve practical problems arising from the use of AI in science and society.

Bias Impact Analysis of AI in Consumer Mobile Health Technologies: Legal, Technical, and Policy

This work examines algorithmic bias in consumer mobile health technologies (mHealth), a term describing mobile technology and associated sensors that provide healthcare solutions across patient journeys, and explores to what extent current mechanisms help mitigate potential risks associated with unwanted bias in the intelligent systems that make up the mHealth domain.

References

Showing 1–10 of 13 references

Experiences with Improving the Transparency of AI Models and Services

A clearer picture is assembled of the needs and the various challenges faced in creating accurate and useful AI documentation, and recommendations are made for easing the collection and flexible presentation of AI facts to promote transparency.

Increasing Trust in AI Services through Supplier's Declarations of Conformity

This paper envisions an SDoC (supplier's declaration of conformity) for AI services containing purpose, performance, safety, security, and provenance information, to be completed and voluntarily released by AI service providers for examination by consumers.

ABOUT ML: Annotation and Benchmarking on Understanding and Transparency of Machine Learning Lifecycles

The case is made for the project's relevance and effectiveness in consolidating disparate efforts across a variety of stakeholders, as well as bringing in the perspectives of currently missing voices that will be valuable in shaping future conversations.

Model Cards for Model Reporting

This work proposes model cards, a framework that can be used to document any trained machine learning model in the application fields of computer vision and natural language processing, and provides cards for two supervised models: one trained to detect smiling faces in images, and one trained to detect toxic comments in text.

The Dataset Nutrition Label: A Framework To Drive Higher Data Quality Standards

The Dataset Nutrition Label is a diagnostic framework that lowers the barrier to standardized data analysis by providing a distilled yet comprehensive overview of dataset "ingredients" before AI model development.

General data protection regulation

Presentation on the Personal Data Protection Office of the UAB and the Open Science policy. It formed part of the conference "Les politiques d'Open Data / Open Acces: Implicacions a la recerca" (Open Data / Open Access policies: implications for research).

Datasheets for datasets

Documentation to facilitate communication between dataset creators and dataset consumers is presented.

Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science

It is argued that data statements will help alleviate issues related to exclusion and bias in language technology, lead to better precision in claims about how natural language processing research can generalize and thus better engineering results, protect companies from public embarrassment, and ultimately lead to language technology that meets its users in their own preferred linguistic style.

User Centered Design

United States Consumer Product Safety Commission. Testing and certification