Corpus ID: 232403994

A Multistakeholder Approach Towards Evaluating AI Transparency Mechanisms

@article{Lucic2021AMA,
  title={A Multistakeholder Approach Towards Evaluating AI Transparency Mechanisms},
  author={Ana Lucic and Madhulika Srikumar and Umang Bhatt and Alice Xiang and Ankur Taly and Qingzi Vera Liao and M. de Rijke},
  journal={ArXiv},
  year={2021},
  volume={abs/2103.14976}
}
Given that there are a variety of stakeholders involved in, and affected by, decisions from machine learning (ML) models, it is important to consider that different stakeholders have different transparency needs [14]. Previous work found that the majority of deployed transparency mechanisms primarily serve technical stakeholders [2]. In our work, we want to investigate how well transparency mechanisms might work in practice for a more diverse set of stakeholders by conducting a large-scale… 
Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty
TLDR
This work describes how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems; it outlines methods for displaying uncertainty to stakeholders and recommends how to collect the information required for incorporating uncertainty into existing ML pipelines.
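As a rough illustration of surfacing uncertainty alongside a prediction (a minimal sketch, not the paper's method; the dataset, model, and reporting format are placeholders), one can report the spread of per-tree probabilities from an ensemble:

```python
# Minimal sketch: expose predictive uncertainty by reporting the spread of
# per-tree class probabilities from a random forest (illustrative only).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Probability of the positive class from each individual tree, for one instance.
probs = np.array([tree.predict_proba(X_test[:1])[0, 1] for tree in model.estimators_])

# A point estimate plus an uncertainty band that a non-technical stakeholder can read.
print(f"P(positive) = {probs.mean():.2f} +/- {probs.std():.2f}")
```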

References

Showing 1-10 of 19 references
Machine Learning Explainability for External Stakeholders
TLDR
This work reports on a closed-door, day-long workshop between academics, industry experts, legal scholars, and policymakers, conducted to develop a shared language around explainability and to understand the current shortcomings of, and potential solutions for, deploying explainable machine learning in service of transparency goals.
Metrics for Explainable AI: Challenges and Prospects
TLDR
This paper discusses specific methods for evaluating the goodness of explanations, whether users are satisfied by explanations, how well users understand the AI systems, and how the human-XAI work system performs.
Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR
TLDR
It is suggested that data controllers should offer a particular type of explanation, unconditional counterfactual explanations, to support these aims; such explanations describe the smallest change to the world that can be made to obtain a desirable outcome, or to arrive at the closest possible world, without needing to explain the internal logic of the system.
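As a loose illustration of the counterfactual idea (a simplified sketch, not the optimisation proposed in the paper; the dataset, model, and single-feature search are assumptions made for brevity), one can look for the smallest single-feature change that flips a classifier's decision:

```python
# Simplified counterfactual sketch: find the smallest single-feature shift that
# flips the model's decision (illustrative; not the paper's formulation).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
clf = LogisticRegression(max_iter=5000).fit(X, y)

x = X[clf.predict(X) == 0][0]             # an instance currently given the unfavourable outcome
steps = np.linspace(-3, 3, 121)
steps = steps[np.argsort(np.abs(steps))]  # try small changes before large ones

best = None
for j in range(X.shape[1]):
    scale = X[:, j].std()
    for s in steps:
        x_cf = x.copy()
        x_cf[j] += s * scale
        if clf.predict(x_cf.reshape(1, -1))[0] == 1:
            if best is None or abs(s) < abs(best[1]):
                best = (j, s)
            break                         # smallest |shift| found for this feature

if best is not None:
    j, s = best
    print(f"Counterfactual: change feature {j} by {s:+.2f} standard deviations "
          f"to obtain the desirable outcome.")
```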
A Survey of Methods for Explaining Black Box Models
TLDR
A classification of the main problems addressed in the literature, with respect to the notion of explanation and the type of black box system, is provided to help researchers find the proposals most useful for their own work.
Questioning the AI: Informing Design Practices for Explainable AI User Experiences
TLDR
An algorithm-informed XAI question bank is developed in which user needs for explainability are represented as prototypical questions users might ask about the AI; the question bank is then used as a study probe to identify gaps between current XAI algorithmic work and the practices needed to create explainable AI products.
Toward Algorithmic Accountability in Public Services: A Qualitative Study of Affected Community Perspectives on Algorithmic Decision-making in Child Welfare Services
TLDR
The findings indicate that general distrust in the existing system contributes significantly to low comfort with algorithmic decision-making; the study identifies strategies for improving comfort through greater transparency and improved communication.
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
TLDR
LIME is proposed, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner by learning an interpretable model locally around the prediction.
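For readers who want to see the local-surrogate idea in code, a short usage sketch with the open-source `lime` package follows (assuming `lime` and `scikit-learn` are installed; the dataset and model are placeholders):

```python
# Usage sketch of LIME's local surrogate explanation with the `lime` package.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
clf = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# Fit an interpretable model locally around one prediction and list the
# top-weighted features for that instance.
explanation = explainer.explain_instance(data.data[0], clf.predict_proba, num_features=5)
print(explanation.as_list())
```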
Model Cards for Model Reporting
TLDR
This work proposes model cards, a framework that can be used to document any trained machine learning model in the application fields of computer vision and natural language processing, and provides cards for two supervised models: one trained to detect smiling faces in images, and one trained to detect toxic comments in text.
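As a rough, non-normative sketch of the kind of information a model card records (the field names and values below are illustrative placeholders, not the schema or results from the paper):

```python
# Illustrative model card structure; the paper's proposed sections are richer.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    model_details: str
    intended_use: str
    out_of_scope_uses: str
    evaluation_data: str
    metrics: dict = field(default_factory=dict)   # e.g. disaggregated by subgroup
    caveats_and_recommendations: str = ""

card = ModelCard(
    model_details="Toxicity classifier, logistic regression over TF-IDF features, v1.2",
    intended_use="Flagging comments for human review on public forums",
    out_of_scope_uses="Fully automated moderation without human oversight",
    evaluation_data="Held-out comments sampled across dialects and topics",
    metrics={"AUC (overall)": 0.94, "AUC (identity-term subset)": 0.88},  # placeholder numbers
    caveats_and_recommendations="Performance drops on comments containing identity terms.",
)
print(card)
```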
Interpreting Interpretability: Understanding Data Scientists' Use of Interpretability Tools for Machine Learning
TLDR
The findings indicate that data scientists over-trust and misuse interpretability tools, and that few of the study's participants were able to accurately describe the visualizations output by these tools.
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
  • C. Rudin · Nat. Mach. Intell. · 2019
TLDR
This Perspective clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare, and computer vision.
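To make the contrast concrete, a minimal sketch of an inherently interpretable alternative follows (a shallow decision tree on a placeholder dataset; an illustration, not an example from the paper):

```python
# Minimal sketch: a shallow decision tree whose full decision logic can be
# printed and audited, in contrast to a post-hoc-explained black box.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# The entire model is the explanation: every root-to-leaf path is a readable rule.
print(export_text(tree, feature_names=list(data.feature_names)))
```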