Corpus ID: 232403994

A Multistakeholder Approach Towards Evaluating AI Transparency Mechanisms

@article{Lucic2021AMA,
  title={A Multistakeholder Approach Towards Evaluating AI Transparency Mechanisms},
  author={Ana Lucic and Madhulika Srikumar and Umang Bhatt and Alice Xiang and Ankur Taly and Qingzi Vera Liao and M. de Rijke},
  journal={ArXiv},
  year={2021},
  volume={abs/2103.14976}
}
Given that there are a variety of stakeholders involved in, and affected by, decisions from machine learning (ML) models, it is important to consider that different stakeholders have different transparency needs [14]. Previous work found that the majority of deployed transparency mechanisms primarily serve technical stakeholders [2]. In our work, we want to investigate how well transparency mechanisms might work in practice for a more diverse set of stakeholders by conducting a large-scale… 
Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty
TLDR
This work describes how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems; it also outlines methods for displaying uncertainty to stakeholders and recommends how to collect the information required to incorporate uncertainty into existing ML pipelines.
Towards the Use of Saliency Maps for Explaining Low-Quality Electrocardiograms to End Users
When using medical images for diagnosis, either by clinicians or artificial intelligence (AI) systems, it is important that the images are of high quality. When an image is of low quality, the…

References

Showing 1-10 of 19 references
Machine Learning Explainability for External Stakeholders
TLDR
This work reports on a closed-door, day-long workshop among academics, industry experts, legal scholars, and policymakers, convened to develop a shared language around explainability and to understand the current shortcomings of, and potential solutions for, deploying explainable machine learning in service of transparency goals.
Metrics for Explainable AI: Challenges and Prospects
TLDR
This paper discusses specific methods for evaluating the goodness of explanations, whether users are satisfied by explanations, how well users understand the AI systems, and how the human-XAI work system performs.
Explainable machine learning in deployment
TLDR
This study explores how organizations view and use explainability for stakeholder consumption, and synthesizes the limitations of current explainability techniques that hamper their use for end users.
Toward Algorithmic Accountability in Public Services: A Qualitative Study of Affected Community Perspectives on Algorithmic Decision-making in Child Welfare Services
TLDR
The findings indicate that general distrust in the existing system contributes significantly to low comfort with algorithmic decision-making; the study also identifies strategies for improving comfort through greater transparency and better communication.
Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR
TLDR
It is suggested that data controllers should offer a particular type of explanation, unconditional counterfactual explanations, to support these three aims; such explanations describe the smallest change to the world that can be made to obtain a desirable outcome, or to arrive at the closest possible world, without needing to explain the internal logic of the system.
A Survey of Methods for Explaining Black Box Models
TLDR
A classification of the main problems addressed in the literature, with respect to the notion of explanation and the type of black box system, is provided to help researchers find the proposals most useful for their own work.
Questioning the AI: Informing Design Practices for Explainable AI User Experiences
TLDR
This work develops an algorithm-informed XAI question bank in which user needs for explainability are represented as prototypical questions users might ask about the AI, and uses it as a study probe to identify gaps between current XAI algorithmic work and practices for creating explainable AI products.
Interpreting Interpretability: Understanding Data Scientists' Use of Interpretability Tools for Machine Learning
TLDR
The results indicate that data scientists over-trust and misuse interpretability tools, and that few of the study's participants were able to accurately describe the visualizations output by these tools.
Towards A Rigorous Science of Interpretable Machine Learning
TLDR
This position paper defines interpretability, describes when it is needed (and when it is not), suggests a taxonomy for rigorous evaluation, and exposes open questions towards a more rigorous science of interpretable machine learning.
FOCUS: Flexible Optimizable Counterfactual Explanations for Tree Ensembles
TLDR
A simple approximation technique is introduced that is effective for finding counterfactual explanations for predictions of the original model under a range of distance metrics; the resulting explanations are significantly closer to the original instances than those produced by other methods designed for tree ensembles, across four distance metrics.
...