Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty

@article{Bhatt2021UncertaintyAA,
  title={Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty},
  author={Umang Bhatt and Yunfeng Zhang and Javier Antor{\'a}n and Qingzi Vera Liao and Prasanna Sattigeri and Riccardo Fogliato and Gabrielle Gauthier Melan{\c{c}}on and Ranganath Krishnan and Jason Stanley and Omesh Tickoo and Lama Nachman and Rumi Chunara and Adrian Weller and Alice Xiang},
  journal={Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society},
  year={2021}
}
Algorithmic transparency entails exposing system properties to various stakeholders for purposes that include understanding, improving, and contesting predictions. Most research into algorithmic transparency has focused on explainability, which attempts to provide reasons for a machine learning model's behavior to stakeholders. However, understanding a model's specific behavior alone might not be enough for stakeholders to gauge whether the model is wrong or…

Citations

Alfabetización estadística y comunicación de riesgo para la vacunación contra la COVID-19: una revisión de alcance
Statistical literacy plays a key role in communicating health-related risks in general, and the risks of COVID-19 vaccination in particular.
Bayesian Deep Learning via Subnetwork Inference
This work shows that it suffices to perform inference over a small subset of model weights in order to obtain accurate predictive posteriors, and proposes a subnetwork selection strategy that aims to maximally preserve the model's predictive uncertainty.
Can We Leverage Predictive Uncertainty to Detect Dataset Shift and Adversarial Examples in Android Malware Detection?
An empirical study evaluating the quality of malware detectors' predictive uncertainties finds that predictive uncertainty indeed helps achieve reliable malware detection in the presence of dataset shift, but cannot cope with adversarial evasion attacks.
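As an illustration of the general idea (not that paper's specific malware pipeline), a common baseline score for flagging potentially shifted inputs is the predictive entropy of a classifier's softmax output; the threshold below is a free parameter, typically tuned on in-distribution validation data:

```python
import numpy as np

def flag_shifted(probs, threshold=0.5):
    """Flag inputs whose predictive entropy exceeds `threshold` as
    potentially out-of-distribution or dataset-shifted.

    probs: (n, K) array of softmax class probabilities.
    Returns a boolean mask of length n.
    """
    eps = 1e-12  # avoid log(0)
    entropy = -(probs * np.log(probs + eps)).sum(axis=-1)
    return entropy > threshold
```

Consistent with the paper's finding, a score like this can surface distribution shift but offers no defense against adversarial evasion, where inputs are crafted to look confident.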
Explaining Uncertainty Estimates
Both uncertainty estimation and interpretability are important factors for trustworthy machine learning systems. However, there is little work at the intersection of these two areas. We address this…
Evaluating subgroup disparity using epistemic uncertainty in mammography
This paper explores how epistemic uncertainty can be used to evaluate disparity in patient demographics (race) and data acquisition (scanner) subgroups for breast density assessment on a dataset of 108,190 mammograms collected from 33 clinical sites.
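A minimal sketch of how epistemic uncertainty might be compared across subgroups, assuming class probabilities from T stochastic forward passes (e.g. MC dropout or an ensemble); the epistemic/aleatoric split used here is the standard mutual-information (BALD) decomposition, not necessarily the exact score from the paper:

```python
import numpy as np

def subgroup_epistemic_uncertainty(mc_probs, groups):
    """Mean epistemic uncertainty per subgroup.

    mc_probs: (T, n, K) class probabilities from T stochastic passes.
    groups:   (n,) subgroup label for each sample.
    """
    eps = 1e-12
    mean_p = mc_probs.mean(axis=0)                               # (n, K)
    # Total uncertainty: entropy of the averaged prediction.
    total = -(mean_p * np.log(mean_p + eps)).sum(axis=-1)
    # Aleatoric part: average entropy of the individual predictions.
    aleatoric = -(mc_probs * np.log(mc_probs + eps)).sum(-1).mean(0)
    epistemic = total - aleatoric                                # mutual information
    return {g: float(epistemic[groups == g].mean()) for g in np.unique(groups)}
```

A subgroup where the stochastic passes disagree will score higher, which is the kind of disparity signal the paper examines across race and scanner subgroups.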
Fair Conformal Predictors for Applications in Medical Imaging
This paper explores how conformal methods can complement deep learning models by providing both a clinically intuitive way of expressing model uncertainty and facilitating model transparency in clinical workflows, and finds conformal prediction to be a promising framework with the potential to increase clinical usability and transparency for better collaboration between deep learning algorithms and clinicians.
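The core split-conformal construction behind such predictors can be sketched as follows; this is the generic recipe with a "1 minus softmax of the true class" nonconformity score, not the paper's specific clinical setup:

```python
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction for classification.

    cal_probs:  (n, K) softmax scores on a held-out calibration set
    cal_labels: (n,)   true labels for the calibration set
    test_probs: (m, K) softmax scores for test inputs
    Returns a list of label arrays with ~(1 - alpha) marginal coverage.
    """
    n = len(cal_labels)
    # Nonconformity score: 1 minus the probability of the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile level, clipped to 1.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    qhat = np.quantile(scores, level, method="higher")
    # Include every class whose nonconformity is within the threshold.
    return [np.flatnonzero(1.0 - p <= qhat) for p in test_probs]
```

The size of each returned set is itself an intuitive uncertainty signal: a large set flags a case the model cannot narrow down, which is part of what makes the framework attractive in clinical workflows.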
Goldilocks: Consistent Crowdsourced Scalar Annotations with Relative Uncertainty
Presents Goldilocks, a novel crowd rating elicitation technique for collecting calibrated scalar annotations that also distinguishes inherent ambiguity from inter-annotator disagreement and can improve consistency in domains where interpretation of the scale is not universal.
Improving Robustness and Efficiency in Active Learning with Contrastive Loss
This paper introduces supervised contrastive active learning (SCAL) by leveraging the contrastive loss for active learning in a supervised setting. We propose efficient query strategies in activeExpand
Machine Learning Practices Outside Big Tech: How Resource Constraints Challenge Responsible Development
Uncovers a number of tensions introduced or exacerbated by these organizations' resource constraints: tensions between privacy and ubiquity, resource management and performance optimization, and access and monopolization.
Mitigating Sampling Bias and Improving Robustness in Active Learning
This paper presents simple and efficient methods to mitigate sampling bias in active learning while achieving state-of-the-art accuracy and model robustness, and proposes an unbiased query strategy that selects informative data samples of diverse feature representations with SCAL and DMM.

References

Showing 1-10 of 242 references
Communicating scientific uncertainty
A protocol for summarizing the many possible sources of uncertainty in standard terms is offered, designed to impose a minimal burden on scientists, while gradually educating those whose decisions depend on their work.
Transparency: Motivations and Challenges
This work highlights and reviews settings where transparency may cause harm, discussing connections across privacy, multi-agent game theory, economics, fairness, and trust.
Visualizing Uncertainty About the Future
This review of current practice for communicating uncertainties visually, using examples drawn from sport, weather, climate, health, economics, and politics, shows how the effectiveness of some graphics clearly depends on the relative numeracy of an audience.
With Malice Towards None: Assessing Uncertainty via Equalized Coverage
This work presents an operational methodology that achieves equitable treatment by offering rigorous distribution-free coverage guarantees holding in finite samples, and tests the applicability of the proposed framework on real data, demonstrating that equalized coverage constructs unbiased prediction intervals, unlike competitive methods.
The effects of communicating uncertainty on public trust in facts and numbers
An examination of communicating epistemic uncertainty about facts across different topics shows that, whereas people do perceive greater uncertainty when it is communicated, there is only a small decrease in trust in the numbers and in the trustworthiness of the source, mostly for verbal uncertainty communication.
Predictive Uncertainty Estimation via Prior Networks
This work proposes a new framework for modeling predictive uncertainty called Prior Networks (PNs), which explicitly model distributional uncertainty by parameterizing a prior distribution over predictive distributions, and evaluates PNs on the tasks of identifying out-of-distribution samples and detecting misclassification on the MNIST dataset, where they are found to outperform previous methods.
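The key object in a Prior Network is a Dirichlet over the simplex of class distributions. A toy sketch of how its outputs (concentration parameters) might be turned into a mean prediction and a simple distributional-uncertainty score; the paper itself uses richer measures such as differential entropy and mutual information:

```python
import numpy as np

def dirichlet_summary(alphas):
    """Summarize Dirichlet concentration parameters from a Prior Network.

    alphas: (..., K) positive concentration parameters.
    Returns (mean class probabilities, distributional-uncertainty score).
    The score K / alpha0 equals 1 for a flat Dirichlet (all alpha_k = 1)
    and shrinks as the total concentration alpha0 grows.
    """
    alphas = np.asarray(alphas, dtype=float)
    alpha0 = alphas.sum(axis=-1, keepdims=True)  # Dirichlet precision
    mean_probs = alphas / alpha0                 # expected categorical
    uncertainty = alphas.shape[-1] / alpha0[..., 0]
    return mean_probs, uncertainty
```

Intuitively, an in-distribution input should yield a sharp Dirichlet (large alpha0, low score) while an out-of-distribution input yields a flat one, which is what PNs exploit for OOD detection.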
Effect of confidence and explanation on accuracy and trust calibration in AI-assisted decision making
It is shown that a confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making, which may also depend on whether the human can bring in enough unique knowledge to complement the AI's errors.
Increasing Trust in AI Services through Supplier's Declarations of Conformity
This paper envisions an SDoC (supplier's declaration of conformity) for AI services containing purpose, performance, safety, security, and provenance information, to be completed and voluntarily released by AI service providers for examination by consumers.
Machine Learning Explainability for External Stakeholders
Reports on a closed-door, day-long workshop among academics, industry experts, legal scholars, and policymakers, convened to develop a shared language around explainability and to understand the current shortcomings of, and potential solutions for, deploying explainable machine learning in service of transparency goals.
Explainable machine learning in deployment
This study explores how organizations view and use explainability for stakeholder consumption, and synthesizes the limitations of current explainability techniques that hamper their use for end users.