Measuring Model Biases in the Absence of Ground Truth

@article{Aka2021MeasuringMB,
  title={Measuring Model Biases in the Absence of Ground Truth},
  author={Osman Aka and Ken Burke and Alex Bauerle and Christina Greer and Margaret Mitchell},
  journal={Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society},
  year={2021}
}
The measurement of bias in machine learning often focuses on model performance across identity subgroups (such as man and woman) with respect to ground-truth labels. However, these methods do not directly measure the associations that a model may have learned, for example between labels and identity subgroups. Further, measuring a model's bias requires a fully annotated evaluation dataset, which may not be easily available in practice. We present an elegant mathematical solution that tackles both…
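
The abstract cuts off before the method details. As an illustrative sketch only, the Python snippet below assumes the learned associations are scored with normalized pointwise mutual information (nPMI) over labels co-predicted for the same image, in the spirit of the word-association reference cited further down; all function and variable names here are hypothetical.

import math
from collections import Counter
from itertools import combinations

def npmi_scores(prediction_sets):
    # prediction_sets: one set of predicted labels per image.
    # Returns {(label_a, label_b): nPMI}; nPMI lies in [-1, 1] and higher
    # values indicate a stronger learned association (assumes no pair
    # co-occurs in every single image).
    n = len(prediction_sets)
    singles, pairs = Counter(), Counter()
    for labels in prediction_sets:
        singles.update(labels)
        pairs.update(combinations(sorted(labels), 2))
    scores = {}
    for (a, b), c_ab in pairs.items():
        p_ab = c_ab / n
        pmi = math.log(p_ab / ((singles[a] / n) * (singles[b] / n)))
        scores[(a, b)] = pmi / -math.log(p_ab)
    return scores

# Rank label pairs involving a hypothetical identity label "woman":
preds = [{"woman", "kitchen"}, {"man", "car"}, {"woman", "kitchen"}, {"man"}]
ranked = sorted(((s, pair) for pair, s in npmi_scores(preds).items()
                 if "woman" in pair), reverse=True)
print(ranked)

Note that this uses only the model's own predictions, which is what allows the measurement to proceed without a fully annotated evaluation set.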
Visual Identification of Problematic Bias in Large Label Spaces
TLDR
Different models and datasets for large label spaces can be systematically and visually analyzed and compared to make informed fairness assessments that tackle problematic bias, and the approach can be integrated into classical model and data pipelines.
Social Norm Bias: Residual Harms of Fairness-Aware Algorithms
TLDR
This work characterizes Social Norm Bias (SNoB), a subtle but consequential type of discrimination that may be exhibited by automated decision-making systems even when those systems achieve group fairness objectives. SNoB is quantified by measuring how an algorithm's predictions are associated with conformity to gender norms, which is itself measured using a machine learning approach.

References

SHOWING 1-10 OF 30 REFERENCES
The Open Images Dataset V4: Unified image classification, object detection, and visual relationship detection at scale
  • 2018
Equality of Opportunity in Supervised Learning
TLDR
This work proposes a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features, and shows how to optimally adjust any learned predictor so as to remove discrimination according to this definition.
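
As a concrete reading of that criterion: equal opportunity requires that true positive rates match across groups. A minimal sketch (helper names are hypothetical):

def true_positive_rate(y_true, y_pred, group, g):
    # Fraction of truly positive members of group g that are predicted positive.
    hits = [p == 1 for t, p, a in zip(y_true, y_pred, group) if t == 1 and a == g]
    return sum(hits) / len(hits)

def equal_opportunity_gap(y_true, y_pred, group):
    rates = {g: true_positive_rate(y_true, y_pred, group, g) for g in set(group)}
    return max(rates.values()) - min(rates.values())  # 0 means the criterion holds

The post-processing result mentioned in the summary amounts to adjusting the predictor (for example, with group-specific decision thresholds) until this gap is zero.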
Word Association Norms, Mutual Information and Lexicography
TLDR
The proposed measure, the association ratio, estimates word association norms directly from computer-readable corpora, making it possible to estimate norms for tens of thousands of words.
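
For reference, the association ratio is a pointwise-mutual-information estimate computed from corpus counts:

\mathrm{assoc}(x, y) = \log_2 \frac{P(x, y)}{P(x)\,P(y)} \approx \log_2 \frac{N \cdot f(x, y)}{f(x)\, f(y)},

where f(\cdot) counts occurrences (and co-occurrences within a window) in a corpus of N words; values well above zero mark word pairs that co-occur far more often than chance.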
Predictive Inequity in Object Detection
TLDR
This work annotates an existing large-scale dataset containing pedestrians with Fitzpatrick skin tones in ranges [1-3] or [4-6] and provides an in-depth comparative analysis of detection performance between these two skin-tone groupings, finding a disparity that neither time of day nor occlusion explains.
Amazon's Face Recognition Falsely Matched 28 Members of Congress With Mugshots
  • 2018
ConvNets and ImageNet Beyond Accuracy: Understanding Mistakes and Uncovering Biases
TLDR
It is experimentally demonstrated that the accuracy and robustness of ConvNets measured on ImageNet are vastly underestimated, and that explanations can mitigate the impact of misclassified adversarial examples from the perspective of the end user.
Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification
TLDR
It is shown that, in commercial API-based classifiers of gender from facial images (including IBM Watson Visual Recognition), the highest error rates involve images of dark-skinned women, while the most accurate results are for light-skinned men.
Women also Snowboard: Overcoming Bias in Captioning Models
TLDR
A new Equalizer model is introduced that ensures equal gender probability when gender evidence is occluded in a scene and confident predictions when gender evidence is present. It has lower error than prior work when describing images with people and mentioning their gender, and more closely matches the ground-truth ratio of sentences including women to sentences including men.
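
As a hedged illustration only (the coefficients and exact terms below are assumptions, not the paper's definitions), the two behaviors in that summary suggest a training objective of roughly this shape:

\mathcal{L} = \mathcal{L}_{\text{caption}} + \alpha \left| p(\text{woman} \mid I_{\text{masked}}) - p(\text{man} \mid I_{\text{masked}}) \right| - \beta \max\left( p(\text{woman} \mid I),\, p(\text{man} \mid I) \right),

where the first penalty pushes gendered-word probabilities toward equality when the person is occluded, and the second rewards confident gendered predictions when the evidence is visible.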
Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
TLDR
It is demonstrated that the criteria cannot all be simultaneously satisfied when recidivism prevalence differs across groups, and it is shown how disparate impact can arise when a recidivism prediction instrument (RPI) fails to satisfy the criterion of error rate balance.
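
The impossibility follows from an identity tying the error rates to prevalence p and positive predictive value (PPV):

\mathrm{FPR} = \frac{p}{1-p} \cdot \frac{1-\mathrm{PPV}}{\mathrm{PPV}} \cdot \left( 1 - \mathrm{FNR} \right).

If two groups have equal PPV (predictive parity) but different prevalence p, the factor p/(1-p) differs between them, so their false positive and false negative rates cannot both be equal: error rate balance must fail.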