Saving Face: Investigating the Ethical Concerns of Facial Recognition Auditing

@article{Raji2020SavingFI,
  title={Saving Face: Investigating the Ethical Concerns of Facial Recognition Auditing},
  author={Inioluwa Deborah Raji and Timnit Gebru and Margaret Mitchell and Joy Buolamwini and Joonseok Lee and Emily L. Denton},
  journal={Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society},
  year={2020}
}
Although essential to revealing biased performance, well-intentioned attempts at algorithmic auditing can have effects that may harm the very populations these measures are meant to protect. This concern is even more salient when auditing biometric systems such as facial recognition, where the data is sensitive and the technology is often used in ethically questionable ways. We demonstrate a set of five ethical concerns in the particular case of auditing commercial facial processing…

Citations
A Human in the Loop is Not Enough: The Need for Human-Subject Experiments in Facial Recognition
TLDR
This position paper argues for the necessity of empirical studies of human-in-the-loop facial recognition systems and outlines several technical and ethical challenges that arise when conducting such empirical studies and when interpreting their results.
A Mental Trespass? Unveiling Truth, Exposing Thoughts and Threatening Civil Liberties with Non-Invasive AI Lie Detection
TLDR
This paper argues that artificial intelligence-based, non-invasive lie detection technologies are likely to advance rapidly in the coming years and that it would be irresponsible to wait any longer before discussing their implications.
Covert Embodied Choice: Decision-Making and the Limits of Privacy Under Biometric Surveillance
TLDR
This work examines how individuals adjust their behavior when incentivized to avoid the algorithmic prediction of their intent, and presents results from a virtual reality task in which gaze, movement, and other physiological signals are tracked.
It’s Not Just Black and White: Classifying Defendant Mugshots Based on the Multidimensionality of Race and Ethnicity
TLDR
This work uses the case of defendant mugshots from Miami-Dade County's (Florida, U.S.) criminal justice system to develop a novel technique for generating multidimensional race-ethnicity classifications for four groups: Black Hispanic, White Hispanic, Black non-Hispanic, and White non-Hispanic.
Mitigating dataset harms requires stewardship: Lessons from 1000 papers
TLDR
This work studies three influential face and person recognition datasets (DukeMTMC, MS-Celeb-1M, and Labeled Faces in the Wild) by analyzing nearly 1000 papers that cite them, finding that the creation of derivative datasets and models, broader technological and social change, and dataset management practices can introduce a wide range of ethical concerns.
Machines Learn Appearance Bias in Face Recognition
TLDR
A transfer learning model trained on human subjects' first impressions of personality traits in other faces judges a person's dominance from their face more accurately than other traits such as trustworthiness or likeability, even for emotionally neutral faces.
Legal and Ethical Challenges in Multimedia Research
TLDR
This position article aims to increase awareness of such concepts and existing legal constraints in the multimedia research community, initiate a discussion on community guidelines for conducting multimedia research in a lawful and ethical manner, and identify some important research directions to support a vision of lawful and ethical multimedia research.
Casual Conversations: A dataset for measuring fairness in AI
This paper introduces a novel "fairness" dataset to measure the robustness of AI models to a diverse set of ages, genders, apparent skin tones, and ambient lighting conditions. Our dataset is composed…
Towards measuring fairness in AI: the Casual Conversations dataset
TLDR
A novel dataset is provided to help researchers evaluate their computer vision and audio models for accuracy across a diverse set of ages, genders, apparent skin tones, and ambient lighting conditions, along with a thorough analysis of these models in terms of fair treatment of people from various backgrounds.
Bias Silhouette Analysis: Towards Assessing the Quality of Bias Metrics for Word Embedding Models
Word embedding models reflect bias towards genders, ethnicities, and other social groups present in the underlying training data. Metrics such as ECT, RNSB, and WEAT quantify bias in these models…
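
The snippet above names several embedding-bias metrics without defining them. As a rough illustration of what such a metric computes, here is a minimal NumPy sketch of the WEAT effect size from Caliskan et al. (2017); the function names and inputs are our own, and this is not code from the cited paper.

import numpy as np

def cosine(u, v):
    # Cosine similarity between two 1-D vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B):
    # s(w, A, B): mean cosine similarity of word vector w to the
    # attribute vectors in A, minus its mean similarity to those in B.
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    # WEAT effect size: difference between the mean associations of the
    # target sets X and Y, normalised by the standard deviation of the
    # associations of all target words pooled together.
    s_x = [association(x, A, B) for x in X]
    s_y = [association(y, A, B) for y in Y]
    return (np.mean(s_x) - np.mean(s_y)) / np.std(s_x + s_y)

For example, X and Y might hold vectors for stereotypically male and female names, and A and B vectors for career and family terms; an effect size near zero indicates little measured association bias.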

References

Showing 10 of 61 references.
Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products
TLDR
The audit design and structured disclosure procedure used in the Gender Shades study are outlined, and new performance metrics from the targeted companies IBM, Microsoft, and Megvii (Face++) on the Pilot Parliaments Benchmark (PPB) as of August 2018 are presented.
How Computers See Gender: An Evaluation of Gender Classification in Commercial Facial Analysis and Image Labeling Services
TLDR
It is found that FA services performed consistently worse on transgender individuals and were universally unable to classify non-binary genders, and that user perceptions about gender performance and identity contradict the way gender performance is encoded into the computer vision infrastructure.
The Misgendering Machines
  • Os Keyes
  • Psychology, Computer Science
  • Proc. ACM Hum. Comput. Interact.
  • 2018
TLDR
It is shown that automatic gender recognition (AGR) consistently operationalises gender in a trans-exclusive way and consequently carries disproportionate risk for trans people subject to it.
Gender Recognition or Gender Reductionism?: The Social Implications of Embedded Gender Recognition Systems
TLDR
It is found that transgender individuals have overwhelmingly negative attitudes towards AGR and fundamentally question whether it can accurately recognize such a subjective aspect of their identity.
Oxford Handbook on AI Ethics Book Chapter on Race and Gender
TLDR
A holistic and multifaceted approach is needed to alleviate bias in machine learning systems, including standardization bodies determining what types of systems can be used in which scenarios, making sure that automated decision tools are created by people from diverse backgrounds, and understanding the historical and political factors that disadvantage certain groups who are subjected to these tools.
Where fairness fails: data, algorithms, and the limits of antidiscrimination discourse
Problems of bias and fairness are central to data justice, as they speak directly to the threat that ‘big data’ and algorithmic decision-making may worsen already existing injustices. In the…
Face Recognition Algorithm Bias: Performance Differences on Images of Children and Adults
TLDR
This work identifies the best score-level fusion technique for the child demographic and shows a negative bias for each algorithm on children, further supporting the need for a deeper investigation into algorithm bias as a function of age cohorts.
Deep Learning for Face Recognition: Pride or Prejudiced?
TLDR
A better understanding of state-of-the-art deep learning networks would enable researchers to address the challenge of bias in AI and develop fairer systems.
Diversity in Faces
TLDR
Diversity in Faces (DiF) provides a dataset of one million annotated human face images for advancing the study of facial diversity; the authors believe that making the extracted coding schemes available on a large set of faces can accelerate research and development towards more fair and accurate facial recognition systems.