FairFace Challenge at ECCV 2020: Analyzing Bias in Face Recognition

  • Tomás Sixta, Julio C. S. Jacques Junior, Pau Buch-Cardona, Eduard Vazquez, Sergio Escalera
  • ECCV Workshops
  • 2020
This work summarizes the 2020 ChaLearn Looking at People Fair Face Recognition and Analysis Challenge, describing the top-winning solutions and analyzing the results. The aim of the challenge was to evaluate the accuracy and the bias with respect to gender and skin colour of submitted algorithms on the task of 1:1 face verification in the presence of other confounding attributes. Participants were evaluated using an in-the-wild dataset based on reannotated IJB-C, further enriched by 12.5K new…
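As a sketch of the evaluation setting described above (1:1 face verification with per-group performance breakdowns), the following minimal example assumes cosine-similarity comparisons of embedding vectors; the function names, the threshold, and the accuracy metric are illustrative assumptions, not the challenge's actual protocol or bias metric:

```python
import numpy as np

def verify(emb_a, emb_b, threshold=0.5):
    """1:1 verification: accept a pair if the cosine similarity of the
    two face embeddings meets an operating threshold (assumed value)."""
    sim = np.sum(emb_a * emb_b, axis=1) / (
        np.linalg.norm(emb_a, axis=1) * np.linalg.norm(emb_b, axis=1))
    return sim >= threshold

def per_group_accuracy(decisions, labels, groups):
    """Break verification accuracy down by a demographic attribute
    (e.g. gender or skin colour) to expose performance gaps."""
    return {g: float(np.mean(decisions[groups == g] == labels[groups == g]))
            for g in np.unique(groups)}
```

A bias audit in this style would then compare the per-group accuracies (or, more commonly, per-group false match/non-match rates at a fixed threshold) rather than a single aggregate number.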

Balancing Biases and Preserving Privacy on Balanced Faces in the Wild

This work mitigates imbalanced performance using a novel domain adaptation learning scheme applied to facial features extracted by state-of-the-art models; a benefit of the proposed approach is that it preserves identity information in the facial features while removing demographic knowledge from the lower-dimensional features.

Pre-training strategies and datasets for facial representation learning

A comprehensive evaluation benchmark for facial representation learning is presented, consisting of 5 important face analysis tasks, and two approaches to large-scale representation learning applied to faces are systematically investigated: supervised and unsupervised pre-training.

Person Perception Biases Exposed: Revisiting the First Impressions Dataset

This work revisits the ChaLearn First Impressions database, annotated for personality perception using pairwise comparisons via crowdsourcing, and reveals existing person perception biases associated with perceived attributes like gender, ethnicity, age and face attractiveness.

Racial Bias within Face Recognition: A Survey

The overall aim is to provide comprehensive coverage of the racial bias problem at each stage of the face recognition processing pipeline, while also highlighting the potential pitfalls and limitations of contemporary mitigation strategies that must be considered in future research endeavours and commercial applications alike.

Trustworthy AI: From Principles to Practices

This review provides AI practitioners with a comprehensive guide for building trustworthy AI systems and introduces the theoretical framework of important aspects of AI trustworthiness, including robustness, generalization, explainability, transparency, reproducibility, fairness, privacy preservation, and accountability.

Two-Face: Adversarial Audit of Commercial Face Recognition Systems

This work performs an extensive adversarial audit on multiple systems and datasets, making a number of concerning observations: among them, a drop in accuracy for some tasks on the CelebSET dataset since a previous audit.

Skin Deep: Investigating Subjectivity in Skin Tone Annotations for Computer Vision Benchmark Datasets

This work surveyed recent skin tone annotation procedures and conducted annotation experiments to examine how subjective understandings of skin tone are embedded in skin tone annotation procedures, and calls for greater reflexivity in the design, analysis, and documentation of procedures for evaluation using skin tone.

FLAC: Fairness-Aware Representation Learning by Suppressing Attribute-Class Associations

FLAC proposes a sampling strategy that highlights underrepresented samples in the dataset and casts the problem of learning fair representations as a probability-matching problem that leverages representations extracted by a bias-capturing classifier; it is theoretically shown that FLAC indeed leads to fair representations that are independent of the protected attributes.

Fairness von Biometrischen Systemen (Fairness of Biometric Systems)

  • Jascha Kolberg
  • Computer Science
    Datenschutz und Datensicherheit - DuD
  • 2023
Systems that use biometric technologies have become ubiquitous in personal, commercial, and governmental identity management applications. Both cooperative (e.g.

The Box Size Confidence Bias Harms Your Object Detector

The proposed algorithm is used to analyze a diverse set of object detection architectures and shows that the conditional confidence bias harms their performance by up to 0.6 mAP and 0.8 mAP50.

The MegaFace Benchmark: 1 Million Faces for Recognition at Scale

The MegaFace dataset is assembled for both identification and verification performance, and performance with respect to pose and a person's age is evaluated as a function of training data size (#photos and #people).

IARPA Janus Benchmark - C: Face Dataset and Protocol

The IARPA Janus Benchmark–C (IJB-C) face dataset advances the goal of robust unconstrained face recognition, improving upon the previous public domain IJB-B dataset, by increasing dataset size and variability, and by introducing end-to-end protocols that more closely model operational face recognition use cases.

Labeled Faces in the Wild: A Survey

A review of the contributions to LFW for which the authors have provided results to the curators, together with a review of the cross-cutting topic of alignment and how it is used in various methods.

Face Recognition Algorithm Bias: Performance Differences on Images of Children and Adults

This work identifies the best score-level fusion technique for the child demographic and shows a negative bias for each algorithm on children, further supporting the need for a deeper investigation into algorithm bias as a function of age cohorts.
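The abstract above mentions score-level fusion without detailing the scheme. A minimal sketch of one common variant (min-max normalisation of each algorithm's match scores followed by a weighted sum) is shown below; all names are hypothetical and this is not claimed to be the paper's best-performing fusion technique:

```python
import numpy as np

def minmax_normalize(scores):
    """Rescale one algorithm's match scores to [0, 1] so that scores
    from different algorithms are comparable before fusion."""
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo) if hi > lo else np.zeros_like(scores)

def fuse_scores(score_lists, weights=None):
    """Score-level fusion: weighted sum of normalised match scores
    from several face recognition algorithms (equal weights by default)."""
    mats = np.stack([minmax_normalize(np.asarray(s, dtype=float))
                     for s in score_lists])
    if weights is None:
        weights = np.full(len(mats), 1.0 / len(mats))
    return np.tensordot(np.asarray(weights, dtype=float), mats, axes=1)
```

In a bias study such as the one summarized above, the fused scores would then be thresholded separately per age cohort to compare error rates across demographics.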

Diversity in Faces

Diversity in Faces (DiF) provides a data set of one million annotated human face images for advancing the study of facial diversity; the authors believe that making the extracted coding schemes available on a large set of faces can accelerate research and development towards creating more fair and accurate facial recognition systems.

VGGFace2: A Dataset for Recognising Faces across Pose and Age

A new large-scale face dataset named VGGFace2 is introduced, which contains 3.31 million images of 9131 subjects, with an average of 362.6 images for each subject, and the automated and manual filtering stages to ensure a high accuracy for the images of each identity are described.

Exploring Racial Bias within Face Recognition via per-subject Adversarially-Enabled Data Augmentation

A novel adversarially-derived data augmentation methodology is proposed that aims to enable dataset balance at a per-subject level via the use of image-to-image transformation for the transfer of sensitive racial-characteristic facial features.

Analyzing and Reducing the Damage of Dataset Bias to Face Recognition With Synthetic Data

This study demonstrates the large potential of synthetic data for analyzing and reducing the negative effects of dataset bias on deep face recognition systems and shows that current neural network architectures cannot disentangle face pose and facial identity, which limits their generalization ability.

Accuracy Comparison Across Face Recognition Algorithms: Where Are We on Measuring Race Bias?

It is concluded that race bias needs to be measured for individual applications and a checklist for measuring this bias in face recognition algorithms is provided.