Corpus ID: 238531802

FairCal: Fairness Calibration for Face Verification

Tiago Salvador, Stephanie Cairns, Vikram S. Voleti, Noah Marshall, Adam M. Oberman
Despite being widely used, face recognition models suffer from bias: the probability of a false positive (an incorrect face match) depends strongly on sensitive attributes such as the ethnicity of the face. As a result, these models can disproportionately and negatively impact minority groups, particularly when used by law enforcement. Most bias-reduction methods share several drawbacks: they rely on end-to-end retraining, may not be feasible due to privacy issues, and often… 


VGGFace2: A Dataset for Recognising Faces across Pose and Age
A new large-scale face dataset named VGGFace2 is introduced, containing 3.31 million images of 9131 subjects, with an average of 362.6 images per subject. The automated and manual filtering stages used to ensure high accuracy for each identity's images are also described.
MS-Celeb-1M: A Dataset and Benchmark for Large-Scale Face Recognition
A benchmark task to recognize one million celebrities from their face images, using all face images of each individual that can be collected from the web as training data, which could lead to one of the largest classification problems in computer vision.
PASS: Protected Attribute Suppression System for Mitigating Bias in Face Recognition
This work presents a descriptor-based adversarial de-biasing approach called ‘Protected Attribute Suppression System (PASS)’ and shows its efficacy at reducing gender and skin-tone information in descriptors from state-of-the-art face recognition networks such as ArcFace.
Calibration of Neural Networks using Splines
This work introduces a binning-free calibration measure inspired by the classical Kolmogorov-Smirnov (KS) statistical test; the main idea is to compare the respective cumulative probability distributions.
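A KS-style, binning-free calibration error can be sketched as follows: sort samples by predicted confidence and take the largest gap between the cumulative predicted probability and the cumulative empirical accuracy. The function name `ks_calibration_error` is my own, and this is only the measure, not the spline recalibration the paper also proposes.

```python
import numpy as np

def ks_calibration_error(confidences, correct):
    """Binning-free KS-style calibration error: the maximum gap between
    the cumulative predicted probability and the cumulative accuracy,
    with samples sorted by confidence."""
    conf = np.asarray(confidences, dtype=float)
    acc = np.asarray(correct, dtype=float)
    order = np.argsort(conf)
    conf, acc = conf[order], acc[order]
    n = len(conf)
    cum_conf = np.cumsum(conf) / n   # cumulative predicted probability
    cum_acc = np.cumsum(acc) / n     # cumulative observed accuracy
    return float(np.max(np.abs(cum_conf - cum_acc)))
```

A perfectly calibrated predictor drives the two cumulative curves together, so the error approaches zero; any systematic over- or under-confidence shows up as a persistent gap.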
An adversarial learning algorithm for mitigating gender bias in face recognition
This work presents a novel approach called ‘Adversarial Gender De-biasing (AGD)’ that reduces the strength of gender information in face recognition features by introducing a bias-reducing classification loss, and shows that AGD significantly reduces bias while achieving reasonable recognition performance.
Face Recognition: Too Bias, or Not Too Bias?
A human evaluation measuring bias in humans is conducted, which supports the hypothesis that an analogous bias exists in human perception.
Analyzing and Reducing the Damage of Dataset Bias to Face Recognition With Synthetic Data
This study demonstrates the large potential of synthetic data for analyzing and reducing the negative effects of dataset bias on deep face recognition systems and shows that current neural network architectures cannot disentangle face pose and facial identity, which limits their generalization ability.
On Calibration of Modern Neural Networks
It is discovered that modern neural networks, unlike those from a decade ago, are poorly calibrated, and on most datasets, temperature scaling -- a single-parameter variant of Platt Scaling -- is surprisingly effective at calibrating predictions.
Beta calibration: a well-founded and easily implemented improvement on logistic calibration for binary classifiers
The method is derived from first principles, and fitting it is shown to be as easy as fitting a logistic curve; extensive experiments show that beta calibration is superior to logistic calibration for Naive Bayes and AdaBoost.
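A minimal numpy sketch of the beta calibration map, assuming the standard parameterization μ(s) = σ(a·ln s − b·ln(1−s) + c), fitted as a logistic regression on the two features ln s and −ln(1−s) via plain gradient descent. The full method also constrains a, b ≥ 0, which this sketch omits; function names are my own.

```python
import numpy as np

def beta_calibrate(s, a, b, c):
    """Beta calibration map mu(s) = sigmoid(a*ln(s) - b*ln(1-s) + c).
    With a = b = 1, c = 0 it is the identity map."""
    s = np.clip(s, 1e-6, 1 - 1e-6)
    return 1.0 / (1.0 + np.exp(-(a * np.log(s) - b * np.log(1 - s) + c)))

def beta_calibrate_fit(scores, labels, lr=0.1, steps=2000):
    """Fit (a, b, c) by logistic regression on [ln s, -ln(1-s)]."""
    s = np.clip(np.asarray(scores, float), 1e-6, 1 - 1e-6)
    X = np.column_stack([np.log(s), -np.log(1 - s)])
    y = np.asarray(labels, float)
    w, c = np.zeros(2), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + c)))
        g = p - y                        # gradient of the log-loss
        w -= lr * (X.T @ g) / len(y)
        c -= lr * g.mean()
    return w[0], w[1], c  # a, b, c
```

The extra degree of freedom over logistic (Platt) calibration is what lets the map fix scores that are systematically pushed toward 0 or 1, as with Naive Bayes.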
Obtaining calibrated probability estimates from decision trees and naive Bayesian classifiers
It is concluded that binning succeeds in significantly improving naive Bayesian probability estimates, while for improving decision tree probability estimates the authors recommend smoothing by m-estimation and a new variant of pruning called curtailment.
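The binning approach above can be sketched with equal-width bins: on a calibration set, each score is mapped to the empirical positive rate of its bin. This is a simplification (the helper `histogram_binning` and the equal-width choice are mine, not the paper's exact procedure).

```python
import numpy as np

def histogram_binning(scores, labels, n_bins=10):
    """Fit histogram binning on a calibration set and return a
    function that maps raw scores to the empirical positive rate
    of the bin each score falls into."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(scores, edges) - 1, 0, n_bins - 1)
    rate = np.empty(n_bins)
    for b in range(n_bins):
        mask = idx == b
        # fall back to the bin midpoint for bins with no calibration data
        rate[b] = labels[mask].mean() if mask.any() else (edges[b] + edges[b + 1]) / 2
    def calibrate(s):
        j = np.clip(np.digitize(np.asarray(s, float), edges) - 1, 0, n_bins - 1)
        return rate[j]
    return calibrate
```

Binning is non-parametric, so it can repair arbitrarily distorted score distributions, at the cost of needing enough calibration data per bin.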