Balanced Datasets Are Not Enough: Estimating and Mitigating Gender Bias in Deep Image Representations
@article{Wang2018BalancedDA, title={Balanced Datasets Are Not Enough: Estimating and Mitigating Gender Bias in Deep Image Representations}, author={Tianlu Wang and Jieyu Zhao and Mark Yatskar and Kai-Wei Chang and Vicente Ordonez}, journal={2019 IEEE/CVF International Conference on Computer Vision (ICCV)}, year={2019}, pages={5309-5318} }
In this work, we present a framework to measure and mitigate intrinsic biases with respect to protected variables, such as gender, in visual recognition tasks. To mitigate these biases, we adopt an adversarial approach to remove unwanted features corresponding to protected variables from intermediate representations in a deep neural network, and provide a detailed analysis of its effectiveness. Experiments on two datasets, the COCO dataset (objects) and the imSitu dataset (actions), show…
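The adversarial approach described in the abstract can be sketched as follows: an encoder produces an intermediate representation, a task head predicts the label, and an adversary tries to predict the protected attribute from that representation; the encoder descends the task gradient while ascending (i.e., receiving the negated) adversary gradient, which discourages the representation from encoding the attribute. This is a minimal numpy illustration of that general idea, not the authors' actual architecture; the toy data, layer sizes, and the weight `lam` on the reversed gradient are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy data: feature 0 drives the task label, feature 1 is
# correlated with the (hypothetical) protected attribute.
n, d, h = 512, 4, 8
X = rng.normal(size=(n, d))
y = (X[:, 0] > 0).astype(float)   # task label
g = (X[:, 1] > 0).astype(float)   # protected attribute

W_enc = rng.normal(scale=0.1, size=(d, h))   # linear encoder
w_task = rng.normal(scale=0.1, size=h)       # task head
w_adv = rng.normal(scale=0.1, size=h)        # adversary head

lr, lam = 0.1, 1.0   # lam scales the reversed adversary gradient
for _ in range(300):
    z = X @ W_enc
    d_task = (sigmoid(z @ w_task) - y) / n   # BCE gradient w.r.t. task logits
    d_adv = (sigmoid(z @ w_adv) - g) / n     # BCE gradient w.r.t. adversary logits

    # Both heads descend their own loss; the adversary learns to predict g.
    grad_w_task = z.T @ d_task
    grad_w_adv = z.T @ d_adv

    # Encoder update: task gradient MINUS adversary gradient (reversal),
    # so the representation is pushed to be uninformative about g.
    grad_z = np.outer(d_task, w_task) - lam * np.outer(d_adv, w_adv)
    W_enc -= lr * (X.T @ grad_z)
    w_task -= lr * grad_w_task
    w_adv -= lr * grad_w_adv

z = X @ W_enc
task_acc = np.mean((sigmoid(z @ w_task) > 0.5) == y)
adv_acc = np.mean((sigmoid(z @ w_adv) > 0.5) == g)
print(f"task accuracy: {task_acc:.2f}, adversary accuracy: {adv_acc:.2f}")
```

In practice (e.g., with the deep networks studied in the paper) the same reversal is applied at a chosen intermediate layer via a gradient-reversal operation during backpropagation, and the adversary is retrained alongside the main model.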
247 Citations
Matched sample selection with GANs for mitigating attribute confounding
- Computer Science, ArXiv
- 2021
This work proposes a matching approach that selects a subset of images from the full dataset with balanced attribute distributions across protected attributes. It demonstrates the approach in the context of gender bias in multiple open-source facial-recognition classifiers and finds that bias persists even after removing key confounders via matching.
Towards Fairness in Visual Recognition: Effective Strategies for Bias Mitigation
- Computer Science, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2020
This work presents a simple but surprisingly effective visual recognition benchmark for studying bias mitigation, a similarly simple and effective alternative to the inference-time Reducing Bias Amplification method of Zhao et al., and a domain-independent training technique that outperforms all other methods.
Mitigating Gender Bias in Face Recognition using the von Mises-Fisher Mixture Model
- Computer Science, ICML
- 2022
This work investigates the gender bias of deep face recognition networks through a new post-processing methodology that transforms the deep embeddings of a pre-trained model to give more representation power to discriminated subgroups, and empirically observes that the method's hyperparameters are correlated with fairness metrics.
Gender Artifacts in Visual Datasets
- Computer Science, ArXiv
- 2022
It is claimed that attempts to remove gender artifacts from large-scale visual datasets are largely infeasible; the responsibility therefore lies with researchers and practitioners to be aware that the distribution of images within datasets is highly gendered, and to develop methods that are robust to these distributional shifts across groups.
Fair Attribute Classification through Latent Space De-biasing
- Computer Science, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2021
This work uses GANs to generate realistic-looking images and perturbs them in the underlying latent space to produce training data that is balanced for each protected attribute; it empirically demonstrates that target classifiers trained on the augmented dataset exhibit a number of both quantitative and qualitative benefits.
Information-Theoretic Bias Assessment Of Learned Representations Of Pretrained Face Recognition
- Computer Science, 2021 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2021)
- 2021
This work proposes an information-theoretic, independent bias assessment metric that identifies the degree of bias against protected demographic attributes in the learned representations of pretrained facial recognition systems, and establishes it as a benchmark metric.
Balancing Biases and Preserving Privacy on Balanced Faces in the Wild
- Computer Science, ArXiv
- 2021
This work mitigates imbalanced performance using a novel domain adaptation learning scheme applied to facial features extracted by a state-of-the-art model; a benefit of the proposed approach is that it preserves identity information in the facial features while removing demographic knowledge from the lower-dimensional features.
Through a fair looking-glass: mitigating bias in image datasets
- Computer Science, ArXiv
- 2022
This study presents a fast and effective model to de-bias an image dataset through reconstruction and minimizing the statistical dependence between intended variables, and achieves a promising fairness-accuracy combination.
Feature and Label Embedding Spaces Matter in Addressing Image Classifier Bias
- Computer Science, BMVC
- 2021
Evaluated on biased image datasets for multi-class, multi-label, and binary classification, this work shows the effectiveness of tackling both feature and label embedding spaces in improving the fairness of classifier predictions while preserving classification performance.
Unravelling the Effect of Image Distortions for Biased Prediction of Pre-trained Face Recognition Models
- Computer Science, 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW)
- 2021
A systematic analysis evaluates the performance of four state-of-the-art deep face recognition models in the presence of image distortions across different gender and race subgroups, finding that image distortions are related to the model's performance gap across subgroups.
References
Showing 1-10 of 49 references
Adversarial Removal of Demographic Attributes from Text Data
- Computer Science, EMNLP
- 2018
It is shown that demographic information of authors is encoded in—and can be recovered from—the intermediate representations learned by text-based neural classifiers, and the implication is that decisions of classifiers trained on textual data are not agnostic to—and likely condition on—demographic attributes.
ConvNets and ImageNet Beyond Accuracy: Explanations, Bias Detection, Adversarial Examples and Model Criticism
- Computer Science, ArXiv
- 2017
It is shown that explanations can mitigate the impact of misclassified adversarial examples from the perspective of the end-user and a novel tool for uncovering the undesirable biases learned by a model is introduced.
Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints
- Computer Science, EMNLP
- 2017
This work proposes to inject corpus-level constraints for calibrating existing structured prediction models and design an algorithm based on Lagrangian relaxation for collective inference to reduce the magnitude of bias amplification in multilabel object classification and visual semantic role labeling.
Improving Smiling Detection with Race and Gender Diversity
- Computer Science, ArXiv
- 2017
This research demonstrates the utility of modeling race and gender to improve a face attribute detection task, using a twofold transfer learning framework that allows for privacy towards individuals in a target dataset.
Undoing the Damage of Dataset Bias
- Computer Science, ECCV
- 2012
Overall, this work finds that it is beneficial to explicitly account for bias when combining multiple datasets, and proposes a discriminative framework that directly exploits dataset bias during training.
Mitigating Unwanted Biases with Adversarial Learning
- Computer Science, AIES
- 2018
This work presents a framework for mitigating biases concerning demographic groups by including a variable Z for the group of interest and simultaneously learning a predictor and an adversary, which results in accurate predictions that exhibit less evidence of stereotyping with respect to Z.
Women also Snowboard: Overcoming Bias in Captioning Models
- Computer Science, ECCV
- 2018
A new Equalizer model is introduced that ensures equal gender probability when gender evidence is occluded in a scene and confident predictions when gender evidence is present; it has lower error than prior work when describing images with people and mentioning their gender, and more closely matches the ground-truth ratio of sentences mentioning women to sentences mentioning men.
Towards Privacy-Preserving Visual Recognition via Adversarial Training: A Pilot Study
- Computer Science, ECCV
- 2018
This paper aims to improve privacy-preserving visual recognition, an increasingly demanded feature in smart camera applications, by formulating a unique adversarial training framework. The proposed…
Controllable Invariance through Adversarial Feature Learning
- Computer Science, NIPS
- 2017
This paper shows that the proposed framework induces an invariant representation, and leads to better generalization evidenced by the improved performance on three benchmark tasks.
Exploring Disentangled Feature Representation Beyond Face Identification
- Computer Science, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
- 2018
Comprehensive evaluations demonstrate that the proposed features not only preserve state-of-the-art identity verification performance on LFW, but also acquire comparable discriminative power for face attribute recognition on CelebA and LFWA.