Balanced Datasets Are Not Enough: Estimating and Mitigating Gender Bias in Deep Image Representations

@inproceedings{Wang2019BalancedDA,
  title={Balanced Datasets Are Not Enough: Estimating and Mitigating Gender Bias in Deep Image Representations},
  author={Tianlu Wang and Jieyu Zhao and Mark Yatskar and Kai-Wei Chang and Vicente Ordonez},
  booktitle={2019 IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2019},
  pages={5309-5318}
}
In this work, we present a framework to measure and mitigate intrinsic biases with respect to protected variables, such as gender, in visual recognition tasks. [...] To mitigate this, we adopt an adversarial approach to remove unwanted features corresponding to protected variables from intermediate representations in a deep neural network, and provide a detailed analysis of its effectiveness. Experiments on two datasets, the COCO dataset (objects) and the imSitu dataset (actions), show…
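The measurement side of such a framework is easiest to see with a leakage probe: if a classifier can recover the protected attribute from a representation, the representation is biased. Below is a minimal numpy sketch on synthetic data. The logistic probe and the mean-difference projection are deliberately simplified stand-ins for the paper's learned classifiers and adversarial training, and all names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 8
gender = rng.integers(0, 2, n).astype(float)  # synthetic protected attribute
reps = rng.normal(size=(n, d))
reps[:, 0] += 2.0 * gender                    # dimension 0 leaks gender

def probe_accuracy(z, a, steps=500, lr=0.1):
    """Train a logistic-regression probe to predict `a` from `z`.
    Accuracy well above 0.5 means the representation leaks `a`."""
    w, b = np.zeros(z.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(z @ w + b)))
        g = p - a                              # gradient of log-loss w.r.t. logits
        w -= lr * z.T @ g / len(a)
        b -= lr * g.mean()
    p = 1.0 / (1.0 + np.exp(-(z @ w + b)))
    return ((p > 0.5) == a).mean()

leaky = probe_accuracy(reps, gender)           # high: gender is recoverable

# Crude linear mitigation (not the paper's adversarial method):
# project out the direction separating the two groups' means.
v = reps[gender == 1].mean(0) - reps[gender == 0].mean(0)
v /= np.linalg.norm(v)
cleaned = reps - np.outer(reps @ v, v)
clean = probe_accuracy(cleaned, gender)        # near chance after projection
```

The probe is trained and evaluated on the same data, which is fine for a sketch; a real audit would use held-out data, and the paper's adversarial training removes the attribute during representation learning rather than post hoc.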
Balancing Biases and Preserving Privacy on Balanced Faces in the Wild
TLDR
There are demographic biases in the SOTA CNNs used for face recognition (FR); these are mitigated using a novel domain adaptation learning scheme on facial encodings extracted with SOTA deep nets, preserving identity information in the facial features while removing demographic knowledge from the lower-dimensional features.
Matched sample selection with GANs for mitigating attribute confounding
TLDR
This work proposes a matching approach that selects a subset of images from the full dataset with balanced attribute distributions across protected attributes, demonstrates it in the context of gender bias in multiple open-source facial-recognition classifiers, and finds that bias persists even after key confounders are removed via matching.
Towards Fairness in Visual Recognition: Effective Strategies for Bias Mitigation
TLDR
A simple but surprisingly effective visual recognition benchmark for studying bias mitigation is introduced, along with a simple yet similarly effective alternative to the inference-time Reducing Bias Amplification method of Zhao et al., and a domain-independent training technique is designed that outperforms all other methods.
Information-Theoretic Bias Assessment Of Learned Representations Of Pretrained Face Recognition
TLDR
This work proposes an information-theoretic, independent bias assessment metric to identify the degree of bias against protected demographic attributes in the learned representations of pretrained facial recognition systems, and establishes a benchmark metric.
An adversarial learning algorithm for mitigating gender bias in face recognition
TLDR
This work presents a novel approach called 'Adversarial Gender De-biasing (AGD)' to reduce the strength of gender information in face recognition features by introducing a bias-reducing classification loss, and shows that AGD significantly reduces bias while achieving reasonable recognition performance.
Unravelling the Effect of Image Distortions for Biased Prediction of Pre-trained Face Recognition Models
TLDR
A systematic analysis evaluates the performance of four state-of-the-art deep face recognition models in the presence of image distortions across different gender and race subgroups, finding that image distortions are related to the performance gap of the model across subgroups.
Contrastive Learning for Fair Representations
TLDR
This paper proposes a method for mitigating bias in classifier training by incorporating contrastive learning, in which instances sharing the same class label are encouraged to have similar representations, while instances sharing a protected attribute are pushed further apart.
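The two-sided objective described above can be sketched as a single loss term: squared distances between same-class pairs are minimized, while same-protected-attribute pairs are pushed apart with a hinge margin. This is a toy numpy version of the idea, not the paper's actual loss; the function name and margin are illustrative.

```python
import numpy as np

def fair_contrastive_loss(z, y, a, margin=1.0):
    """Toy fairness-aware contrastive loss.
    z: (n, d) representations; y: class labels; a: protected attribute.
    Assumes at least one same-class pair and one same-attribute pair exist."""
    n = len(y)
    # pairwise squared distances, each unordered pair counted once
    d2 = ((z[:, None, :] - z[None, :, :]) ** 2).sum(-1)
    iu = np.triu_indices(n, k=1)
    d2 = d2[iu]
    same_y = (y[:, None] == y[None, :])[iu]
    same_a = (a[:, None] == a[None, :])[iu]
    attract = d2[same_y].mean()                          # pull same class together
    repel = np.maximum(0.0, margin - d2[same_a]).mean()  # push same attribute apart
    return attract + repel
```

An embedding that collapses each class to a point while keeping same-attribute instances at least `margin` apart in squared distance drives this loss to zero; in practice the term would be combined with a task loss and optimized by gradient descent.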
Exposing and Correcting the Gender Bias in Image Captioning Datasets and Models
TLDR
This work investigates gender bias in the COCO captioning dataset and shows that it arises not only from the statistical distribution of genders across contexts but also from flawed annotations by the human annotators.
Understanding and Mitigating Annotation Bias in Facial Expression Recognition
TLDR
An AU-Calibrated Facial Expression Recognition (AUCFER) framework is proposed that utilizes facial action units (AUs) and incorporates the triplet loss into the objective function; experimental results suggest that the proposed method is more effective at removing expression annotation bias than existing techniques.
Deep fair models for complex data: Graphs labeling and explainable face recognition
TLDR
This work measures fairness according to Demographic Parity, which requires the model's decisions to be independent of the sensitive information, and investigates how to impose this constraint in the different layers of deep neural networks for complex data, with particular reference to deep networks for graph labeling and face recognition.

References

Showing 1-10 of 50 references
Adversarial Removal of Demographic Attributes from Text Data
TLDR
It is shown that demographic information about authors is encoded in, and can be recovered from, the intermediate representations learned by text-based neural classifiers; the implication is that decisions of classifiers trained on textual data are not agnostic to, and likely condition on, demographic attributes.
ConvNets and ImageNet Beyond Accuracy: Explanations, Bias Detection, Adversarial Examples and Model Criticism
TLDR
It is shown that explanations can mitigate the impact of misclassified adversarial examples from the perspective of the end user, and a novel tool is introduced for uncovering the undesirable biases learned by a model.
Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints
TLDR
This work proposes to inject corpus-level constraints to calibrate existing structured prediction models and designs an algorithm based on Lagrangian relaxation for collective inference, reducing the magnitude of bias amplification in multilabel object classification and visual semantic role labeling.
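A corpus-level constraint means the model's aggregate predictions must match a target statistic over the whole test corpus, not just be correct per instance. As a much-simplified stand-in for the paper's Lagrangian-relaxation inference, the sketch below finds, by bisection, a single score offset that makes the fraction of positive predictions match a target ratio; the function and parameter names are illustrative.

```python
import numpy as np

def calibrate_ratio(scores, target_ratio, tol=1e-6):
    """Find an offset `lam` (playing the role of a Lagrange multiplier)
    such that thresholding `scores` at `lam` makes the fraction of
    positive predictions approximately `target_ratio`."""
    lo, hi = scores.min() - 1.0, scores.max() + 1.0
    while hi - lo > tol:
        lam = (lo + hi) / 2.0
        if (scores > lam).mean() > target_ratio:
            lo = lam   # too many positives: raise the threshold
        else:
            hi = lam   # too few positives: lower the threshold
    return scores > (lo + hi) / 2.0
```

The paper's setting is harder: the constraint couples structured outputs across the corpus, so the multiplier enters each instance's inference problem and is updated iteratively rather than found by one scalar bisection.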
Improving Smiling Detection with Race and Gender Diversity
TLDR
This research demonstrates the utility of modeling race and gender to improve a face-attribute detection task, using a twofold transfer learning framework that preserves the privacy of individuals in a target dataset.
Undoing the Damage of Dataset Bias
TLDR
Overall, this work finds that it is beneficial to explicitly account for bias when combining multiple datasets, and proposes a discriminative framework that directly exploits dataset bias during training.
Mitigating Unwanted Biases with Adversarial Learning
TLDR
This work presents a framework for mitigating biases concerning demographic groups by including a variable for the group of interest and simultaneously learning a predictor and an adversary, which results in accurate predictions that exhibit less evidence of stereotyping on the protected variable Z.
Women also Snowboard: Overcoming Bias in Captioning Models
TLDR
A new Equalizer model is introduced that ensures equal gender probability when gender evidence is occluded in a scene and confident predictions when gender evidence is present; it has lower error than prior work when describing images with people and mentioning their gender, and more closely matches the ground-truth ratio of sentences mentioning women to sentences mentioning men.
Towards Privacy-Preserving Visual Recognition via Adversarial Training: A Pilot Study
This paper aims to improve privacy-preserving visual recognition, an increasingly demanded feature in smart-camera applications, by formulating a unique adversarial training framework. The proposed…
Controllable Invariance through Adversarial Feature Learning
TLDR
This paper shows that the proposed framework induces an invariant representation and leads to better generalization, evidenced by improved performance on three benchmark tasks.
Training with the Invisibles: Obfuscating Images to Share Safely for Learning Visual Recognition Models
TLDR
This work proposes to obfuscate images so that humans cannot recognize their detailed contents while machines can still use them to train new models, in order to promote sharing visual data for learning recognition models.