Towards Fairness in Visual Recognition: Effective Strategies for Bias Mitigation

@inproceedings{Wang2020TowardsFI,
  title={Towards Fairness in Visual Recognition: Effective Strategies for Bias Mitigation},
  author={Zeyu Wang and Klint Qinami and Yannis Karakozis and Kyle Genova and P. Nair and Kenji Hata and Olga Russakovsky},
  booktitle={2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2020},
  pages={8916--8925}
}
Abstract: Computer vision models learn to perform a task by capturing relevant statistics from training data. It has been shown that models learn spurious age, gender, and race correlations when trained for seemingly unrelated tasks like activity recognition or image captioning. Various mitigation techniques have been presented to prevent models from utilizing or learning such biases. However, there has been little systematic comparison between these techniques. We design a simple but surprisingly…
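One common family of mitigation strategies the abstract alludes to is rebalancing the training signal across (target label, protected attribute) groups so that no group dominates the loss. As a minimal, hypothetical sketch (not the paper's proposed method), the inverse-frequency weight for each sample's group can be computed like this:

```python
from collections import Counter

def group_weights(labels, attrs):
    """Inverse-frequency weight for each (label, attribute) group.

    Each sample gets weight N / (K * n_g), where N is the dataset size,
    K is the number of observed groups, and n_g is the size of the
    sample's group, so every group contributes equally (N / K total
    weight) to a weighted training loss.
    """
    groups = list(zip(labels, attrs))
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    return [n / (k * counts[g]) for g in groups]
```

These weights would typically be passed to a weighted loss or a weighted sampler during training; the function name and interface here are illustrative, not taken from the paper.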