Adversarial Removal of Gender from Deep Image Representations

@article{Wang2018AdversarialRO,
  title={Adversarial Removal of Gender from Deep Image Representations},
  author={Tianlu Wang and Jieyu Zhao and Mark Yatskar and Kai-Wei Chang and Vicente Ordonez},
  journal={ArXiv},
  year={2018},
  volume={abs/1811.08489}
}
In this work, we analyze visual recognition tasks such as object and action recognition, and demonstrate the extent to which these tasks are correlated with features corresponding to a protected variable such as gender. We introduce the concept of natural leakage to measure the intrinsic reliance of a task on a protected variable. We further show that machine learning models of visual recognition trained for these tasks tend to exacerbate the reliance on gender features. To address this, we use…
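The abstract is truncated above, but as a rough illustration of the adversarial-removal idea it describes, the sketch below trains a shared image backbone jointly with a task head and an adversarial gender head attached through a gradient-reversal layer, so the backbone is pushed to keep task-relevant information while making gender hard to recover. This is a minimal sketch of one common way to realize such an adversary, not the authors' exact architecture or training recipe; the toy linear backbone, layer sizes, lambda value, and random batch are assumptions for illustration only.

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    # Identity on the forward pass; reverses (and scales) gradients on the backward pass.
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

class AdversariallyDebiasedModel(nn.Module):
    # A shared feature extractor feeds both a task head and an adversarial gender head.
    # The gender head sees the features through gradient reversal, so minimizing its loss
    # pushes the extractor to remove gender information while staying useful for the task.
    def __init__(self, feat_dim=512, num_classes=80):
        super().__init__()
        self.backbone = nn.Sequential(            # toy stand-in for a CNN backbone
            nn.Flatten(), nn.Linear(3 * 64 * 64, feat_dim), nn.ReLU()
        )
        self.task_head = nn.Linear(feat_dim, num_classes)
        self.gender_head = nn.Linear(feat_dim, 2)

    def forward(self, x, lambd=1.0):
        feats = self.backbone(x)
        task_logits = self.task_head(feats)
        gender_logits = self.gender_head(grad_reverse(feats, lambd))
        return task_logits, gender_logits

# One training step on a toy batch: minimize the task loss while adversarially
# suppressing gender information in the shared representation.
model = AdversariallyDebiasedModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
images = torch.randn(8, 3, 64, 64)
task_labels = torch.randint(0, 80, (8,))
gender_labels = torch.randint(0, 2, (8,))

task_logits, gender_logits = model(images, lambd=1.0)
loss = (nn.functional.cross_entropy(task_logits, task_labels)
        + nn.functional.cross_entropy(gender_logits, gender_labels))
opt.zero_grad()
loss.backward()
opt.step()

Leakage in the sense used by the abstract could then be estimated by freezing the learned representations, training a separate classifier to predict gender from them, and comparing its accuracy with the level expected from the task labels alone; how the paper operationalizes this measurement is not shown in the truncated abstract.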

Citations

Publications citing this paper.

Don't Take the Easy Way Out: Ensemble Based Methods for Avoiding Known Dataset Biases

Christopher Clark, Mark Yatskar, Luke Zettlemoyer
  • IJCNLP 2019
