• Computer Science
  • Published on arXiv, 2019

Towards Fairness in Visual Recognition: Effective Strategies for Bias Mitigation

@article{Wang2019TowardsFI,
  title={Towards Fairness in Visual Recognition: Effective Strategies for Bias Mitigation},
  author={Zeyu Wang and Klint Qinami and Yannis Karakozis and Kyle Genova and Prem Nair and Kenji Hata and Olga Russakovsky},
  journal={ArXiv},
  year={2019},
  volume={abs/1911.11834}
}
Computer vision models learn to perform a task by capturing relevant statistics from training data. It has been shown that models learn spurious age, gender, and race correlations when trained for seemingly unrelated tasks like activity recognition or image captioning. Various mitigation techniques have been presented to prevent models from utilizing or learning such biases. However, there has been little systematic comparison between these techniques. We design a simple but surprisingly…
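The abstract refers to bias mitigation techniques only at a high level. For concreteness, below is a minimal, illustrative sketch of one common family of such techniques, adversarial debiasing with a gradient reversal layer: an auxiliary head is trained to predict a protected attribute (e.g., gender) from the backbone features, while the reversed gradient pushes the backbone toward attribute-invariant features. This is not the paper's specific method; the PyTorch/torchvision usage, the ResNet-18 backbone, and all names and hyperparameters here are assumptions made for illustration.

# Illustrative sketch only (not the paper's method): adversarial bias
# mitigation for a visual classifier, assuming PyTorch and torchvision.
import torch
import torch.nn as nn
import torchvision

class GradReverse(torch.autograd.Function):
    # Identity in the forward pass; flips and scales gradients in backward.
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse the gradient flowing into the backbone so it learns
        # features that are uninformative about the protected attribute.
        # No gradient is needed for lambd, hence the trailing None.
        return -ctx.lambd * grad_output, None

class DebiasedClassifier(nn.Module):
    def __init__(self, num_classes, num_attrs, lambd=1.0):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)  # assumes torchvision >= 0.13
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()
        self.backbone = backbone
        self.task_head = nn.Linear(feat_dim, num_classes)  # main recognition task
        self.attr_head = nn.Linear(feat_dim, num_attrs)    # adversary predicting the attribute
        self.lambd = lambd

    def forward(self, x):
        feats = self.backbone(x)
        task_logits = self.task_head(feats)
        # Gradients from the adversary are reversed before reaching the backbone.
        attr_logits = self.attr_head(GradReverse.apply(feats, self.lambd))
        return task_logits, attr_logits

def training_step(model, images, task_labels, attr_labels, optimizer):
    criterion = nn.CrossEntropyLoss()
    task_logits, attr_logits = model(images)
    # Both heads are trained jointly; the reversal makes the backbone
    # minimize the task loss while degrading the adversary's accuracy.
    loss = criterion(task_logits, task_labels) + criterion(attr_logits, attr_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

In this family of methods, the coefficient lambd controls how strongly the adversarial signal penalizes attribute-predictive features; sweeping it trades task accuracy against bias leakage, which is the kind of trade-off a systematic comparison of mitigation strategies would need to measure.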
