UnQovering Stereotyping Biases via Underspecified Questions
@inproceedings{Li2020UnQoveringSB, title={UnQovering Stereotyping Biases via Underspecified Questions}, author={Tao Li and Daniel Khashabi and Tushar Khot and Ashish Sabharwal and Vivek Srikumar}, booktitle={EMNLP}, year={2020} }
While language embeddings have been shown to have stereotyping biases, how these biases affect downstream question answering (QA) models remains unexplored. We present UNQOVER, a general framework to probe and quantify biases through underspecified questions. We show that a naive use of model scores can lead to incorrect bias estimates due to two forms of reasoning errors: positional dependence and question independence. We design a formalism that isolates the aforementioned errors. As case… Expand
Figures and Tables from this paper
2 Citations
Lawyers are Dishonest? Quantifying Representational Harms in Commonsense Knowledge Resources
- Computer Science
- ArXiv
- 2021
- PDF
What Will it Take to Fix Benchmarking in Natural Language Understanding?
- Computer Science
- ArXiv
- 2021
- 1
- PDF
References
SHOWING 1-10 OF 36 REFERENCES
StereoSet: Measuring stereotypical bias in pretrained language models
- Computer Science
- ArXiv
- 2020
- 32
- Highly Influential
- PDF
Assessing Social and Intersectional Biases in Contextualized Word Representations
- Computer Science, Mathematics
- NeurIPS
- 2019
- 33
- PDF
OSCaR: Orthogonal Subspace Correction and Rectification of Biases in Word Embeddings
- Computer Science
- ArXiv
- 2020
- 3
- PDF
Hurtful words: quantifying biases in clinical contextual word embeddings
- Computer Science, Mathematics
- CHIL
- 2020
- 17
- PDF
Identifying and Reducing Gender Bias in Word-Level Language Models
- Computer Science
- NAACL-HLT
- 2019
- 40
- PDF
Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings
- Computer Science, Mathematics
- NIPS
- 2016
- 1,020
- PDF
The Woman Worked as a Babysitter: On Biases in Language Generation
- Computer Science
- EMNLP/IJCNLP
- 2019
- 49
- PDF