UnQovering Stereotyping Biases via Underspecified Questions

Tao Li, Daniel Khashabi, Tushar Khot, Ashish Sabharwal, Vivek Srikumar
While language embeddings have been shown to have stereotyping biases, how these biases affect downstream question answering (QA) models remains unexplored. We present UNQOVER, a general framework to probe and quantify biases through underspecified questions. We show that a naive use of model scores can lead to incorrect bias estimates due to two forms of reasoning errors: positional dependence and question independence. We design a formalism that isolates the aforementioned errors. As case…