Corpus ID: 222141056

UnQovering Stereotyping Biases via Underspecified Questions

  • Tao Li, Daniel Khashabi, Tushar Khot, Ashish Sabharwal, V. Srikumar
  • Published 2020
  • Computer Science
  • ArXiv
  • Warning: This paper contains examples of stereotypes that are potentially offensive. While language embeddings have been shown to have stereotyping biases, how these biases affect downstream question answering (QA) models remains unexplored. We present UNQOVER, a general framework to probe and quantify biases through underspecified questions. We show that a naïve use of model scores can lead to incorrect bias estimates due to two forms of reasoning errors: positional dependence and question independence.
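The positional-dependence error mentioned in the abstract can be illustrated with a small sketch. This is not the paper's implementation; `dummy_qa_score` is a hypothetical stand-in for a QA model's answer score, deliberately built to prefer whichever subject appears first in the context. Averaging the score over both subject orderings makes such a purely positional preference contribute equally to both subjects, so it cancels when their scores are compared:

```python
def dummy_qa_score(context: str, answer: str) -> float:
    """Stand-in for a QA model's span score: a toy model that always
    prefers whichever subject is mentioned first in the context
    (pure positional bias, no stereotyping signal at all)."""
    first_subject = context.split()[0]
    return 0.9 if answer == first_subject else 0.1

def order_averaged_score(subj1: str, subj2: str, activity: str, answer: str) -> float:
    """Average the score for `answer` over both subject orderings of an
    underspecified context, so a positional preference affects both
    subjects symmetrically and cancels in any score comparison."""
    ctx_a = f"{subj1} and {subj2} {activity}."
    ctx_b = f"{subj2} and {subj1} {activity}."
    return 0.5 * (dummy_qa_score(ctx_a, answer) + dummy_qa_score(ctx_b, answer))

s1 = order_averaged_score("John", "Mary", "sat nearby", "John")
s2 = order_averaged_score("John", "Mary", "sat nearby", "Mary")
# For this purely positional toy model, both subjects receive the same
# averaged score (0.5), i.e. no spurious bias estimate remains.
print(s1, s2)
```

A naïve single-ordering score would report 0.9 vs. 0.1 here, misreading word order as bias.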

