Exploring BERT's Sensitivity to Lexical Cues using Tests from Semantic Priming

  • Kanishka Misra, A. Ettinger, Julia Taylor Rayz
  • Published in EMNLP 2020
  • Models trained to estimate word probabilities in context have become ubiquitous in natural language processing. How do these models use lexical cues in context to inform their word probabilities? To answer this question, we present a case study analyzing the pre-trained BERT model with tests informed by semantic priming. Using English lexical stimuli that show priming in humans, we find that BERT too shows "priming," predicting a word with greater probability when the context includes a related…
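The priming test the abstract describes amounts to a simple comparison: score the target word's probability when the context contains a related prime versus an unrelated one, and call a positive difference "priming." A minimal sketch of that comparison, with a toy hand-set probability table standing in for BERT's masked-word distribution (the word pairs and probability values here are hypothetical illustrations, not the paper's data):

```python
# Toy sketch of the priming comparison: a model "shows priming" when the
# target word is more probable after a related prime than after an
# unrelated one. A dictionary of hand-set probabilities stands in for
# BERT's masked-token distribution (values are hypothetical).

def target_probability(prime: str, target: str) -> float:
    """Stand-in for P(target | context containing `prime`)."""
    toy_scores = {
        ("doctor", "nurse"): 0.30,  # related prime boosts the target
        ("bread", "nurse"): 0.05,   # unrelated prime
    }
    return toy_scores.get((prime, target), 0.01)

def priming_effect(related: str, unrelated: str, target: str) -> float:
    """Difference in target probability between related and unrelated primes."""
    return target_probability(related, target) - target_probability(unrelated, target)

effect = priming_effect("doctor", "bread", "nurse")
print(f"priming effect for 'nurse': {effect:+.2f}")  # positive => "priming"
```

In the actual study, `target_probability` would come from a masked language model scoring the target in a fixed context; the comparison logic itself stays this simple.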