On Measuring Social Biases in Sentence Encoders
TLDR: The Word Embedding Association Test shows that GloVe and word2vec word embeddings exhibit human-like implicit biases based on gender, race, and other social constructs.
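The Word Embedding Association Test (WEAT) named in the TLDR above compares how strongly two sets of target words associate with two sets of attribute words. A minimal pure-Python sketch of its effect-size statistic, with toy 2-d vectors; the function names and vectors are illustrative, not from the paper:

```python
from math import sqrt
from statistics import mean, pstdev

def cos(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def assoc(w, A, B):
    """Differential association of word vector w with attribute sets A and B."""
    return mean(cos(w, a) for a in A) - mean(cos(w, b) for b in B)

def weat_effect_size(X, Y, A, B):
    """WEAT effect size: normalized difference in mean association
    between target sets X and Y, relative to attribute sets A and B."""
    sx = [assoc(x, A, B) for x in X]
    sy = [assoc(y, A, B) for y in Y]
    return (mean(sx) - mean(sy)) / pstdev(sx + sy)
```

With target words geometrically aligned to attribute set A, the effect size is positive; swapping the target sets flips its sign.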
Do Attention Heads in BERT Track Syntactic Dependencies?
TLDR: We investigate the extent to which individual attention heads in pretrained transformer language models, such as BERT and RoBERTa, implicitly capture syntactic dependency relations, and compare them to the ground-truth Universal Dependency trees.
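A common way to score an attention head against a gold dependency tree, as in the comparison the TLDR above describes, is to treat each token's most-attended-to token as its predicted syntactic head and measure accuracy. A minimal sketch under that assumption; the matrix and gold heads are toy values, not from the paper:

```python
def head_accuracy(attn, gold_heads):
    """Fraction of tokens whose argmax attention target equals the gold head.

    attn[i][j] is the attention weight from token i to token j;
    gold_heads[i] is the index of token i's head in the dependency tree.
    """
    correct = 0
    for i, row in enumerate(attn):
        pred = max(range(len(row)), key=row.__getitem__)
        correct += (pred == gold_heads[i])
    return correct / len(gold_heads)

# Toy 3-token attention matrix for one head.
attn = [[0.1, 0.8, 0.1],
        [0.3, 0.3, 0.4],
        [0.1, 0.7, 0.2]]
```

Scoring this toy head against gold heads [1, 2, 1] yields accuracy 1.0, since every row's argmax matches the gold index.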
Investigating BERT’s Knowledge of Language: Five Analysis Methods with NPIs
TLDR: We use a single linguistic phenomenon, negative polarity item (NPI) licensing, as a case study for our experiments.
Identifying and Reducing Gender Bias in Word-Level Language Models
TLDR: We propose a metric to measure gender bias in a text corpus and in the text generated by a recurrent neural network language model trained on that corpus, and a regularization loss term for the language model that minimizes the projection of encoder-trained embeddings onto an embedding subspace that encodes gender.
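The regularization term the TLDR above describes penalizes the component of word embeddings lying in a gender subspace. A minimal sketch for the one-dimensional case (a single gender direction); the function name, weight `lam`, and vectors are illustrative assumptions, not the paper's implementation:

```python
from math import sqrt

def gender_projection_penalty(embeddings, gender_dir, lam=1.0):
    """Sum of squared projections of each embedding onto the
    (normalized) gender direction, scaled by weight lam.

    Adding this to the training loss discourages embeddings from
    encoding the gender component.
    """
    norm = sqrt(sum(g * g for g in gender_dir))
    g = [x / norm for x in gender_dir]  # unit gender direction
    return lam * sum(
        sum(wi * gi for wi, gi in zip(w, g)) ** 2
        for w in embeddings
    )
```

Embeddings orthogonal to the gender direction incur zero penalty; embeddings aligned with it are penalized by their squared projection length.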
HoVer: A Dataset for Many-Hop Fact Extraction And Claim Verification
TLDR: We introduce HoVer (HOppy VERification), a dataset for many-hop evidence extraction and fact verification.