Hypothesis Only Baselines in Natural Language Inference
- Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, Benjamin Van Durme
- Computer Science · *SEM
- 2 May 2018
This approach, referred to as a hypothesis-only model, significantly outperforms a majority-class baseline across a number of NLI datasets, suggesting that statistical irregularities may allow a model to perform NLI on some datasets beyond what should be achievable without access to the context.
Gender Bias in Coreference Resolution
- Rachel Rudinger, Jason Naradowsky, Brian Leonard, Benjamin Van Durme
- Computer Science · NAACL
- 25 April 2018
A novel, Winograd schema-style set of minimal pair sentences that differ only by pronoun gender are introduced, and systematic gender bias in three publicly-available coreference resolution systems is evaluated and confirmed.
On Measuring Social Biases in Sentence Encoders
- Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, Rachel Rudinger
- Computer Science · NAACL
- 25 March 2019
The Word Embedding Association Test is extended to measure bias in sentence encoders, and mixed results are found, including suspicious patterns of sensitivity that suggest the test’s assumptions may not hold in general.
Collecting Diverse Natural Language Inference Problems for Sentence Representation Evaluation
We present a large-scale collection of diverse natural language inference (NLI) datasets that help provide insight into how well a sentence representation captures distinct types of reasoning. The…
Ordinal Common-sense Inference
This work describes a framework for extracting common-sense knowledge from corpora, which is then used to construct a dataset for this ordinal entailment task, and annotates subsets of previously established datasets via the ordinal annotation protocol in order to analyze the distinctions between those datasets and the newly constructed one.
Script Induction as Language Modeling
- Rachel Rudinger, Pushpendre Rastogi, Francis Ferraro, Benjamin Van Durme
- Computer Science · EMNLP
- 1 September 2015
It is argued that the narrative cloze can be productively reframed as a language modeling task, and by training a discriminative language model for this task, improvements of up to 27 percent over prior methods on standard narrative cloze metrics are attained.
Universal Decompositional Semantics on Universal Dependencies
A framework is presented for augmenting datasets from the Universal Dependencies project with Universal Decompositional Semantics, and results are described from annotating the English Universal Dependencies treebank, covering word senses, semantic roles, and event properties.
Semantic Proto-Roles
- D. Reisinger, Rachel Rudinger, Francis Ferraro, Craig Harman, Kyle Rawlins, Benjamin Van Durme
- Computer Science · TACL
- 11 August 2015
We present the first large-scale, corpus-based verification of Dowty’s seminal theory of proto-roles. Our results demonstrate both the need for and the feasibility of a property-based annotation…
Social Bias in Elicited Natural Language Inferences
- Rachel Rudinger, Chandler May, Benjamin Van Durme
- Psychology, Computer Science · EthNLP@EACL
- 1 April 2017
The human-elicitation protocol used to construct SNLI makes it prone to amplifying bias and stereotypical associations, which is demonstrated statistically and with qualitative examples.
Neural Models of Factuality
A substantial expansion of the It Happened portion of the Universal Decompositional Semantics dataset is presented, yielding the largest event factuality dataset to date.