Publications
Annotation Artifacts in Natural Language Inference Data
TLDR: Large-scale datasets for natural language inference are created by presenting crowd workers with a sentence (premise) and asking them to generate three new sentences (hypotheses): one the premise entails, one it contradicts, and one that is logically neutral with respect to it.
  • Citations: 367
  • Influence: 71
DyNet: The Dynamic Neural Network Toolkit
TLDR: We describe DyNet, a toolkit for implementing neural network models based on dynamic declaration of network structure; a minimal sketch of this style follows below.
  • Citations: 325
  • Influence: 27
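To make "dynamic declaration" concrete, here is a minimal sketch using DyNet's Python bindings: a fresh computation graph is declared for every example, so the network's structure can depend on the input. This is an illustrative toy, not the paper's code; the model shape and input values are assumptions.

    # Minimal sketch of dynamic declaration in DyNet (toy model; shapes
    # and inputs are illustrative, not from the paper).
    import dynet as dy

    pc = dy.ParameterCollection()
    W = pc.add_parameters((1, 4))  # 1x4 weight matrix
    b = pc.add_parameters((1,))    # bias

    def score(features):
        # A new computation graph is built on every call, so graph
        # structure can vary from example to example.
        dy.renew_cg()
        w = dy.parameter(W)          # load parameters into this graph
        bias = dy.parameter(b)
        x = dy.inputVector(features)
        return (w * x + bias).value()

    print(score([0.5, -1.0, 2.0, 0.0]))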
Don't Stop Pretraining: Adapt Language Models to Domains and Tasks
TLDR: We present a study across four domains (biomedical and computer science publications, news, and reviews) and eight classification tasks, showing that a second phase of pretraining in-domain (domain-adaptive pretraining) leads to performance gains under both high- and low-resource settings; a rough sketch follows below.
  • Citations: 135
  • Influence: 27
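As a rough sketch of what a second phase of in-domain pretraining can look like in practice (not the paper's own code), the following uses the Hugging Face transformers and datasets libraries; the checkpoint name, corpus path, and hyperparameters are placeholders.

    # Sketch of domain-adaptive pretraining: continue masked-LM training
    # on in-domain text before task fine-tuning. Names/paths are placeholders.
    from datasets import load_dataset
    from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    tok = AutoTokenizer.from_pretrained("roberta-base")
    model = AutoModelForMaskedLM.from_pretrained("roberta-base")

    # A plain-text file of in-domain documents (placeholder path).
    corpus = load_dataset("text", data_files={"train": "domain_corpus.txt"})
    corpus = corpus.map(
        lambda batch: tok(batch["text"], truncation=True, max_length=512),
        batched=True, remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="dapt-roberta", num_train_epochs=1),
        train_dataset=corpus["train"],
        data_collator=DataCollatorForLanguageModeling(tok, mlm_probability=0.15),
    )
    trainer.train()  # second pretraining phase; fine-tune on the task afterwards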
A Dependency Parser for Tweets
TLDR: We describe TWEEBOPARSER, a dependency parser for English tweets that achieves over 80% unlabeled attachment score on a new, high-quality test set.
  • Citations: 195
  • Influence: 26
Frame-Semantic Parsing with Softmax-Margin Segmental RNNs and a Syntactic Scaffold
TLDR: We present a new, efficient frame-semantic parser that labels semantic arguments to FrameNet predicates.
  • Citations: 59
  • Influence: 10
Syntactic Scaffolds for Semantic Structures
TLDR: We introduce the syntactic scaffold, an approach to incorporating syntactic information into semantic tasks.
  • Citations: 55
  • Influence: 9
Adversarial Filters of Dataset Biases
TLDR: We investigate one recently proposed approach, AFLite, which adversarially filters dataset biases as a means to mitigate the prevalent overestimation of machine performance; a simplified sketch of the procedure follows below.
  • Citations: 29
  • Influence: 6
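The adversarial filtering idea can be sketched as: repeatedly train simple classifiers on random splits of precomputed features, score each instance by how often it is classified correctly when held out, and drop the most predictable instances. This is a simplified reading of AFLite, not the paper's implementation; the split count, cutoff, and removal size below are illustrative.

    # Simplified adversarial-filtering loop in the spirit of AFLite.
    # X: precomputed feature vectors (n, d); y: labels (n,).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def adversarial_filter(X, y, target_size, n_splits=8, cutoff=0.75, k=100):
        idx = np.arange(len(y))  # indices of instances still kept
        while len(idx) > target_size:
            correct = np.zeros(len(idx))
            seen = np.zeros(len(idx))
            for _ in range(n_splits):
                perm = np.random.permutation(len(idx))
                train, held = perm[:len(idx) // 2], perm[len(idx) // 2:]
                clf = LogisticRegression(max_iter=1000)
                clf.fit(X[idx[train]], y[idx[train]])
                correct[held] += clf.predict(X[idx[held]]) == y[idx[held]]
                seen[held] += 1
            score = correct / np.maximum(seen, 1)  # per-instance predictability
            easy = [i for i in np.argsort(-score)[:k] if score[i] >= cutoff]
            if not easy:  # nothing predictable enough remains
                break
            idx = np.delete(idx, easy)  # drop the most biased instances
        return idx  # indices of the filtered, harder subset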
Greedy, Joint Syntactic-Semantic Parsing with Stack LSTMs
TLDR: We present a transition-based parser that jointly produces syntactic and semantic dependencies.
  • Citations: 55
  • Influence: 5
Learning Joint Semantic Parsers from Disjoint Data
TLDR: We present a new approach to learning semantic parsers from multiple datasets, even when the target semantic formalisms are drastically different, and the underlying corpora do not overlap.
  • Citations: 38
  • Influence: 5
Transfer Learning in Natural Language Processing
TLDR: We present an overview of modern transfer learning methods in NLP: how models are pre-trained, what information their learned representations capture, and examples and case studies of how these models can be integrated and adapted in downstream NLP tasks.
  • Citations: 76
  • Influence: 4