Annotation Artifacts in Natural Language Inference Data
TLDR: It is shown that a simple text categorization model can correctly classify the hypothesis alone in about 67% of SNLI and 53% of MultiNLI, and that specific linguistic phenomena such as negation and vagueness are highly correlated with certain inference classes.
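A minimal sketch of such a hypothesis-only baseline follows. The paper itself used a fastText classifier; the TF-IDF logistic regression here is a stand-in assumption, and the function name is hypothetical.

```python
# Hypothesis-only baseline: predict the NLI label from the hypothesis
# alone, never showing the model the premise. High accuracy here signals
# annotation artifacts. (Sketch; the paper used fastText, not this model.)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def hypothesis_only_accuracy(train_hyps, train_labels, test_hyps, test_labels):
    clf = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),  # word uni- and bigrams
        LogisticRegression(max_iter=1000),
    )
    clf.fit(train_hyps, train_labels)         # premises are never seen
    return clf.score(test_hyps, test_labels)  # compare to the majority-class baseline
```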
Don’t Stop Pretraining: Adapt Language Models to Domains and Tasks
TLDR: It is consistently found that multi-phase adaptive pretraining offers large gains in task performance, and it is shown that adapting to a task corpus augmented using simple data selection strategies is an effective alternative, especially when resources for domain-adaptive pretraining might be unavailable.
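A sketch of what one adaptation phase might look like with the Hugging Face transformers library; the model choice, the `domain_dataset` variable, and the hyperparameters are assumptions, not the paper's exact recipe.

```python
# Continued (domain-/task-adaptive) masked-LM pretraining of RoBERTa on
# unlabeled in-domain text, in the spirit of DAPT/TAPT.
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

# `domain_dataset` is a hypothetical pre-tokenized dataset of unlabeled
# in-domain text (for TAPT, the task's own training inputs).
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="adapted-lm", num_train_epochs=1),
    train_dataset=domain_dataset,
    data_collator=collator,
)
trainer.train()  # afterwards, fine-tune `model` on the end task as usual
```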
DyNet: The Dynamic Neural Network Toolkit
TLDR: DyNet is a toolkit for implementing neural network models based on dynamic declaration of network structure. It has an optimized C++ backend and a lightweight graph representation, and is designed to let users implement their models in a way that is idiomatic in their preferred programming language.
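A small sketch of dynamic declaration in DyNet's Python API: a fresh computation graph is built per example, so the network's structure can follow the input (here, sentence length). Dimensions are assumptions.

```python
import dynet as dy

pc = dy.ParameterCollection()
W = pc.add_parameters((64, 100))   # assumed hidden/input sizes
b = pc.add_parameters((64,))

def encode(word_vectors):
    """Encode a variable-length list of 100-dim word vectors."""
    dy.renew_cg()                  # new graph for this one example
    hs = [dy.tanh(W * dy.inputVector(v) + b) for v in word_vectors]
    return dy.esum(hs)             # graph shape depends on the input
```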
A Dependency Parser for Tweets
TLDR: TWEEBOPARSER, a new dependency parser for English tweets, builds on several contributions: new syntactic annotations for a corpus of tweets, with conventions informed by the domain; adaptations to a statistical parsing algorithm; and a new approach to exploiting out-of-domain Penn Treebank data.
Frame-Semantic Parsing with Softmax-Margin Segmental RNNs and a Syntactic Scaffold
TLDR: A new, efficient frame-semantic parser that labels semantic arguments to FrameNet predicates, built using an extension to the segmental RNN that emphasizes recall, achieves competitive performance without any calls to a syntactic parser.
Syntactic Scaffolds for Semantic Structures
TLDR: This work introduces the syntactic scaffold, an approach to incorporating syntactic information into semantic tasks through a multitask objective, and improves over strong baselines on PropBank semantics, frame semantics, and coreference resolution.
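A sketch of the general pattern (a shared encoder trained with a down-weighted auxiliary syntactic loss); the module sizes, `delta` weight, and head names are illustrative assumptions, not the paper's architecture.

```python
# Syntactic scaffold as a multitask objective: one encoder feeds both the
# main semantic head and an auxiliary syntactic head used only in training.
import torch
import torch.nn as nn

class ScaffoldedModel(nn.Module):
    def __init__(self, vocab=10000, dim=128, sem_labels=20, syn_labels=50):
        super().__init__()
        self.encoder = nn.Sequential(nn.Embedding(vocab, dim),
                                     nn.LSTM(dim, dim, batch_first=True))
        self.sem_head = nn.Linear(dim, sem_labels)  # main semantic task
        self.syn_head = nn.Linear(dim, syn_labels)  # scaffold task

    def forward(self, tokens):
        h, _ = self.encoder(tokens)                 # (batch, time, dim)
        return self.sem_head(h), self.syn_head(h)

def joint_loss(sem_logits, syn_logits, sem_gold, syn_gold, delta=0.1):
    ce = nn.functional.cross_entropy
    return (ce(sem_logits.flatten(0, 1), sem_gold.flatten())
            + delta * ce(syn_logits.flatten(0, 1), syn_gold.flatten()))
```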
Adversarial Filters of Dataset Biases
TLDR: This work presents extensive supporting evidence that AFLite is broadly applicable for reduction of measurable dataset biases, and that models trained on the filtered datasets yield better generalization to out-of-distribution tasks.
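A sketch in the spirit of AFLite's filtering loop: repeatedly train simple linear probes on random splits of pre-computed feature vectors, score each instance by how often held-out probes classify it correctly, and discard the most predictable instances. `X` and `y` are assumed NumPy arrays, and all hyperparameters are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def aflite(X, y, n_rounds=10, n_probes=64, train_frac=0.8,
           drop_k=500, threshold=0.75, seed=0):
    rng = np.random.default_rng(seed)
    keep = np.arange(len(y))                   # indices still in the dataset
    for _ in range(n_rounds):
        correct = np.zeros(len(keep))
        seen = np.zeros(len(keep))
        for _ in range(n_probes):
            idx = rng.permutation(len(keep))
            cut = int(train_frac * len(keep))
            tr, te = idx[:cut], idx[cut:]
            probe = LogisticRegression(max_iter=200)
            probe.fit(X[keep[tr]], y[keep[tr]])
            correct[te] += probe.predict(X[keep[te]]) == y[keep[te]]
            seen[te] += 1
        score = correct / np.maximum(seen, 1)  # per-instance predictability
        drop = np.argsort(-score)[:drop_k]
        drop = drop[score[drop] > threshold]   # only highly predictable ones
        if drop.size == 0:
            break
        keep = np.delete(keep, drop)
    return keep                                # indices to retain
```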
Transfer Learning in Natural Language Processing
TLDR: An overview of modern transfer learning methods in NLP is presented: how models are pre-trained, what information their learned representations capture, and examples and case studies of how these models can be integrated and adapted in downstream NLP tasks.
The Right Tool for the Job: Matching Model and Instance Complexities
TLDR: This work proposes a modification to contextual representation fine-tuning which allows for an early (and fast) “exit” from neural network calculations for simple instances, and late (and accurate) exit for hard instances during inference.
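A sketch of confidence-based early-exit inference: a classifier head is attached to every encoder layer, and computation stops at the first layer whose prediction clears a confidence threshold. The module lists, threshold, and single-instance setup are assumptions.

```python
import torch

@torch.no_grad()
def early_exit_predict(layers, exit_heads, x, threshold=0.9):
    """Classify one instance, stopping at the first confident layer.

    `layers` and `exit_heads` are assumed lists of nn.Modules: the encoder
    blocks and one small classifier per block (batch size of one).
    """
    h = x
    for layer, head in zip(layers, exit_heads):
        h = layer(h)
        probs = torch.softmax(head(h), dim=-1)
        conf, pred = probs.max(dim=-1)
        if conf.item() >= threshold:   # easy instance: exit early
            return pred.item()
    return pred.item()                 # hard instance: used every layer
```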
Greedy, Joint Syntactic-Semantic Parsing with Stack LSTMs
TLDR: This work presents a transition-based parser that jointly produces syntactic and semantic dependencies and obtains the best published parsing performance among models that jointly learn syntax and semantics.