Publications
Deep Contextualized Word Representations
TLDR
A new type of deep contextualized word representation is introduced that models both complex characteristics of word use and how these uses vary across linguistic contexts, allowing downstream models to mix different types of semi-supervision signals.
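Here the "mixing" is, concretely, ELMo's learned softmax-normalized scalar weighting over the biLM's layer activations. A minimal numpy sketch of that weighted sum (toy shapes and untrained weights assumed, not the paper's implementation):

```python
import numpy as np

def elmo_mix(layer_reps, w, gamma):
    """Collapse L+1 biLM layers into one vector per token via a
    softmax-normalized scalar mix, as in the ELMo weighting scheme."""
    s = np.exp(w - w.max())
    s /= s.sum()                              # softmax over layer weights
    # layer_reps: (L+1, seq_len, dim); contract over the layer axis
    return gamma * np.tensordot(s, layer_reps, axes=1)

# toy usage: 3 layers, 4 tokens, 8-dim representations
reps = np.random.randn(3, 4, 8)
print(elmo_mix(reps, w=np.zeros(3), gamma=1.0).shape)  # (4, 8)
```

In the paper, the layer weights and the scale factor are task-specific parameters learned jointly with the downstream model.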
QuAC: Question Answering in Context
TLDR
QuAC introduces challenges not found in existing machine comprehension datasets: its questions are often open-ended, unanswerable, or meaningful only within the dialog context, as a detailed qualitative evaluation shows.
Deep Unordered Composition Rivals Syntactic Methods for Text Classification
TLDR
This work presents a simple deep neural network that competes with, and in some cases outperforms, syntactic methods on sentiment analysis and factoid question answering tasks while requiring only a fraction of the training time.
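The "simple deep neural network" here is the deep averaging network (DAN): average the word embeddings, then pass the average through a few nonlinear feedforward layers. A minimal sketch with toy shapes (classifier head and training loop omitted; the weights below are hypothetical stand-ins):

```python
import numpy as np

def dan_forward(embeddings, weights, biases):
    """Unordered composition: average word embeddings, then deepen
    the average with a stack of nonlinear feedforward layers."""
    h = embeddings.mean(axis=0)        # order-insensitive composition
    for W, b in zip(weights, biases):
        h = np.tanh(W @ h + b)         # each layer refines the average
    return h                           # feed this to a classifier

# toy usage: 5 words, 16-dim embeddings, two hidden layers
emb = np.random.randn(5, 16)
Ws = [0.1 * np.random.randn(16, 16) for _ in range(2)]
bs = [np.zeros(16) for _ in range(2)]
print(dan_forward(emb, Ws, bs).shape)  # (16,)
```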
Ask Me Anything: Dynamic Memory Networks for Natural Language Processing
TLDR
This work introduces the dynamic memory network (DMN), a neural network architecture that processes input sequences and questions, forms episodic memories, and generates relevant answers.
Adversarial Example Generation with Syntactically Controlled Paraphrase Networks
TLDR
A combination of automated and human evaluations shows that SCPNs generate paraphrases that follow their target specifications without degrading paraphrase quality relative to baseline (uncontrolled) paraphrase systems.
Search-based Neural Structured Learning for Sequential Question Answering
TLDR
This work proposes a dynamic neural semantic parsing framework trained with weakly supervised, reward-guided search; by effectively leveraging sequential context, it outperforms state-of-the-art QA systems designed to answer highly complex questions.
Pathologies of Neural Models Make Interpretations Difficult
TLDR
This work uses input reduction, which iteratively removes the least important word from the input, to expose pathological behaviors of neural models: the remaining words appear nonsensical to humans and are not the ones identified as important by interpretation methods.
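Input reduction itself is a short greedy loop, sketched below with `importance` and `predict` left as assumed callbacks (e.g., gradient-based saliency and the model's predicted label) rather than any particular model:

```python
def input_reduction(words, importance, predict):
    """Iteratively drop the least-important word while the model's
    prediction stays unchanged; what remains is the reduced input."""
    original = predict(words)
    while len(words) > 1:
        scores = importance(words)                 # per-word saliency
        idx = min(range(len(words)), key=scores.__getitem__)
        candidate = words[:idx] + words[idx + 1:]  # remove weakest word
        if predict(candidate) != original:         # stop if label flips
            break
        words = candidate
    return words

# toy usage with stub callbacks: saliency = word length, constant label
print(input_reduction(
    ["the", "movie", "was", "great"],
    importance=lambda ws: [len(w) for w in ws],
    predict=lambda ws: "positive",
))  # ['great']
```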
Unsupervised Latent Tree Induction with Deep Inside-Outside Recursive Auto-Encoders
TLDR
This work introduces DIORA, a fully unsupervised method for discovering syntax that simultaneously learns representations for the constituents within the induced tree; it outperforms previously reported results for unsupervised binary constituency parsing on the benchmark WSJ dataset.
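DIORA's inside pass fills a CKY-style chart in which each span's representation is a softmax-weighted mix over its possible split points. A rough sketch of that pass, with `compose` and `score` standing in for the learned composition and compatibility functions (the toy stand-ins in the usage lines are hypothetical, not the paper's trained networks):

```python
import numpy as np

def inside_pass(leaf_vecs, compose, score):
    """Bottom-up chart over all spans: each span's vector is a soft
    (softmax-weighted) mix of the vectors from every split point."""
    n = len(leaf_vecs)
    chart = {(i, i + 1): leaf_vecs[i] for i in range(n)}
    for width in range(2, n + 1):
        for i in range(n - width + 1):
            j = i + width
            cands = [compose(chart[(i, k)], chart[(k, j)])
                     for k in range(i + 1, j)]
            logits = np.array([score(c) for c in cands])
            w = np.exp(logits - logits.max())
            w /= w.sum()                      # softmax over split points
            chart[(i, j)] = sum(wk * ck for wk, ck in zip(w, cands))
    return chart                              # chart[(0, n)] is the root

# toy usage: 3 leaves with 4-dim vectors and stand-in functions
leaves = [np.random.randn(4) for _ in range(3)]
chart = inside_pass(leaves,
                    compose=lambda a, b: np.tanh(a + b),
                    score=lambda v: float(v.sum()))
print(chart[(0, 3)].shape)  # (4,)
```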
A Neural Network for Factoid Question Answering over Paragraphs
TLDR
This work introduces qanta, a recursive neural network model that reasons over question text by modeling textual compositionality, and applies it to a dataset of questions from a trivia competition called quiz bowl.
Political Ideology Detection Using Recursive Neural Networks
TLDR
An RNN framework is applied to the task of identifying the political position evinced by a sentence; it demonstrates the importance of modeling subsentential elements and outperforms existing models on both a newly annotated dataset and an existing one.