Deep Contextualized Word Representations
A new type of deep contextualized word representation is introduced that models both complex characteristics of word use and how these uses vary across linguistic contexts, allowing downstream models to combine different types of semi-supervised signals.
QuAC: Question Answering in Context
QuAC introduces challenges not found in existing machine comprehension datasets: its questions are often more open-ended, unanswerable, or only meaningful within the dialog context, as shown in a detailed qualitative evaluation.
Deep Unordered Composition Rivals Syntactic Methods for Text Classification
This work presents a simple deep neural network that competes with, and in some cases outperforms, syntactically-aware models on sentiment analysis and factoid question answering tasks while requiring only a fraction of the training time.
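The unordered composition idea behind this paper (a deep averaging network, or DAN) can be sketched minimally: average the word embeddings of the input, then pass the average through feedforward layers. All names, dimensions, and the random weights below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Illustrative sketch of a deep averaging network (DAN).
# Vocabulary size, dimensions, and random weights are assumptions.
rng = np.random.default_rng(0)
VOCAB, EMB, HIDDEN, CLASSES = 100, 16, 32, 2

embeddings = rng.normal(size=(VOCAB, EMB))
W1 = rng.normal(size=(EMB, HIDDEN))
W2 = rng.normal(size=(HIDDEN, CLASSES))

def dan_forward(token_ids):
    """Average token embeddings (unordered composition),
    then apply two feedforward layers."""
    avg = embeddings[token_ids].mean(axis=0)   # order-insensitive average
    h = np.maximum(avg @ W1, 0.0)              # ReLU hidden layer
    logits = h @ W2                            # class scores
    return logits

logits = dan_forward([3, 14, 15, 9, 2])
print(logits.shape)  # (2,)
```

Because the composition step is a plain average, the forward pass costs far less than tree-structured (syntactic) composition, which is the source of the training-time savings the summary mentions.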
Ask Me Anything: Dynamic Memory Networks for Natural Language Processing
The dynamic memory network (DMN), a neural network architecture which processes input sequences and questions, forms episodic memories, and generates relevant answers, is introduced.
Adversarial Example Generation with Syntactically Controlled Paraphrase Networks
A combination of automated and human evaluations show that SCPNs generate paraphrases that follow their target specifications without decreasing paraphrase quality when compared to baseline (uncontrolled) paraphrase systems.
Search-based Neural Structured Learning for Sequential Question Answering
This work proposes a novel dynamic neural semantic parsing framework trained using a weakly supervised reward-guided search that effectively leverages the sequential context to outperform state-of-the-art QA systems that are designed to answer highly complex questions.
Pathologies of Neural Models Make Interpretations Difficult
- Shi Feng, Eric Wallace, Alvin Grissom II, Mohit Iyyer, Pedro Rodriguez, Jordan L. Boyd-Graber
- Computer Science · EMNLP
- 20 April 2018
This work uses input reduction, which iteratively removes the least important word from the input, to expose pathological behaviors of neural models: the remaining words appear nonsensical to humans and are not the ones determined as important by interpretation methods.
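The input-reduction procedure described above can be sketched as a simple loop: repeatedly delete the least important word as long as the model's prediction is unchanged. The toy keyword-counting "model" and the flip-based importance score below are illustrative assumptions; the paper's actual setup uses real neural models and gradient-based importance.

```python
# Toy stand-in for a trained classifier (an assumption for illustration):
# predicts "pos" if any sentiment keyword is present.
def predict(words):
    keywords = {"good", "great"}
    return "pos" if any(w in keywords for w in words) else "neg"

def importance(words, i):
    """Importance of word i: does removing it flip the prediction?
    (A real setup would use a gradient- or confidence-based score.)"""
    reduced = words[:i] + words[i + 1:]
    return 1.0 if predict(reduced) != predict(words) else 0.0

def input_reduction(words):
    """Iteratively remove the least important word while the
    prediction stays the same."""
    label = predict(words)
    while len(words) > 1:
        i = min(range(len(words)), key=lambda j: importance(words, j))
        reduced = words[:i] + words[i + 1:]
        if predict(reduced) != label:
            break  # stop just before the prediction would change
        words = reduced
    return words

print(input_reduction(["the", "movie", "was", "good", "fun"]))
# ['good'] -- a reduced input that looks nonsensical yet keeps the label
```

The reduced input preserves the model's prediction even though, to a human, it no longer resembles a meaningful sentence, which is the pathology the paper exposes.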
Unsupervised Latent Tree Induction with Deep Inside-Outside Recursive Auto-Encoders
DIORA is introduced, a fully-unsupervised method for discovering syntax that simultaneously learns representations for constituents within the induced tree that outperforms previously reported results for unsupervised binary constituency parsing on the benchmark WSJ dataset.
A Neural Network for Factoid Question Answering over Paragraphs
- Mohit Iyyer, Jordan L. Boyd-Graber, L. Claudino, R. Socher, Hal Daumé
- Computer Science · EMNLP
- 1 October 2014
This work introduces a recursive neural network model, QANTA, that reasons over question text by modeling textual compositionality, and applies it to a dataset of questions from a trivia competition called quiz bowl.
Political Ideology Detection Using Recursive Neural Networks
An RNN framework is applied to the task of identifying the political position evinced by a sentence, showing the importance of modeling subsentential elements; it outperforms existing models on both a newly annotated dataset and an existing dataset.