• Publications
Reading Tea Leaves: How Humans Interpret Topic Models
TLDR
New quantitative methods for measuring semantic meaning in inferred topics are presented, showing that they capture aspects of the model that are undetected by previous measures of model quality based on held-out likelihood.
Deep Unordered Composition Rivals Syntactic Methods for Text Classification
TLDR
This work presents a simple deep neural network that competes with and, in some cases, outperforms such models on sentiment analysis and factoid question answering tasks while taking only a fraction of the training time.
Can You Unpack That? Learning to Rewrite Questions-in-Context
TLDR
This work introduces the task of question-in-context rewriting, constructs CANARD, a dataset of 40,527 questions based on QuAC, and trains Seq2Seq models to incorporate context into standalone questions.
Adding dense, weighted connections to WordNet
TLDR
WordNet, a ubiquitous tool for natural language processing, suffers from sparse connections between its component concepts; to address this, a subset of the connections among 1,000 hand-chosen synsets was assigned an "evocation" rating representing how strongly the first concept brings the second to mind.
Pathologies of Neural Models Make Interpretations Difficult
TLDR
This work uses input reduction, which iteratively removes the least important word from the input, to expose pathological behaviors of neural models: the remaining words appear nonsensical to humans and are not the ones determined as important by interpretation methods.
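The input-reduction procedure summarized above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the `predict` and `confidence` functions below are hypothetical stand-ins for a real model and its confidence score, and word importance is approximated by leave-one-out confidence.

```python
def input_reduction(words, predict, confidence):
    """Greedily drop the word whose removal hurts confidence least,
    stopping before the model's prediction would change."""
    label = predict(words)
    while len(words) > 1:
        # Candidate inputs, each with one word removed.
        candidates = [words[:i] + words[i + 1:] for i in range(len(words))]
        # The least important word is the one whose removal leaves
        # the highest confidence in the original prediction.
        best = max(candidates, key=confidence)
        if predict(best) != label:
            break  # prediction would flip; keep the previous input
        words = best
    return words

# Hypothetical toy "model": positive iff the word "great" appears.
predict = lambda ws: "pos" if "great" in ws else "neg"
confidence = lambda ws: 1.0 if "great" in ws else 0.0

print(input_reduction(["the", "movie", "was", "great"], predict, confidence))
```

Run on the toy example, the input shrinks to `['great']`: the prediction is preserved, but the remaining input illustrates how reduced inputs can look nonsensical to humans.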
A Neural Network for Factoid Question Answering over Paragraphs
TLDR
This work introduces a recursive neural network model, QANTA, that can reason over question text by modeling textual compositionality, and applies it to a dataset of questions from a trivia competition called quiz bowl.
Opponent Modeling in Deep Reinforcement Learning
TLDR
Inspired by the recent success of deep reinforcement learning, this work presents neural-based models that jointly learn a policy and the behavior of opponents, and uses a Mixture-of-Experts architecture to encode observations of the opponents into a deep Q-Network.
A Topic Model for Word Sense Disambiguation
TLDR
A probabilistic posterior inference algorithm for simultaneously disambiguating a corpus and learning the domains in which to consider each word is developed.
Political Ideology Detection Using Recursive Neural Networks
TLDR
An RNN framework is applied to the task of identifying the political position evinced by a sentence, showing the importance of modeling subsentential elements; the model outperforms existing approaches on both a newly annotated dataset and an existing one.