Reading Tea Leaves: How Humans Interpret Topic Models
- Jonathan Chang, Jordan L. Boyd-Graber, S. Gerrish, Chong Wang, David M. Blei
- Computer Science · NIPS
- 7 December 2009
New quantitative methods for measuring semantic meaning in inferred topics are presented, showing that they capture aspects of the model that are undetected by previous measures of model quality based on held-out likelihood.
Deep Unordered Composition Rivals Syntactic Methods for Text Classification
This work presents a simple deep neural network that competes with and, in some cases, outperforms such models on sentiment analysis and factoid question answering tasks while taking only a fraction of the training time.
Can You Unpack That? Learning to Rewrite Questions-in-Context
This work introduces the task of question-in-context rewriting, constructs CANARD, a dataset of 40,527 questions based on QuAC, and trains Seq2Seq models that incorporate context into standalone questions.
Adding dense, weighted connections to WordNet
WordNet, a ubiquitous tool for natural language processing, suffers from sparse connections between its component concepts; to address this, a subset of the connections among 1,000 hand-chosen synsets was assigned an “evocation” value representing how much the first concept brings to mind the second.
Pathologies of Neural Models Make Interpretations Difficult
- Shi Feng, Eric Wallace, Alvin Grissom II, Mohit Iyyer, Pedro Rodriguez, Jordan L. Boyd-Graber
- Computer Science · EMNLP
- 20 April 2018
This work uses input reduction, which iteratively removes the least important word from the input, to expose pathological behaviors of neural models: the remaining words appear nonsensical to humans and are not the ones determined as important by interpretation methods.
A Neural Network for Factoid Question Answering over Paragraphs
- Mohit Iyyer, Jordan L. Boyd-Graber, L. Claudino, R. Socher, Hal Daumé
- Computer Science · EMNLP
- 1 October 2014
This work introduces a recursive neural network model, QANTA, that reasons over question text by modeling textual compositionality, and applies it to a dataset of questions from a trivia competition called quiz bowl.
Opponent Modeling in Deep Reinforcement Learning
Inspired by the recent success of deep reinforcement learning, this work presents neural-based models that jointly learn a policy and the behavior of opponents, and uses a Mixture-of-Experts architecture to encode observation of the opponents into a deep Q-Network.
A Topic Model for Word Sense Disambiguation
This work develops a probabilistic posterior inference algorithm that simultaneously disambiguates a corpus and learns the domains in which to consider each word.
Political Ideology Detection Using Recursive Neural Networks
An RNN framework is applied to the task of identifying the political position evinced by a sentence, showing the importance of modeling subsentential elements; it outperforms existing models on both a newly annotated dataset and an existing one.