Reading Tea Leaves: How Humans Interpret Topic Models
- Jonathan Chang, Jordan L. Boyd-Graber, S. Gerrish, Chong Wang, D. Blei
- Computer Science · NIPS
- 7 December 2009
New quantitative methods for measuring semantic meaning in inferred topics are presented, showing that they capture aspects of the model that are undetected by previous measures of model quality based on held-out likelihood.
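Among the quantitative methods proposed is the word-intrusion task: an out-of-topic word is injected into a topic's top words, and humans try to spot it. A minimal sketch of the resulting "model precision" score (the function name is illustrative, not the paper's code):

```python
def model_precision(judgments, intruder):
    """Fraction of human judgments that correctly pick out the injected intruder word."""
    return sum(choice == intruder for choice in judgments) / len(judgments)

# Topic shown as {dog, cat, horse, apple, fish} with "apple" as the intruder;
# four of five annotators spot it.
print(model_precision(["apple", "apple", "fish", "apple", "apple"], "apple"))  # → 0.8
```

A coherent topic yields high precision; a topic whose words do not hang together makes the intruder hard to find, which held-out likelihood alone does not reveal.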
Deep Unordered Composition Rivals Syntactic Methods for Text Classification
- Mohit Iyyer, Varun Manjunatha, Jordan L. Boyd-Graber, Hal Daumé
- Computer Science · Annual Meeting of the Association for Computational Linguistics
- 1 July 2015
This work presents a simple deep neural network that competes with and, in some cases, outperforms such models on sentiment analysis and factoid question answering tasks while taking only a fraction of the training time.
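The "deep unordered composition" idea (a deep averaging network) can be sketched in a few lines: average the word embeddings, then pass the average through feed-forward layers. All names, weights, and dimensions below are illustrative stand-ins, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy vocabulary and randomly initialized parameters (in practice, pretrained
# embeddings and trained weights); dimensions are illustrative.
vocab = {"good": 0, "bad": 1, "movie": 2, "great": 3}
E = rng.normal(size=(len(vocab), 8))          # word embeddings
W1 = rng.normal(size=(8, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 2)); b2 = np.zeros(2)

def dan_forward(tokens):
    """Unordered composition: average embeddings, then feed-forward layers."""
    avg = E[[vocab[t] for t in tokens]].mean(axis=0)   # order-invariant step
    h = np.tanh(avg @ W1 + b1)                          # hidden layer
    return h @ W2 + b2                                  # class logits

# Word order cannot change the prediction, by construction:
assert np.allclose(dan_forward(["good", "movie"]), dan_forward(["movie", "good"]))
```

Because the composition step is a plain average, the model ignores syntax entirely, which is what makes its competitive accuracy at a fraction of the training time notable.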
Can You Unpack That? Learning to Rewrite Questions-in-Context
- Ahmed Elgohary, Denis Peskov, Jordan L. Boyd-Graber
- Computer Science · Conference on Empirical Methods in Natural Language Processing
- 1 November 2019
This work introduces the task of question-in-context rewriting, constructs CANARD, a dataset of 40,527 questions based on QuAC, and trains Seq2Seq models that incorporate context into standalone questions.
Pathologies of Neural Models Make Interpretations Difficult
- Shi Feng, Eric Wallace, Alvin Grissom II, Mohit Iyyer, Pedro Rodriguez, Jordan L. Boyd-Graber
- Computer Science · Conference on Empirical Methods in Natural Language Processing
- 20 April 2018
This work uses input reduction, which iteratively removes the least important word from the input, to expose pathological behaviors of neural models: the remaining words appear nonsensical to humans and are not the ones determined as important by interpretation methods.
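Input reduction itself is simple to state: repeatedly delete the lowest-importance word as long as the model's prediction is unchanged. A toy sketch with stand-in `predict` and `importance` functions (both hypothetical, not the paper's models):

```python
def input_reduction(tokens, predict, importance):
    """Iteratively remove the least important token while the prediction holds."""
    original = predict(tokens)
    while len(tokens) > 1:
        # candidate for deletion: the token with the lowest importance score
        idx = min(range(len(tokens)), key=lambda i: importance(tokens, i))
        candidate = tokens[:idx] + tokens[idx + 1:]
        if predict(candidate) != original:
            break                      # removing it would flip the label; stop
        tokens = candidate
    return tokens

# Toy "model": positive iff the word "good" appears; only "good" is important.
predict = lambda toks: "good" in toks
importance = lambda toks, i: 1.0 if toks[i] == "good" else 0.0
print(input_reduction(["a", "very", "good", "movie"], predict, importance))
# → ['good']
```

With a real neural model, the surviving words are often nonsensical to humans, which is the pathology the paper exposes.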
Opponent Modeling in Deep Reinforcement Learning
- He He, Jordan L. Boyd-Graber
- Computer Science · International Conference on Machine Learning
- 19 June 2016
Inspired by the recent success of deep reinforcement learning, this work presents neural-based models that jointly learn a policy and the behavior of opponents, and uses a Mixture-of-Experts architecture to encode observation of the opponents into a deep Q-Network.
Political Ideology Detection Using Recursive Neural Networks
- Mohit Iyyer, P. Enns, Jordan L. Boyd-Graber, P. Resnik
- Computer Science · Annual Meeting of the Association for Computational Linguistics
- 1 June 2014
A recursive neural network (RNN) framework is applied to the task of identifying the political position evinced by a sentence, demonstrating the importance of modeling subsentential elements; it outperforms existing models on a newly annotated dataset and an existing dataset.
Adding dense, weighted connections to WordNet
- Jordan L. Boyd-Graber, C. Fellbaum, D. Osherson, R. Schapire
- Computer Science
- 2005
WordNet, a ubiquitous tool for natural language processing, suffers from sparse connections between its component concepts (synsets); to address this, a subset of the connections among 1,000 hand-chosen synsets was assigned an "evocation" value representing how much the first concept brings to mind the second.
Cold-start Active Learning through Self-Supervised Language Modeling
- Michelle Yuan, Hsuan-Tien Lin, Jordan L. Boyd-Graber
- Computer Science · Conference on Empirical Methods in Natural Language Processing
- 19 October 2020
With BERT, a simple strategy based on the masked language modeling loss is developed that minimizes labeling costs for text classification, reaching higher accuracy in fewer sampling iterations and less computation time.
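A stripped-down version of the selection idea: rank unlabeled texts by how surprising the pretrained language model finds them (their masked-LM loss) and label the most surprising first. The full method does more than a greedy top-k, but this illustrates the cold-start principle; `mlm_loss` below is a hypothetical stand-in for a real BERT loss:

```python
def cold_start_select(pool, mlm_loss, k):
    """Choose the k unlabeled examples with the highest masked-LM loss to annotate."""
    return sorted(pool, key=mlm_loss, reverse=True)[:k]

# Stand-in loss: pretend longer texts are more surprising to the model.
texts = ["ok", "a longer unusual sentence", "short one"]
print(cold_start_select(texts, mlm_loss=len, k=1))  # → ['a longer unusual sentence']
```

No labeled seed set is needed, which is what makes the strategy usable at the very first round of active learning.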
A Neural Network for Factoid Question Answering over Paragraphs
- Mohit Iyyer, Jordan L. Boyd-Graber, L. Claudino, R. Socher, Hal Daumé
- Computer Science · Conference on Empirical Methods in Natural Language Processing
- 1 October 2014
This work introduces QANTA, a recursive neural network model that can reason over question text by modeling textual compositionality, and applies it to a dataset of questions from the trivia competition quiz bowl.
Beyond LDA: Exploring Supervised Topic Modeling for Depression-Related Language in Twitter
- P. Resnik, William Armstrong, L. Claudino, Thang Nguyen, Viet-An Nguyen, Jordan L. Boyd-Graber
- Computer Science · CLPsych@HLT-NAACL
- 5 June 2015
This paper explores the use of supervised topic models in the analysis of linguistic signal for detecting depression, providing promising results using several models.