Publications
Style Transfer from Non-Parallel Text by Cross-Alignment
This paper proposes a method that leverages refined alignment of latent representations to perform style transfer from non-parallel text, and demonstrates the effectiveness of this cross-alignment method on three tasks: sentiment modification, decipherment of word substitution ciphers, and recovery of word order.
Using Lexical Chains for Text Summarization
Empirical results on the identification of strong chains and of significant sentences are presented in this paper, and plans to address shortcomings are briefly outlined.
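To make the idea concrete, here is a minimal sketch of lexical chaining. It links only exact word repetitions across sentences; the paper's full method also links synonyms and related terms (e.g. via WordNet), which is omitted here, and the stopword list and "strong chain" threshold are illustrative assumptions.

```python
from collections import defaultdict

def lexical_chains(sentences, stopwords=frozenset({"the", "a", "of", "and", "in"})):
    """Toy lexical chains: group occurrences of each content word across
    sentences. Exact-match linking only (an assumption, not the full method)."""
    chains = defaultdict(list)  # word -> list of (sentence_index, token_index)
    for i, sent in enumerate(sentences):
        for j, tok in enumerate(sent.lower().split()):
            word = tok.strip(".,!?")
            if word and word not in stopwords:
                chains[word].append((i, j))
    # Call a chain "strong" if it has at least two occurrences (toy threshold).
    return {w: occ for w, occ in chains.items() if len(occ) >= 2}

sents = ["The cat sat on the mat.",
         "A cat chased the mouse.",
         "The mouse hid under the mat."]
strong = lexical_chains(sents)  # chains for "cat", "mat", "mouse"
```

Sentences covered by many strong chains are then candidates for inclusion in an extractive summary.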
Modeling Local Coherence: An Entity-Based Approach
This article re-conceptualizes coherence assessment as a learning task and shows that the proposed entity-grid representation of discourse is well-suited for ranking-based generation and text classification tasks.
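A minimal sketch of the entity-grid idea follows. The paper's grid records each entity's syntactic role (subject, object, other) in every sentence, which requires a parser; this simplified version, with an assumed pre-supplied entity list, only marks presence or absence.

```python
def entity_grid(sentences, entities):
    """Toy entity grid: one row per sentence, one column per entity,
    'X' if the entity is mentioned, '-' otherwise. The paper's version
    distinguishes syntactic roles (S/O/X), omitted here."""
    grid = []
    for sent in sentences:
        toks = {t.strip(".,").lower() for t in sent.split()}
        grid.append(["X" if e in toks else "-" for e in entities])
    return grid

sents = ["The judge denied the motion.",
         "The motion was filed last week.",
         "He took a recess."]
grid = entity_grid(sents, ["judge", "motion"])
# Transition patterns down each column (e.g. X -> X for "motion")
# become the features used to rank orderings by coherence.
```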
Junction Tree Variational Autoencoder for Molecular Graph Generation
The junction tree variational autoencoder generates molecular graphs in two phases, first generating a tree-structured scaffold over chemical substructures and then combining them into a molecule with a graph message passing network, which allows molecules to be expanded incrementally while maintaining chemical validity at every step.
Rationalizing Neural Predictions
The approach combines two modular components, a generator and an encoder, which are trained to operate well together: the generator specifies a distribution over text fragments as candidate rationales, and these are passed through the encoder for prediction.
Catching the Drift: Probabilistic Content Models, with Applications to Generation and Summarization
An effective knowledge-lean method for learning content models from unannotated documents is presented; it utilizes a novel adaptation of Hidden Markov Model algorithms and is applied to two complementary tasks: information ordering and extractive summarization.
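A minimal sketch of the information-ordering side of a content model: score each permutation of sentences by the start and transition probabilities over their topic labels. The paper learns the topics and transitions with its HMM adaptation; here both are given, with purely illustrative values.

```python
import math
from itertools import permutations

def best_ordering(sentence_topics, trans, start):
    """Return the permutation of sentence indices whose topic sequence
    has the highest probability under the given (assumed) content model."""
    def score(order):
        s = math.log(start[sentence_topics[order[0]]])
        for a, b in zip(order, order[1:]):
            s += math.log(trans[(sentence_topics[a], sentence_topics[b])])
        return s
    return max(permutations(range(len(sentence_topics))), key=score)

# Sentence i is assumed already labeled with topic topics[i].
topics = ["body", "intro", "conclusion"]
start = {"intro": 0.8, "body": 0.15, "conclusion": 0.05}
trans = {("intro", "body"): 0.9,      ("intro", "conclusion"): 0.1,
         ("body", "conclusion"): 0.8, ("body", "intro"): 0.1,
         ("conclusion", "body"): 0.1, ("conclusion", "intro"): 0.1}
order = best_ordering(topics, trans, start)  # intro, body, conclusion
```

Exhaustive permutation search is only feasible for toy inputs; the paper's setting would use a search or decoding procedure instead.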
Learning to Automatically Solve Algebra Word Problems
We present an approach for automatically learning to solve algebra word problems. Our algorithm reasons across sentence boundaries to construct and solve a system of linear equations.
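The final step, once the equations have been extracted from the text, is ordinary linear algebra. A minimal sketch for the two-unknown case, using Cramer's rule with exact arithmetic (the problem text and mapping to coefficients are assumed, not the paper's learned extraction):

```python
from fractions import Fraction

def solve_2x2(a, b, c, d, e, f):
    """Solve  a*x + b*y = e,  c*x + d*y = f  by Cramer's rule.
    Exact arithmetic via Fraction; assumes a non-degenerate system."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("system has no unique solution")
    x = Fraction(e * d - b * f, det)
    y = Fraction(a * f - e * c, det)
    return x, y

# "The sum of two numbers is 10 and their difference is 2."
#   x + y = 10
#   x - y = 2
x, y = solve_2x2(1, 1, 1, -1, 10, 2)  # -> (6, 4)
```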
Language Understanding for Text-based Games using Deep Reinforcement Learning
This paper employs a deep reinforcement learning framework that jointly learns state representations and action policies, using game rewards as feedback, to map text descriptions into vector representations that capture the semantics of the game states.
Bayesian Unsupervised Topic Segmentation
A novel Bayesian approach to unsupervised topic segmentation is described, showing that lexical cohesion can be placed in a Bayesian context by modeling the words in each topic segment as draws from a multinomial language model associated with the segment; maximizing the observation likelihood in such a model yields a lexically-cohesive segmentation.
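A minimal sketch of the likelihood computation: under a symmetric-Dirichlet multinomial language model, a segment's marginal likelihood rewards repeated words, so cohesive segments score higher. The hyperparameter `alpha` and `vocab_size` are illustrative choices, and only a single boundary is searched here; the paper handles multiple boundaries.

```python
import math
from collections import Counter

def segment_loglik(words, alpha=0.1, vocab_size=1000):
    """Log marginal likelihood of one segment under a symmetric-Dirichlet
    multinomial language model (illustrative hyperparameters)."""
    counts = Counter(words)
    ll = math.lgamma(alpha * vocab_size) - math.lgamma(alpha * vocab_size + len(words))
    for c in counts.values():
        ll += math.lgamma(alpha + c) - math.lgamma(alpha)
    return ll

def best_boundary(words):
    """Place a single topic boundary where the two-segment likelihood peaks."""
    scores = {k: segment_loglik(words[:k]) + segment_loglik(words[k:])
              for k in range(1, len(words))}
    return max(scores, key=scores.get)

doc = ["whale", "ocean", "whale", "ocean", "tax", "budget", "tax", "budget"]
k = best_boundary(doc)  # the two cohesive halves -> boundary at 4
```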
Using Universal Linguistic Knowledge to Guide Grammar Induction
This work presents an approach to grammar induction that utilizes syntactic universals to improve dependency parsing across a range of languages and outperforms state-of-the-art unsupervised methods by a significant margin.