Understanding entailment and contradiction is fundamental to understanding natural language, and inference about entailment and contradiction is a valuable testing ground for the development of semantic representations. However, machine learning research in this area has been dramatically limited by the lack of large-scale resources. To address this, we …
The standard recurrent neural network language model (rnnlm) generates sentences one word at a time and does not work from an explicit global sentence representation. In this work, we introduce and study an rnn-based variational autoencoder generative model that incorporates distributed latent representations of entire sentences. This factorization allows …
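The two ingredients that distinguish such a model from a plain rnnlm are a per-sentence latent code sampled via the reparameterization trick and a KL penalty tying the code's distribution to a standard normal prior. A minimal NumPy sketch of just those two pieces (shapes and values are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_sigma, rng):
    """Sample z = mu + sigma * eps with eps ~ N(0, I): the reparameterization trick."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(log_sigma) * eps

def kl_to_standard_normal(mu, log_sigma):
    """KL( N(mu, sigma^2) || N(0, I) ), summed over latent dimensions."""
    return 0.5 * np.sum(np.exp(2 * log_sigma) + mu ** 2 - 1 - 2 * log_sigma, axis=-1)

# Hypothetical encoder outputs for a batch of 2 sentences with a 16-dim latent code.
mu = rng.standard_normal((2, 16)) * 0.1
log_sigma = np.full((2, 16), -1.0)

z = reparameterize(mu, log_sigma, rng)     # one latent vector per sentence
kl = kl_to_standard_normal(mu, log_sigma)  # KL penalty added to the training loss
```

In the full model, `z` would condition an rnn decoder that reconstructs the sentence, and `kl` would be weighted into the loss alongside the reconstruction term.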
We present a gold standard annotation of syntactic dependencies in the English Web Treebank corpus using the Stanford Dependencies standard. This resource addresses the lack of a gold standard dependency treebank for English, as well as the limited availability of gold standard syntactic annotations for informal genres of English text. We also present …
Tree-structured neural networks exploit valuable syntactic parse information as they interpret the meanings of sentences. However, they suffer from two key technical problems that make them slow and unwieldy for large-scale NLP tasks: they usually operate on parsed sentences and they do not directly support batched computation. We address these issues by …
Tree-structured recursive neural networks (TreeRNNs) for sentence meaning have been successful for many applications, but it remains an open question whether the fixed-length representations that they learn can support tasks as demanding as logical deduction. We pursue this question by evaluating whether two such models, plain TreeRNNs and tree-structured …
The Stanford dependency scheme aims to provide a simple and intuitive but linguistically sound way of annotating the dependencies between words in a sentence. In this paper, we address two limitations the scheme has suffered from: first, despite providing good coverage of core grammatical relations, the scheme has not offered explicit analyses of more …
Tree-structured neural networks encode a particular tree geometry for a sentence in the network design. However, these models have at best only slightly outperformed simpler sequence-based models. We hypothesize that neural sequence models like LSTMs are in fact able to discover and implicitly use recursive compositional structure, at least for tasks with …
• Zaenen et al. (2004)'s annotation scheme and corpus. (Note: some feature selection was inadvertently done before this split was finalized; all relevant experiments have been repeated on the current split.)
• Maximum entropy classifier (Berger et al., 1996) with three feature bundles:
• Bag-of-words features capture every word in the NP: HASWD-(POS-tag-)word …
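The bag-of-words bundle above can be sketched as one binary indicator per (POS tag, word) pair in the noun phrase. A minimal Python sketch, assuming tagged tokens as input (the function name and example tokens are illustrative, not the original implementation):

```python
def bag_of_words_features(np_tokens):
    """One binary indicator per (POS tag, word) pair in the NP,
    in the spirit of the HASWD-(POS-tag-)word template."""
    return {f"HASWD-{pos}-{word.lower()}": 1 for word, pos in np_tokens}

# Illustrative NP "the tall building", tagged with Penn Treebank POS tags.
feats = bag_of_words_features([("the", "DT"), ("tall", "JJ"), ("building", "NN")])
```

These sparse feature dicts would then be vectorized and fed to a maximum entropy classifier (equivalently, multinomial logistic regression).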