• Publications
Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank
TLDR
Introduces a Sentiment Treebank with fine-grained sentiment labels for 215,154 phrases in the parse trees of 11,855 sentences, which presents new challenges for sentiment compositionality, and the Recursive Neural Tensor Network.
A large annotated corpus for learning natural language inference
TLDR
The Stanford Natural Language Inference corpus is introduced, a new, freely available collection of labeled sentence pairs, written by humans doing a novel grounded task based on image captioning, which allows a neural network-based model to perform competitively on natural language inference benchmarks for the first time.
Learning Word Vectors for Sentiment Analysis
TLDR
This work presents a model that uses a mix of unsupervised and supervised techniques to learn word vectors capturing semantic term-document information as well as rich sentiment content, and finds that it outperforms several previously introduced methods for sentiment classification.
The logic of conventional implicatures
1. Introduction 2. A Preliminary Case for Conventional Implicatures 3. A Logic for Conventional Implicatures 4. Supplements 5. Expressive Content 6. The Supplement Relation: A Syntactic Analysis 7. A…
The expressive dimension
Abstract Expressives like damn and bastard have, when uttered, an immediate and powerful impact on the context. They are performative, often destructively so. They are revealing of the perspective…
A Fast Unified Model for Parsing and Sentence Understanding
TLDR
The Stack-augmented Parser-Interpreter Neural Network (SPINN) combines parsing and interpretation within a single tree-sequence hybrid model by integrating tree-structured sentence interpretation into the linear sequential structure of a shift-reduce parser.
A computational approach to politeness with application to social factors
TLDR
A computational framework for identifying linguistic aspects of politeness is proposed, showing that polite Wikipedia editors are more likely to achieve high status through elections, but, once elevated, they become less polite.
No country for old members: user lifecycle and linguistic change in online communities
TLDR
This work proposes a framework for tracking linguistic change as it happens and for understanding how specific users react to these evolving norms, yielding new theoretical insights into the evolution of linguistic norms and the complex interplay between community-level and individual-level linguistic change.
The Life and Death of Discourse Entities: Identifying Singleton Mentions
TLDR
A logistic regression model is built for predicting the singleton/coreferent distinction, drawing on linguistic insights about how discourse entity lifespans are affected by syntactic and semantic features.
Recursive Neural Networks Can Learn Logical Semantics
TLDR
This work generates artificial data from a logical grammar and uses it to evaluate the models' ability to learn to handle basic relational reasoning, recursive structures, and quantification, suggesting that they can learn suitable representations for logical inference in natural language.