Publications (sorted by influence)
GloVe: Global Vectors for Word Representation
TLDR
We propose a new global log-bilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods (the weighted least-squares objective is sketched below).
  • 17,123 citations
  • 2,801 highly influential citations
  • PDF available
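For reference, the model's weighted least-squares objective over the word–word co-occurrence matrix X is (notation as in the paper):

    J = \sum_{i,j=1}^{V} f(X_{ij}) \left( w_i^\top \tilde{w}_j + b_i + \tilde{b}_j - \log X_{ij} \right)^2

where w_i and \tilde{w}_j are word and context vectors, b_i and \tilde{b}_j are biases, V is the vocabulary size, and f is a weighting function that caps the influence of very frequent co-occurrences.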
Introduction to information retrieval
TLDR
Class-tested and coherent, this groundbreaking new textbook teaches web-era information retrieval, including web search and the related areas of text classification and text clustering from basic concepts.
  • 9,249 citations
  • 993 highly influential citations
Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank
TLDR
We introduce the Stanford Sentiment Treebank and a powerful Recursive Neural Tensor Network that can accurately predict the compositional semantic effects present in this new corpus (the composition function is shown below).
  • 4,246 citations
  • 679 highly influential citations
  • PDF available
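For reference, the Recursive Neural Tensor Network composes the vectors b and c of a node's two children at each position in the parse tree roughly as follows (notation as in the paper):

    p = \tanh\!\left( \begin{bmatrix} b \\ c \end{bmatrix}^{\top} V^{[1:d]} \begin{bmatrix} b \\ c \end{bmatrix} + W \begin{bmatrix} b \\ c \end{bmatrix} \right)

where each of the d slices of the tensor V^{[1:d]} produces one component of the parent vector p and W is the standard recursive-network weight matrix; a softmax classifier on p predicts the sentiment label of the corresponding treebank node.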
The Stanford CoreNLP Natural Language Processing Toolkit
TLDR
We describe the design and use of the Stanford CoreNLP toolkit, an extensible pipeline that provides core natural language analysis (the pipeline pattern is sketched below).
  • 5,173 citations
  • 619 highly influential citations
  • PDF available
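The toolkit itself is a Java library, so the snippet below is not its API; it is a minimal Python sketch, with hypothetical names (Pipeline, TokenizeAnnotator, CapitalizedNerAnnotator), of the annotator-pipeline pattern the paper describes: each annotator reads the shared document annotation and adds its own layer.

    # Minimal sketch of the annotator-pipeline pattern described in the paper.
    # All names here are hypothetical illustrations, not the CoreNLP Java API.

    class TokenizeAnnotator:
        def annotate(self, doc):
            # Add a token layer (real tokenizers do far more than whitespace splitting).
            doc["tokens"] = doc["text"].split()

    class CapitalizedNerAnnotator:
        def annotate(self, doc):
            # Toy "NER" layer that depends on the token layer added upstream.
            doc["ner"] = ["ENTITY" if t[:1].isupper() else "O" for t in doc["tokens"]]

    class Pipeline:
        def __init__(self, annotators):
            self.annotators = annotators  # run in order; later stages see earlier output

        def annotate(self, text):
            doc = {"text": text}
            for annotator in self.annotators:
                annotator.annotate(doc)
            return doc

    pipeline = Pipeline([TokenizeAnnotator(), CapitalizedNerAnnotator()])
    print(pipeline.annotate("Stanford CoreNLP was built at Stanford"))

The real toolkit chains tokenization, sentence splitting, POS tagging, lemmatization, NER, parsing, and coreference in this fashion, with the annotator list declared in a properties configuration.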
Effective Approaches to Attention-based Neural Machine Translation
TLDR
This paper examines two simple and effective classes of attentional mechanism: a global approach that always attends to all source words and a local one that only looks at a subset of source words at a time (the global form is shown below).
  • 4,646 citations
  • 531 highly influential citations
  • PDF available
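For reference, in the global variant the decoder state h_t attends over all encoder states \bar{h}_s (notation as in the paper):

    a_t(s) = \frac{\exp(\mathrm{score}(h_t, \bar{h}_s))}{\sum_{s'} \exp(\mathrm{score}(h_t, \bar{h}_{s'}))},
    \qquad c_t = \sum_{s} a_t(s)\, \bar{h}_s

where score is one of the paper's dot, general, or concat scoring functions; the local variant computes the same weighted context c_t but only over a window of source positions centered on a predicted alignment point.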
A large annotated corpus for learning natural language inference
TLDR
We introduce the Stanford Natural Language Inference corpus, a new, freely available collection of labeled sentence pairs, written by humans in a grounded, naturalistic context (the pair format is sketched below).
  • 1,731 citations
  • 468 highly influential citations
  • PDF available
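To make the task concrete: each corpus item pairs a premise with a hypothesis and a label from {entailment, contradiction, neutral}. The reader below assumes the JSONL field names of the snli_1.0 release (sentence1, sentence2, gold_label); treat those names as an assumption to verify against the downloaded files.

    import json

    # Read SNLI-style JSONL: one labeled premise/hypothesis pair per line.
    # Field names (sentence1, sentence2, gold_label) are assumed from the
    # snli_1.0 release; items without annotator agreement carry the label "-".
    def read_pairs(path):
        with open(path, encoding="utf-8") as f:
            for line in f:
                example = json.loads(line)
                if example["gold_label"] == "-":
                    continue  # skip pairs without a consensus label
                yield example["sentence1"], example["sentence2"], example["gold_label"]

    # Hypothetical usage:
    # for premise, hypothesis, label in read_pairs("snli_1.0_train.jsonl"):
    #     print(label, "|", premise, "=>", hypothesis)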
Get To The Point: Summarization with Pointer-Generator Networks
TLDR
We propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways: a pointer mechanism that can copy words from the source text and a coverage mechanism that discourages repetition (both are sketched below).
  • 1,657 citations
  • 468 highly influential citations
  • PDF available
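For reference, at each decoder step the model mixes generating from the vocabulary with copying source tokens through the attention distribution a^t (notation as in the paper):

    P(w) = p_{\mathrm{gen}}\, P_{\mathrm{vocab}}(w) + (1 - p_{\mathrm{gen}}) \sum_{i \,:\, w_i = w} a_i^{t}

where p_gen ∈ [0, 1] is a learned switch; the coverage extension accumulates c^t = \sum_{t' < t} a^{t'} and penalizes re-attending to already-covered source positions, which reduces repetition in the generated summaries.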
Foundations of statistical natural language processing
TLDR
This foundational text is the first comprehensive introduction to statistical natural language processing (NLP).
  • 6,693 citations
  • 413 highly influential citations
  • PDF available
Incorporating Non-local Information into Information Extraction Systems by Gibbs Sampling
TLDR
We show how to incorporate non-local dependencies into information extraction models using Gibbs sampling, a simple Monte Carlo method for performing approximate inference in factored probabilistic models (the sampling step is shown below).
  • 3,017 citations
  • 375 highly influential citations
  • PDF available
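For reference, the sampler resamples one label at a time from its conditional distribution given the current values of all the others, which only requires local evaluation of the factored model:

    y_i^{(t+1)} \sim P\!\left(y_i \,\middle|\, y_{-i}^{(t)},\, x\right)

This makes it possible to add non-local consistency factors, for example encouraging repeated mentions of the same string to receive the same entity label across a document, that would make exact Viterbi inference intractable; annealing the sampler then yields an approximately most likely labeling.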
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
TLDR
We introduce a generalization of LSTMs to tree-structured network topologies and show that it represents sentence meaning better than a sequential LSTM (the Child-Sum update is shown below).
  • 2,057 citations
  • 338 highly influential citations
  • PDF available
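For reference, the Child-Sum Tree-LSTM introduced in the paper sums the hidden states of a node's children and gives each child its own forget gate (notation as in the paper):

    \tilde{h}_j = \sum_{k \in C(j)} h_k, \qquad
    f_{jk} = \sigma\!\left(W^{(f)} x_j + U^{(f)} h_k + b^{(f)}\right), \qquad
    c_j = i_j \odot u_j + \sum_{k \in C(j)} f_{jk} \odot c_k, \qquad
    h_j = o_j \odot \tanh(c_j)

where C(j) is the set of children of node j, and the input gate i_j, output gate o_j, and candidate u_j are computed from x_j and \tilde{h}_j as in a standard LSTM; the paper's N-ary variant instead keeps separate parameters for each child position.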