Publications
A large annotated corpus for learning natural language inference
TLDR
The Stanford Natural Language Inference (SNLI) corpus is introduced: a new, freely available collection of labeled sentence pairs written by humans performing a novel grounded task based on image captioning, which allows a neural network-based model to perform competitively on natural language inference benchmarks for the first time.
Position-aware Attention and Supervised Data Improve Slot Filling
TLDR
An effective new model is proposed that combines an LSTM sequence model with a form of entity position-aware attention better suited to relation extraction; the work also builds TACRED, a large supervised relation extraction dataset obtained via crowdsourcing and targeted toward TAC KBP relations.
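The position-aware attention mechanism named above can be sketched compactly. Below is a minimal PyTorch sketch, with hypothetical class and dimension names that are not taken from the paper's reference implementation: attention scores over the LSTM outputs are conditioned on embeddings of each token's relative distance to the subject and object entities.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PositionAwareAttention(nn.Module):
    """Minimal sketch (hypothetical names/dims): attention over LSTM
    outputs conditioned on subject/object relative-position embeddings."""
    def __init__(self, hidden_dim, pos_dim, attn_dim):
        super().__init__()
        self.w_h = nn.Linear(hidden_dim, attn_dim, bias=False)   # token state
        self.w_q = nn.Linear(hidden_dim, attn_dim, bias=False)   # sentence query
        self.w_p = nn.Linear(2 * pos_dim, attn_dim, bias=False)  # position features
        self.v = nn.Linear(attn_dim, 1, bias=False)              # scoring vector

    def forward(self, h, q, subj_pos_emb, obj_pos_emb):
        # h: (batch, seq, hidden) LSTM outputs; q: (batch, hidden) summary state
        # subj/obj_pos_emb: (batch, seq, pos_dim) relative-distance embeddings
        pos = torch.cat([subj_pos_emb, obj_pos_emb], dim=-1)
        scores = self.v(torch.tanh(
            self.w_h(h) + self.w_q(q).unsqueeze(1) + self.w_p(pos)
        )).squeeze(-1)                        # (batch, seq) attention logits
        alpha = F.softmax(scores, dim=-1)
        return torch.bmm(alpha.unsqueeze(1), h).squeeze(1)  # weighted sentence vector
```

The weighted sentence vector would then feed a classifier over the TAC KBP relation labels.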
Leveraging Linguistic Structure For Open Domain Information Extraction
TLDR
This work replaces the large pattern sets used by prior open information extraction systems with a few patterns for canonically structured sentences, shifting the focus to a classifier that learns to extract self-contained clauses from longer sentences and to determine the maximally specific arguments for each candidate triple.
A Simple Domain-Independent Probabilistic Approach to Generation
TLDR
A simple, robust generation system is presented that performs content selection and surface realization in a unified, domain-independent framework; deployed in three different domains, it obtains results comparable to state-of-the-art domain-specific systems in terms of both BLEU scores and human evaluation.
Combining Distant and Partial Supervision for Relation Extraction
TLDR
This work presents an approach for providing partial supervision to a distantly supervised relation extractor using a small number of carefully selected examples, and proposes a novel criterion to sample examples which are both uncertain and representative.
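One way to read the "uncertain and representative" criterion is as an active-learning-style ranking. The sketch below is a hypothetical scoring, not the paper's exact criterion: it combines predictive entropy (uncertainty) with cosine similarity to the feature centroid (representativeness).

```python
import numpy as np

def select_examples(probs, features, k):
    """Sketch (hypothetical scoring): rank examples by the product of
    predictive entropy and similarity to the dataset centroid."""
    # probs: (n, classes) model posteriors; features: (n, d) feature vectors
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)       # uncertainty
    centroid = features.mean(axis=0)
    sim = features @ centroid / (
        np.linalg.norm(features, axis=1) * np.linalg.norm(centroid) + 1e-12
    )                                                            # representativeness
    return np.argsort(-(entropy * sim))[:k]                      # top-k indices
```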
Bootstrapped Self Training for Knowledge Base Population
TLDR
This work proposes bootstrapped self-training to capture the benefits of both systems, the precision of patterns and the generalizability of trained models, and shows that training on the output of patterns drastically improves performance over the patterns alone.
Evaluating Word Embeddings Using a Representative Suite of Practical Tasks
TLDR
This work proposes evaluating word embeddings in vivo: scoring them on a suite of popular downstream tasks using simple models with few tuned hyperparameters.
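A minimal sketch of this in-vivo protocol, with hypothetical data and function names: represent each text as the average of its word vectors, then score a simple, lightly tuned classifier on a downstream labeling task.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def evaluate_embeddings(embeddings, texts, labels):
    """Sketch (hypothetical names): score word vectors by downstream
    task accuracy with a simple model, not by intrinsic similarity."""
    dim = len(next(iter(embeddings.values())))
    X = np.array([
        np.mean([embeddings[w] for w in t.split() if w in embeddings]
                or [np.zeros(dim)], axis=0)   # average word vectors per text
        for t in texts
    ])
    clf = LogisticRegression(max_iter=1000)   # simple model, few hyperparameters
    return cross_val_score(clf, X, labels, cv=5).mean()
```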
NaturalLI: Natural Logic Inference for Common Sense Reasoning
TLDR
This work proposes NaturalLI, a Natural Logic inference system for inferring common-sense facts (for instance, that cats have tails or that tomatoes are round) from a very large database of known facts, and shows that it is able to capture strict Natural Logic inferences on the FraCaS test suite.
Combining Natural Logic and Shallow Reasoning for Question Answering
TLDR
This work extends the breadth of inferences afforded by natural logic to include relational entailment and meronymy, and trains an evaluation function, akin to game playing, to evaluate the expected truth of candidate premises on the fly.
Stanford's 2014 Slot Filling Systems
TLDR
Stanford's entry in the TAC KBP 2014 Slot Filling challenge is described, and the impact of learned and hard-coded patterns on slot filling performance is evaluated.
...