Corpus ID: 15695877

WIEN: Wordwise Inference and Entailment Now, or: How We Taught Machines to Recognize Natural Language Inference

Chris Billovits, Mihail Eric, Christine Guthrie
The problem of inferring textual entailment relations is a fundamental challenge in natural language understanding. Building systems with the ability to recognize entailment relationships across sentences is a crucial step in achieving complete machine-level semantic understanding. We propose a multi-label classification model, implementing a random forest classifier with a carefully engineered and selected collection of linguistic and semantic features, to tackle this problem. Our system… 
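The approach the abstract outlines — hand-engineered lexical features fed to a random forest over the three standard NLI labels — can be sketched roughly as follows. The specific features, toy sentence pairs, and hyperparameters here are illustrative assumptions, not the authors' actual feature set or data:

```python
# Illustrative sketch (not the paper's implementation): a feature-based
# NLI classifier pairing simple engineered features with a random forest.
from sklearn.ensemble import RandomForestClassifier

def features(premise: str, hypothesis: str) -> list:
    """Toy lexical features for a premise/hypothesis pair (assumed, for illustration)."""
    p, h = set(premise.lower().split()), set(hypothesis.lower().split())
    overlap = len(p & h) / max(len(h), 1)             # fraction of hypothesis words covered
    len_diff = len(p) - len(h)                        # token-count difference
    neg_mismatch = int(("not" in p) != ("not" in h))  # crude negation-mismatch flag
    return [overlap, len_diff, neg_mismatch]

# Tiny hand-labeled training set over the three standard NLI labels.
pairs = [
    ("a man is sleeping", "a man is sleeping", "entailment"),
    ("a dog runs in the park", "a dog runs", "entailment"),
    ("the cat is black", "the cat is not black", "contradiction"),
    ("she is singing", "she is not singing", "contradiction"),
    ("a boy plays soccer", "a girl reads a book", "neutral"),
    ("the sun is bright", "people like music", "neutral"),
]
X = [features(p, h) for p, h, _ in pairs]
y = [label for _, _, label in pairs]

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([features("a bird is flying", "a bird is not flying")]))
```

A real system of this kind would replace the toy features with the richer linguistic and semantic features the abstract alludes to (alignment scores, lexical-resource lookups, and so on), but the train-and-predict structure stays the same.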

A large annotated corpus for learning natural language inference
The Stanford Natural Language Inference corpus is introduced: a new, freely available collection of labeled sentence pairs written by humans performing a novel grounded task based on image captioning. The corpus allows a neural network-based model to perform competitively on natural language inference benchmarks for the first time.
A Phrase-Based Alignment Model for Natural Language Inference
The MANLI system is presented, a new NLI aligner designed to address the alignment problem, which uses a phrase-based alignment representation, exploits external lexical resources, and capitalizes on a new set of supervised training data.
SemEval-2014 Task 1: Evaluation of Compositional Distributional Semantic Models on Full Sentences through Semantic Relatedness and Textual Entailment
This paper presents the task on the evaluation of Compositional Distributional Semantic Models on full sentences, organized for the first time within SemEval-2014; the task attracted 21 teams, most of which participated in both subtasks.
A Latent Discriminative Model for Compositional Entailment Relation Recognition using Natural Logic
This paper proposes a latent discriminative model that unifies a statistical framework and a theory of Natural Logic to capture complex interactions between linguistic phenomena and suggests that alignments can be detrimental to performance if used in a manner that prevents the learning of globally optimal alignments.
Recursive Neural Networks for Learning Logical Semantics
This work evaluates whether each of two classes of neural model can correctly learn relationships such as entailment and contradiction between pairs of sentences, and finds that the plain RNN achieves only mixed results on all three experiments, whereas the stronger RNTN model generalizes well in every setting and appears capable of learning suitable representations for natural language logical inference.
A Survey of Paraphrasing and Textual Entailment Methods
Key ideas from the two areas of paraphrasing and textual entailment are summarized by considering in turn recognition, generation, and extraction methods, also pointing to prominent articles and resources.
The PASCAL Recognising Textual Entailment Challenge
This paper presents the Third PASCAL Recognising Textual Entailment Challenge (RTE-3), providing an overview of the dataset creation methodology and the submitted systems.
GloVe: Global Vectors for Word Representation
A new global log-bilinear regression model that combines the advantages of the two major model families in the literature, global matrix factorization and local context window methods, and produces a vector space with meaningful substructure.
The Berkeley FrameNet Project
This report will present the project's goals and workflow, and information about the computational tools that have been adapted or created in-house for this work.
Illinois-LH: A Denotational and Distributional Approach to Semantics
This paper describes and analyzes our SemEval 2014 Task 1 system. Its features are based on distributional and denotational similarities, word alignment, negation, and lexical relations such as hypernymy/hyponymy and synonymy.