Visually grounded generation of entailments from premises

Somayeh Jafaritazehjani, Albert Gatt, Marc Tanti
Natural Language Inference (NLI) is the task of determining the semantic relationship between a premise and a hypothesis. In this paper, we focus on the generation of hypotheses from premises in a multimodal setting, i.e., generating a sentence (hypothesis) given an image and/or its description (premise) as input. The main goals of this paper are (a) to investigate whether it is reasonable to frame NLI as a generation task; and (b) to consider the degree to which grounding textual premises in…


Grounded Textual Entailment
This paper argues for a visually-grounded version of the Textual Entailment task, and asks whether models can perform better if, in addition to P and H, there is also an image (corresponding to the relevant “world” or “situation”).
Visual Entailment: A Novel Task for Fine-Grained Image Understanding
A new inference task, Visual Entailment (VE), is introduced, consisting of image-sentence pairs in which the premise is defined by an image rather than by a natural language sentence, as in traditional Textual Entailment tasks.
Annotation Artifacts in Natural Language Inference Data
It is shown that a simple text categorization model can correctly classify the hypothesis alone in about 67% of SNLI and 53% of MultiNLI, and that specific linguistic phenomena such as negation and vagueness are highly correlated with certain inference classes.
A large annotated corpus for learning natural language inference
The Stanford Natural Language Inference corpus is introduced, a new, freely available collection of labeled sentence pairs, written by humans doing a novel grounded task based on image captioning, which allows a neural network-based model to perform competitively on natural language inference benchmarks for the first time.
Generating Natural Language Inference Chains
A new task is proposed that measures how well a model can generate an entailed sentence from a source sentence: an LSTM with attention is trained on entailment pairs from the Stanford Natural Language Inference corpus, and the model is applied recursively to its own input-output pairs, thereby generating natural language inference chains.
Recognising Textual Entailment with Logical Inference
This work incorporates model building, a technique borrowed from automated reasoning, showing that it is a useful, robust method for approximating entailment, and uses machine learning to combine these deep semantic analysis techniques with simple shallow word overlap.
A Survey of Paraphrasing and Textual Entailment Methods
Key ideas from the two areas of paraphrasing and textual entailment are summarized by considering in turn recognition, generation, and extraction methods, also pointing to prominent articles and resources.
The Fourth PASCAL Recognizing Textual Entailment Challenge
The preparation of the dataset is described, and an overview of the results achieved by the participating systems is given.
Reasoning about Entailment with Neural Attention
This paper proposes a neural model that reads two sentences to determine entailment using long short-term memory units, extends this model with a word-by-word neural attention mechanism that encourages reasoning over entailments of pairs of words and phrases, and presents a qualitative analysis of the attention weights produced by this model.
A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference
The Multi-Genre Natural Language Inference corpus is introduced, a dataset designed for use in the development and evaluation of machine learning models for sentence understanding, which is shown to represent a substantially more difficult task than does the Stanford NLI corpus.