Corpus ID: 54040953

e-SNLI: Natural Language Inference with Natural Language Explanations

@inproceedings{Camburu2018eSNLINL,
  title={e-SNLI: Natural Language Inference with Natural Language Explanations},
  author={Oana-Maria Camburu and Tim Rockt{\"a}schel and Thomas Lukasiewicz and Phil Blunsom},
  booktitle={NeurIPS},
  year={2018}
}
In order for machine learning to garner widespread public adoption, models must be able to provide interpretable and robust explanations for their decisions, as well as learn from natural language explanations. [...] We show that our corpus of explanations can be used for various goals, such as obtaining full-sentence justifications of a model's decisions and providing consistent improvements on a range of tasks compared to universal sentence representations learned without explanations. Our dataset […]
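Each e-SNLI instance pairs an SNLI premise/hypothesis pair and its gold label with a free-form natural language explanation. A minimal sketch of such a record (field names and sample values here are illustrative, not the corpus's exact column names):

```python
# Illustrative e-SNLI-style record: an SNLI premise/hypothesis pair plus a
# crowd-sourced natural language explanation for the gold label.
example = {
    "premise": "A man inspects the uniform of a figure in some East Asian country.",
    "hypothesis": "The man is sleeping.",
    "label": "contradiction",  # one of: entailment, neutral, contradiction
    "explanation": "A man cannot inspect a uniform while he is sleeping.",
}

LABELS = {"entailment", "neutral", "contradiction"}

def is_valid(record: dict) -> bool:
    """Check that a record has all four fields and a recognized label."""
    required = {"premise", "hypothesis", "label", "explanation"}
    return required <= record.keys() and record["label"] in LABELS

print(is_valid(example))  # expected: True
```

The explanation field is what distinguishes e-SNLI from plain SNLI; models can be trained to consume it, generate it, or both.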

Citations

NILE: Natural Language Inference with Faithful Natural Language Explanations
This work proposes Natural-language Inference over Label-specific Explanations (NILE), a novel NLI method that uses auto-generated label-specific NL explanations to produce labels together with faithful explanations, and demonstrates NILE's effectiveness over previously reported methods through automated and human evaluation of the produced labels and explanations.
LIREx: Augmenting Language Inference with Relevant Explanation
Qualitative analysis shows that LIREx generates flexible, faithful, and relevant NLEs that allow the model to be more robust to spurious explanations, and achieves significantly better performance than previous studies when transferred to the out-of-domain MultiNLI dataset.
Generating Token-Level Explanations for Natural Language Inference
It is shown that it is possible to generate token-level explanations for NLI without the need for training data explicitly annotated for this purpose, using a simple LSTM architecture and evaluating both LIME and Anchor explanations for this task.
Explaining Text Matching on Neural Natural Language Inference
A novel method for training an explanation generator that does not require additional human labels is introduced with the objective of predicting how the model's classification output will change when parts of the inputs are modified.
Towards Explainable NLP: A Generative Explanation Framework for Text Classification
A novel generative explanation framework learns to make classification decisions and generate fine-grained explanations at the same time; it introduces an explainable factor and a minimum risk training approach to generate more reasonable explanations.
Does External Knowledge Help Explainable Natural Language Inference? Automatic Evaluation vs. Human Ratings
The largest and most fine-grained explainable NLI crowdsourcing study to date reveals that even large differences in automatic performance scores are not reflected in human ratings of label, explanation, commonsense, or grammar correctness.
Explain Yourself! Leveraging Language Models for Commonsense Reasoning
This work collects human explanations for commonsense reasoning, in the form of natural language sequences and highlighted annotations, in a new dataset called Common Sense Explanations, and uses it to train language models to automatically generate explanations that can be used during training and inference in a novel Commonsense Auto-Generated Explanation framework.
Rationale-Inspired Natural Language Explanations with Commonsense
This work introduces a unified framework, called REXC (Rationale-inspired Explanations with Commonsense), that extracts rationales as a set of features most responsible for the predictions, expands the extractive rationales using available commonsense resources, and uses the expanded knowledge to generate NLEs.
Learning to Annotate: Modularizing Data Augmentation for Text Classifiers with Natural Language Explanations
A novel Neural EXecution Tree (NEXT) framework augments training data for text classification using NL explanations; it generalizes the different types of actions specified by logical forms for labeling data instances, substantially increasing the coverage of each NL explanation.
e-ViL: A Dataset and Benchmark for Natural Language Explanations in Vision-Language Tasks
This work introduces e-ViL, a benchmark for explainable vision-language tasks that establishes a unified evaluation framework and provides the first comprehensive comparison of existing approaches that generate NLEs for VL tasks.

References

Showing 1-10 of 28 references
A large annotated corpus for learning natural language inference
The Stanford Natural Language Inference corpus is introduced, a new, freely available collection of labeled sentence pairs, written by humans doing a novel grounded task based on image captioning, which allows a neural network-based model to perform competitively on natural language inference benchmarks for the first time.
Annotation Artifacts in Natural Language Inference Data
It is shown that a simple text categorization model can correctly classify the hypothesis alone in about 67% of SNLI and 53% of MultiNLI, and that specific linguistic phenomena such as negation and vagueness are highly correlated with certain inference classes.
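The hypothesis-only finding above can be illustrated with a deliberately crude sketch (not the paper's actual classifier): a rule that merely flags negation words in the hypothesis already exploits the artifact that negation correlates with the contradiction label.

```python
# Toy hypothesis-only "baseline" illustrating the negation artifact:
# in SNLI, hypotheses containing negation words are disproportionately
# labeled "contradiction", so even this crude rule exploits the bias.
NEGATION_WORDS = {"no", "not", "never", "nobody", "nothing", "none"}

def hypothesis_only_guess(hypothesis: str) -> str:
    """Predict an NLI label from the hypothesis alone, ignoring the premise."""
    tokens = hypothesis.lower().split()
    if any(t in NEGATION_WORDS or t.endswith("n't") for t in tokens):
        return "contradiction"
    return "entailment"  # fallback for this toy rule

print(hypothesis_only_guess("The man is not sleeping."))  # contradiction
```

That such premise-blind heuristics beat chance is exactly why annotation artifacts inflate reported NLI accuracy.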
Supervised Learning of Universal Sentence Representations from Natural Language Inference Data
It is shown how universal sentence representations trained using the supervised data of the Stanford Natural Language Inference datasets can consistently outperform unsupervised methods like SkipThought vectors on a wide range of transfer tasks.
A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference
The Multi-Genre Natural Language Inference corpus is introduced, a dataset designed for use in the development and evaluation of machine learning models for sentence understanding, and shows that it represents a substantially more difficult task than does the Stanford NLI corpus.
Natural Language Inference over Interaction Space
DIIN, a novel class of neural network architectures that achieves high-level understanding of a sentence pair by hierarchically extracting semantic features from interaction space, shows that an interaction tensor (attention weight) contains semantic information to solve natural language inference.
WorldTree: A Corpus of Explanation Graphs for Elementary Science Questions supporting Multi-Hop Inference
A corpus of explanations for standardized science exams, a recent challenge task for question answering, is presented, along with an explanation-centered tablestore: a collection of semi-structured tables that contain the knowledge to construct these elementary science explanations.
Evaluating Compositionality in Sentence Embeddings
This work presents a new set of NLI sentence pairs that cannot be solved using only word-level knowledge and instead require some degree of compositionality, and finds that augmenting the training data with this new dataset improves performance on a held-out test set without loss of performance on the SNLI test set.
Enhancing and Combining Sequential and Tree LSTM for Natural Language Inference
This paper presents a new state-of-the-art result, achieving an accuracy of 88.3% on the standard benchmark, the Stanford Natural Language Inference dataset, through an enhanced sequential encoding model that outperforms the previous best model, which employs more complicated network architectures.
Reasoning about Entailment with Neural Attention
This paper proposes a neural model that reads two sentences to determine entailment using long short-term memory units, extends this model with a word-by-word neural attention mechanism that encourages reasoning over entailments of pairs of words and phrases, and presents a qualitative analysis of attention weights produced by this model.
A Decomposable Attention Model for Natural Language Inference
We propose a simple neural architecture for natural language inference. Our approach uses attention to decompose the problem into subproblems that can be solved separately, thus making it trivially […]
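The attention step these abstracts keep returning to can be sketched in a few lines: score every premise token against every hypothesis token, then normalize each row of scores with a softmax so each premise token attends over the hypothesis. The 2-d vectors below are toy embeddings, purely illustrative:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def align(premise_vecs, hypothesis_vecs):
    """For each premise vector, return attention weights over hypothesis vectors.

    Scores are plain dot products; each row of the result sums to 1.
    """
    weights = []
    for p in premise_vecs:
        scores = [sum(pi * hi for pi, hi in zip(p, h)) for h in hypothesis_vecs]
        weights.append(softmax(scores))
    return weights

premise = [[1.0, 0.0], [0.0, 1.0]]      # two toy premise-token embeddings
hypothesis = [[1.0, 0.0], [0.5, 0.5]]   # two toy hypothesis-token embeddings
w = align(premise, hypothesis)           # 2x2 attention matrix
```

Because each subproblem (one premise token against the hypothesis) is independent, the computation parallelizes naturally, which is the efficiency argument these attention-based NLI models make.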