Corpus ID: 222090978

A Survey on Explainability in Machine Reading Comprehension

Authors: Mokanarangan Thayaparan, Marco Valentino, André Freitas
This paper presents a systematic review of benchmarks and approaches for explainability in Machine Reading Comprehension (MRC). We describe how the representation and inference challenges have evolved and the steps taken to tackle them. We also present evaluation methodologies for assessing the performance of explainable systems. In addition, we identify persisting open research questions and highlight critical directions for future work.


Teach Me to Explain: A Review of Datasets for Explainable Natural Language Processing

This review identifies 61 datasets with three predominant classes of textual explanations (highlights, free-text, and structured), organizes the literature on annotating each type, identifies strengths and shortcomings of existing collection methodologies, and gives recommendations for collecting EXNLP datasets in the future.

Summarize-then-Answer: Generating Concise Explanations for Multi-hop Reading Comprehension

This work proposes generating a question-focused, abstractive summary of the input paragraphs and then feeding it to an RC system; the abstractive explainer can generate more compact explanations than an extractive one with limited supervision while maintaining sufficiency.

Explainable Inference Over Grounding-Abstract Chains for Science Questions

This paper frames question answering as a natural language abductive reasoning problem, constructing plausible explanations for each candidate answer and then selecting the candidate with the best explanation as the final answer by employing a linear programming formalism.

Do Natural Language Explanations Represent Valid Logical Arguments? Verifying Entailment in Explainable NLI Gold Standards

A systematic annotation methodology, named Explanation Entailment Verification (EEV), is proposed, to quantify the logical validity of human-annotated explanations, and confirms that the inferential properties of explanations are still poorly formalised and understood.

Active entailment encoding for explanation tree construction using parsimonious generation of hard negatives

This paper frames the construction of entailment trees as a sequence of active premise selection steps, i.e., for each intermediate node in an explanation tree, the expert needs to annotate positive and negative examples of premise facts from a large candidate list, and iteratively tunes pre-trained Transformer models with the resulting positive and tightly controlled negative samples.

Teach Me to Explain: A Review of Datasets for Explainable NLP

This review identifies three predominant classes of explanations (highlights, free-text, and structured), organizes the literature on annotating each type, points to what has been learned to date, and gives recommendations for collecting EXNLP datasets in the future.

Encoding Explanatory Knowledge for Zero-shot Science Question Answering

It is demonstrated that N-XKT is able to improve accuracy and generalization on science Question Answering (QA) and can be fine-tuned on a target QA dataset, enabling faster convergence and more accurate results.

ExplanationLP: Abductive Reasoning for Explainable Science Question Answering

A novel approach for answering and explaining multiple-choice science questions by reasoning over grounding and abstract inference chains; explanations are elicited by constructing a weighted graph of relevant facts for each candidate answer and extracting the facts that satisfy certain structural and semantic constraints.

What’s Wrong with Deep Learning for Meaning Understanding

This paper looks into the internal procedures adopted by current DNNs to cope with the enormous, otherwise intractable vocabulary size due to the presence of rare words, and the use of subwords or n-gram character sequences.

Unification-based Reconstruction of Multi-hop Explanations for Science Questions

A novel framework is presented for reconstructing multi-hop explanations in science Question Answering that integrates lexical relevance with the notion of unification power, estimated by analysing explanations for similar questions in a corpus of scientific explanations.

Machine Reading Comprehension: a Literature Review

This article summarizes recent advances in MRC, focusing mainly on two aspects (i.e., corpus and techniques); the specific characteristics of various MRC corpora are listed and compared.

A Framework for Evaluation of Machine Reading Comprehension Gold Standards

A unifying framework is proposed to systematically investigate, on the one hand, the linguistic features present, the reasoning and background knowledge required, and factual correctness, and on the other hand, the presence of lexical cues as a lower bound for the requirement of understanding.

A Survey on Neural Machine Reading Comprehension

This paper aims to present how to utilize neural networks to build a reader, introduces some classic models, and points out the defects of existing models and future research directions.

A Survey on Machine Reading Comprehension Systems

It is demonstrated that the focus of research has changed in recent years from answer extraction to answer generation, from single- to multi-document reading comprehension, and from learning from scratch to using pre-trained word vectors.

Looking Beyond the Surface: A Challenge Set for Reading Comprehension over Multiple Sentences

This dataset is the first to study multi-sentence inference at scale, with an open-ended set of question types that require reasoning skills; human solvers achieve an F1-score of 88.1%.

DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs

A new reading comprehension benchmark, DROP, which requires Discrete Reasoning Over the content of Paragraphs, is introduced, along with a new model that combines reading comprehension methods with simple numerical reasoning to achieve 51% F1.

What’s in an Explanation? Characterizing Knowledge and Inference Requirements for Elementary Science Exams

This work develops an explanation-based analysis of knowledge and inference requirements, which supports a fine-grained characterization of the challenges, and compares a retrieval and an inference solver on 212 questions.

R3: A Reading Comprehension Benchmark Requiring Reasoning Processes

This work introduces a formalism for reasoning over unstructured text, namely Text Reasoning Meaning Representation (TRMR), which consists of three phases and is expressive enough to characterize the reasoning process needed to answer reading comprehension questions.

R4C: A Benchmark for Evaluating RC Systems to Get the Right Answer for the Right Reason

This work creates and publicly releases the R4C dataset, the first quality-assured dataset consisting of 4.6k questions, each annotated with 3 reference derivations, and presents a reliable, crowdsourced framework for scalably annotating RC datasets with derivations.

Machine Reading Comprehension: The Role of Contextualized Language Models and Beyond

The survey concludes that 1) MRC boosts the progress from language processing to understanding; 2) the rapid improvement of MRC systems greatly benefits from the development of CLMs; and 3) the theme of MRC is gradually moving from shallow text matching to cognitive reasoning.