A Survey on Explainability in Machine Reading Comprehension
@article{Thayaparan2020ASO,
  title   = {A Survey on Explainability in Machine Reading Comprehension},
  author  = {Mokanarangan Thayaparan and Marco Valentino and Andr{\'e} Freitas},
  journal = {ArXiv},
  year    = {2020},
  volume  = {abs/2010.00389}
}
This paper presents a systematic review of benchmarks and approaches for explainability in Machine Reading Comprehension (MRC). We describe how the representation and inference challenges have evolved and the steps taken to tackle them. We also present the evaluation methodologies used to assess the performance of explainable systems. In addition, we identify persisting open research questions and highlight critical directions for future work.
35 Citations
Teach Me to Explain: A Review of Datasets for Explainable Natural Language Processing
- Computer Science · NeurIPS Datasets and Benchmarks
- 2021
This review identifies 61 datasets with three predominant classes of textual explanations (highlights, free-text, and structured), organizes the literature on annotating each type, identifies strengths and shortcomings of existing collection methodologies, and gives recommendations for collecting EXNLP datasets in the future.
Summarize-then-Answer: Generating Concise Explanations for Multi-hop Reading Comprehension
- Computer Science · EMNLP
- 2021
This work proposes to generate a question-focused, abstractive summary of the input paragraphs and then feed it to an RC system; the approach can generate more compact explanations than an extractive explainer with limited supervision while maintaining sufficiency.
A Comprehensive Survey on Multi-hop Machine Reading Comprehension Approaches
- Computer Science, Business · ArXiv
- 2022
This study investigates recent advances in multi-hop MRC approaches based on 31 studies from 2018 to 2022 and presents a fine-grained, comprehensive comparison of the models and techniques.
Explainable Inference Over Grounding-Abstract Chains for Science Questions
- Computer Science · FINDINGS
- 2021
This paper frames question answering as a natural language abductive reasoning problem, constructing plausible explanations for each candidate answer and then selecting the candidate with the best explanation as the final answer by employing a linear programming formalism.
Do Natural Language Explanations Represent Valid Logical Arguments? Verifying Entailment in Explainable NLI Gold Standards
- Philosophy · IWCS
- 2021
A systematic annotation methodology, named Explanation Entailment Verification (EEV), is proposed to quantify the logical validity of human-annotated explanations; the analysis confirms that the inferential properties of explanations are still poorly formalised and understood.
Active entailment encoding for explanation tree construction using parsimonious generation of hard negatives
- Computer Science · ArXiv
- 2022
This paper frames the construction of entailment trees as a sequence of active premise selection steps, i.e., for each intermediate node in an explanation tree the expert annotates positive and negative examples of premise facts from a large candidate list, and pre-trained Transformer models are iteratively tuned with the resulting positive and tightly controlled negative samples.
Teach Me to Explain: A Review of Datasets for Explainable NLP
- Computer Science · ArXiv
- 2021
This review identifies three predominant classes of explanations (highlights, free-text, and structured), organizes the literature on annotating each type, points to what has been learned to date, and gives recommendations for collecting EXNLP datasets in the future.
Encoding Explanatory Knowledge for Zero-shot Science Question Answering
- Computer Science · IWCS
- 2021
It is demonstrated that N-XKT is able to improve accuracy and generalization on science Question Answering (QA) and can be fine-tuned on a target QA dataset, enabling faster convergence and more accurate results.
Rationalization for Explainable NLP: A Survey
- Computer Science · ArXiv
- 2023
This survey presents the methods, explainability evaluations, code, and datasets used across various NLP tasks that employ rationalization, and introduces a new subfield of Explainable AI (XAI), namely Rational AI (RAI), to advance the current state of rationalization.
ExplanationLP: Abductive Reasoning for Explainable Science Question Answering
- Computer Science · ArXiv
- 2020
This paper presents a novel approach for answering and explaining multiple-choice science questions by reasoning over grounding and abstract inference chains: it elicits explanations by constructing a weighted graph of relevant facts for each candidate answer and extracting the facts that satisfy certain structural and semantic constraints (a rough sketch of this kind of constrained fact selection follows this list).
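To make the constraint-based fact selection mentioned in the ExplanationLP entry above a little more concrete, here is a minimal, hypothetical sketch: candidate facts get relevance scores and an explanation is chosen by a small 0-1 linear program with a size constraint. The PuLP solver is used purely for illustration; the fact names, scores, size limit, and the overall formulation are assumptions, not the paper's actual model.

```python
# Minimal sketch (assumption-based): pick an explanation as a 0-1 linear program
# over scored candidate facts. Scores and the size limit are illustrative only.
import pulp

# Relevance weight of each retrieved fact for one candidate answer (assumed values).
fact_scores = {"F1": 0.9, "F2": 0.7, "F3": 0.4, "F4": 0.2}
max_facts = 2  # structural constraint: keep the explanation short

prob = pulp.LpProblem("explanation_selection", pulp.LpMaximize)
select = {f: pulp.LpVariable(f, cat="Binary") for f in fact_scores}

# Objective: maximise the total relevance of the selected facts.
prob += pulp.lpSum(fact_scores[f] * select[f] for f in fact_scores)

# Constraint: at most `max_facts` facts in the explanation.
prob += pulp.lpSum(select[f] for f in fact_scores) <= max_facts

prob.solve(pulp.PULP_CBC_CMD(msg=False))
explanation = [f for f in fact_scores if select[f].value() == 1]
print(explanation)  # expected: ['F1', 'F2']
```

In the actual systems surveyed, the objective and constraints also encode semantic relations between facts, the question, and the candidate answer; the sketch only shows the mechanical shape of solving such a selection problem with an off-the-shelf LP solver.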
References
Showing 1-10 of 118 references
Machine Reading Comprehension: a Literature Review
- Computer Science · ArXiv
- 2019
This article summarizes recent advances in MRC, mainly focusing on two aspects (i.e., corpora and techniques); the specific characteristics of various MRC corpora are listed and compared.
A Framework for Evaluation of Machine Reading Comprehension Gold Standards
- Computer Science · LREC
- 2020
A unifying framework is proposed to systematically investigate, on the one hand, the linguistic features present, the reasoning and background knowledge required, and factual correctness, and, on the other hand, the presence of lexical cues as a lower bound for the requirement of understanding.
A Survey on Neural Machine Reading Comprehension
- Computer Science · ArXiv
- 2019
This paper presents how to utilize neural networks to build a Reader, introduces some classic models, and points out the defects of existing models and future research directions.
A Survey on Machine Reading Comprehension Systems
- Computer Science · Natural Language Engineering
- 2022
It is demonstrated that the focus of research has changed in recent years from answer extraction to answer generation, from single- to multi-document reading comprehension, and from learning from scratch to using pre-trained word vectors.
Looking Beyond the Surface: A Challenge Set for Reading Comprehension over Multiple Sentences
- Computer Science · NAACL
- 2018
The dataset is the first to study multi-sentence inference at scale, with an open-ended set of question types that require reasoning skills; human solvers achieve an F1-score of 88.1%.
DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs
- Computer Science · NAACL
- 2019
A new reading comprehension benchmark, DROP, which requires Discrete Reasoning Over the content of Paragraphs, is introduced, along with a new model that combines reading comprehension methods with simple numerical reasoning to achieve 51% F1.
What’s in an Explanation? Characterizing Knowledge and Inference Requirements for Elementary Science Exams
- Computer Science · COLING
- 2016
This work develops an explanation-based analysis of knowledge and inference requirements, which supports a fine-grained characterization of the challenges, and compares a retrieval and an inference solver on 212 questions.
R3: A Reading Comprehension Benchmark Requiring Reasoning Processes
- Computer Science · ArXiv
- 2020
This work introduces a formalism for reasoning over unstructured text, namely the Text Reasoning Meaning Representation (TRMR), which consists of three phrases and is expressive enough to characterize the reasoning process required to answer reading comprehension questions.
R4C: A Benchmark for Evaluating RC Systems to Get the Right Answer for the Right Reason
- Computer Science · ACL
- 2020
This work creates and publicly releases R4C, the first quality-assured dataset of 4.6k questions, each annotated with three reference derivations, and presents a reliable, crowdsourced framework for scalably annotating RC datasets with derivations.
Machine Reading Comprehension: The Role of Contextualized Language Models and Beyond
- Computer Science · ArXiv
- 2020
The survey concludes that 1) MRC boosts the progress from language processing to understanding; 2) the rapid improvement of MRC systems greatly benefits from the development of CLMs; and 3) the theme of MRC is gradually moving from shallow text matching to cognitive reasoning.