Explainable Automated Fact-Checking: A Survey

@inproceedings{Kotonya2020ExplainableAF,
  title={Explainable Automated Fact-Checking: A Survey},
  author={Neema Kotonya and Francesca Toni},
  booktitle={COLING},
  year={2020}
}
A number of exciting advances have been made in automated fact-checking thanks to increasingly larger datasets and more powerful systems, leading to improvements in the complexity of claims which can be accurately fact-checked. However, despite these advances, there are still desirable functionalities missing from the fact-checking pipeline. In this survey, we focus on the explanation functionality – that is, fact-checking systems providing reasons for their predictions. We summarize existing…

Citations

A Survey on Automated Fact-Checking
Fact-checking has become increasingly important due to the speed with which both information and misinformation can spread in the modern media ecosystem. Therefore, researchers have been exploring…
Automated Fact-Checking for Assisting Human Fact-Checkers
The available intelligent technologies that can support the human expert in the different steps of her fact-checking endeavor are surveyed, including identifying claims worth fact-checking; detecting relevant previously fact-checked claims; retrieving relevant evidence to fact-check a claim; and actually verifying a claim.
ClaimHunter: An unattended tool for automated claim detection on Twitter
As political campaigns have moved from traditional media to social networks, fact-checkers must also adapt how they are working. The explosion of information (and disinformation) on social networks…
Teach Me to Explain: A Review of Datasets for Explainable Natural Language Processing
Explainable Natural Language Processing (EXNLP) has increasingly focused on collecting human-annotated textual explanations. These explanations are used downstream in three ways: as data…
A Survey on Predicting the Factuality and the Bias of News Media
This survey reviews the state of the art on media profiling for factuality and bias, arguing for the need to model them jointly, and discusses interesting recent advances in using different information sources and modalities, which go beyond the text of the articles the target news outlet has published.
Is Sparse Attention more Interpretable?
It is observed in this setting that inducing sparsity may make it less plausible that attention can be used as a tool for understanding model behavior.
A Survey on Multimodal Disinformation Detection
A survey that explores the state of the art on multimodal disinformation detection, covering various combinations of modalities: text, images, audio, video, network structure, and temporal information.
FEVEROUS: Fact Extraction and VERification Over Unstructured and Structured information
This paper introduces a novel dataset and benchmark, Fact Extraction and VERification Over Unstructured and Structured information (FEVEROUS), which consists of 87,026 verified claims, and develops a baseline for verifying claims against text and tables which predicts both the correct evidence and verdict for 18% of the claims.
Teach Me to Explain: A Review of Datasets for Explainable NLP
This review identifies three predominant classes of explanations (highlights, free-text, and structured), organizes the literature on annotating each type, points to what has been learned to date, and gives recommendations for collecting EXNLP datasets in the future.

References

Showing 1–10 of 76 references
Generating Fact Checking Explanations
This paper provides the first study of how justifications for verdicts on claims can be generated automatically based on available claim context, and how this task can be modelled jointly with veracity prediction.
Explainable Fact Checking with Probabilistic Answer Set Programming
A fact-checking method that uses reference information in knowledge graphs to assess claims and explain its decisions; experiments show that the probabilistic inference enables the efficient labeling of claims with interpretable explanations, and the quality of the results is higher than state-of-the-art baselines.
ExFaKT: A Framework for Explaining Facts over Knowledge Graphs and Text
This work introduces ExFaKT, a framework focused on generating human-comprehensible explanations for candidate facts, which effectively help humans to perform fact-checking and can also be exploited for automating this task.
Fact Checking: Task definition and dataset construction
The task of fact checking is introduced, and the construction of a publicly available dataset using statements fact-checked by journalists available online is detailed, including baseline approaches for the task and the challenges that need to be addressed.
Explainable Automated Fact-Checking for Public Health Claims
The results indicate that, by training on in-domain data, gains can be made in explainable, automated fact-checking for claims which require specific expertise.
A Richly Annotated Corpus for Different Tasks in Automated Fact-Checking
A new substantially sized mixed-domain corpus with annotations of good quality is presented for the core fact-checking tasks: document retrieval, evidence extraction, stance detection, and claim validation.
Toward Automated Fact-Checking: Detecting Check-worthy Factual Claims by ClaimBuster
This paper introduces how ClaimBuster, a fact-checking platform, uses natural language processing and supervised learning to detect important factual claims in political discourse, and describes the architecture and components of the system as well as the evaluation of the model.
TabFact: A Large-scale Dataset for Table-based Fact Verification
A large-scale dataset with 16k Wikipedia tables as the evidence for 118k human-annotated natural language statements, which are labeled as either ENTAILED or REFUTED, is constructed, and two different models are designed: Table-BERT and Latent Program Algorithm (LPA).
Understanding the Promise and Limits of Automated Fact-Checking (2018)
The last year has seen growing attention among journalists, policymakers, and technology companies to the problem of finding effective, large-scale responses to online misinformation. The furore over…
FEVER: a large-scale dataset for Fact Extraction and VERification
This paper introduces a new publicly available dataset for verification against textual sources, FEVER, which consists of 185,445 claims generated by altering sentences extracted from Wikipedia and subsequently verified without knowledge of the sentence they were derived from.