Automated Fact-Checking for Assisting Human Fact-Checkers

@inproceedings{Nakov2021AutomatedFF,
  title={Automated Fact-Checking for Assisting Human Fact-Checkers},
  author={Preslav Nakov and David Corney and Maram Hasanain and Firoj Alam and Tamer Elsayed and Alberto Barrón-Cedeño and Paolo Papotti and Shaden Shaar and Giovanni Da San Martino},
  booktitle={IJCAI},
  year={2021}
}
The reporting and the analysis of current events around the globe have expanded from professional, editor-led journalism all the way to citizen journalism. Nowadays, politicians and other key players enjoy direct access to their audiences through social media, bypassing the filters of official cables or traditional media. However, the multiple advantages of free speech and direct communication are dimmed by the misuse of media to spread inaccurate or misleading claims. These phenomena have led…


SciClops: Detecting and Contextualizing Scientific Claims for Assisting Manual Fact-Checking
TLDR
Extensive experiments show that SciClops effectively assists non-expert fact-checkers in the verification of complex scientific claims, outperforming commercial fact-checking systems.
Assisting the Human Fact-Checkers: Detecting All Previously Fact-Checked Claims in a Document
TLDR
This work creates a new manually annotated dataset for the claim retrieval task, and proposes suitable evaluation measures and demonstrates the importance of modeling text similarity and stance, while also taking into account the veracity of the retrieved previously fact-checked claims.
Fact-Checking Statistical Claims with Tables
TLDR
This work formalizes the problem by providing a general definition that is applicable to all systems and that is agnostic to their assumptions, and defines general dimensions to characterize different prominent systems in terms of assumptions and features.
Scalable Fact-checking with Human-in-the-Loop
TLDR
A new pipeline is proposed that groups similar messages and summarizes them into aggregated claims, showing the potential to speed up the fact-checking process by organizing and selecting representative claims from a massive volume of disorganized and redundant messages.
A Survey on Automated Fact-Checking
TLDR
This paper surveys automated fact-checking stemming from natural language processing, and presents an overview of existing datasets and models, aiming to unify the various definitions given and identify common concepts.
Explainable Fact-checking through Question Answering
TLDR
This work addresses fact-checking explainability through question answering, and proposes an answer comparison model with an attention mechanism attached to each question that can achieve state-of-the-art performance while providing reasonable explanation capabilities.
WhatTheWikiFact: Fact-Checking Claims Against Wikipedia
TLDR
WhatTheWikiFact, a system for automatic claim verification using Wikipedia, can predict the veracity of an input claim, and it further shows the evidence it has retrieved as part of the verification process.
The Case for Claim Difficulty Assessment in Automatic Fact Checking
TLDR
It is argued that prediction of claim difficulty is a missing component of today's automated fact checking architectures, and it is described how this difficulty prediction task might be split into a set of distinct subtasks.
FEVEROUS: Fact Extraction and VERification Over Unstructured and Structured information
TLDR
This paper introduces a novel dataset and benchmark, Fact Extraction and VERification Over Unstructured and Structured information (FEVEROUS), which consists of 87,026 verified claims and develops a baseline for verifying claims against text and tables which predicts both the correct evidence and verdict for 18% of the claims.
...

References

Showing 1-10 of 100 references
Automated Fact Checking in the News Room
TLDR
An automated fact checking platform which, given a claim, retrieves relevant textual evidence from a document collection, predicts whether each piece of evidence supports or refutes the claim, and returns a final verdict.
Generating Fact Checking Briefs
TLDR
This work investigates how to increase the accuracy and efficiency of fact checking by providing information about the claim before performing the check, in the form of natural language briefs, and develops QABriefer, a model that generates a set of questions conditioned on the claim, searches the web for evidence, and generates answers.
Toward Automated Fact-Checking: Detecting Check-worthy Factual Claims by ClaimBuster
TLDR
This paper introduces how ClaimBuster, a fact-checking platform, uses natural language processing and supervised learning to detect important factual claims in political discourses and explains the architecture and the components of the system and the evaluation of the model.
Automated Fact Checking: Task Formulations, Methods and Future Directions
TLDR
This paper surveys automated fact checking research stemming from natural language processing and related disciplines, unifying the task formulations and methodologies across papers and authors, and highlights the use of evidence as an important distinguishing factor among them cutting across task formulation and methods.
The Role of Context in Detecting Previously Fact-Checked Claims
TLDR
This work focuses on claims made in a political debate and studies the impact of modeling the context of the claim, finding that modeling the source-side context is most important, and can yield 10+ points of absolute improvement over a state-of-the-art model.
That is a Known Lie: Detecting Previously Fact-Checked Claims
TLDR
Learning-to-rank experiments are presented that demonstrate sizable improvements over state-of-the-art retrieval and textual similarity approaches on a task that has been largely ignored by the research community so far.
The Rise of Guardians: Fact-checking URL Recommendation to Combat Fake News
TLDR
It is found that the guardians usually took less than one day to reply to claims in online conversations and another day to spread verified information to hundreds of millions of followers, and the proposed recommendation model outperformed four state-of-the-art models by 11%-33%.
Fact Checking: Task definition and dataset construction
TLDR
The task of fact checking is introduced and the construction of a publicly available dataset using statements fact-checked by journalists available online is detailed, including baseline approaches for the task and the challenges that need to be addressed.
Towards Automated Factchecking: Developing an Annotation Schema and Benchmark for Consistent Automated Claim Detection
TLDR
An annotation schema and a benchmark for automated claim detection are developed that are more consistent across time, topics, and annotators than previous approaches, and are used to crowdsource the annotation of a dataset with sentences from UK political TV shows.
Explainable Automated Fact-Checking: A Survey
TLDR
This survey focuses on the explanation functionality, that is, fact-checking systems providing reasons for their predictions; it summarizes existing methods for explaining the predictions of fact-checking systems and explores trends in this topic.
...