Corpus ID: 237513602

Assisting the Human Fact-Checkers: Detecting All Previously Fact-Checked Claims in a Document

by Shaden Shaar, Firoj Alam, Giovanni Da San Martino, and Preslav Nakov
Given the recent proliferation of false claims online, there has been a lot of manual fact-checking effort. As this is very time-consuming, human fact-checkers can benefit from tools that can support them and make them more efficient. Here, we focus on building a system that could provide such support. Given an input document, it aims to detect all sentences that contain a claim that can be verified against some previously fact-checked claims (from a given database). The output is a reranked list of… 
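The retrieval step the abstract describes can be illustrated with a minimal sketch: for each input sentence, score every claim in a database of previously fact-checked claims and return them ranked by similarity. The Jaccard token-overlap measure, the example claims, and the function names below are illustrative stand-ins, not the learned rankers used in the paper.

```python
import re

def tokens(text):
    """Lowercase and split a string into a set of alphanumeric tokens."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def rank_claims(sentence, database):
    """Return (score, claim) pairs sorted by Jaccard similarity to the sentence."""
    sent = tokens(sentence)
    scored = []
    for claim in database:
        c = tokens(claim)
        union = sent | c
        overlap = len(sent & c) / len(union) if union else 0.0
        scored.append((overlap, claim))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored

# Toy database of previously fact-checked claims (illustrative only).
database = [
    "Vaccines cause autism.",
    "The Earth is flat.",
    "5G towers spread the coronavirus.",
]
sentence = "He repeated the claim that 5G spreads the coronavirus."
ranked = rank_claims(sentence, database)
print(ranked[0][1])  # → 5G towers spread the coronavirus.
```

A real system would replace the overlap score with a trained retrieval-plus-reranking model, but the input/output contract — sentence in, ranked list of verified claims out — is the same.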

Towards Automated Factchecking: Developing an Annotation Schema and Benchmark for Consistent Automated Claim Detection
The authors develop an annotation schema and a benchmark for automated claim detection that is more consistent across time, topics, and annotators than previous approaches, and use it to crowdsource the annotation of a dataset of sentences from UK political TV shows.
An End-to-End Multi-task Learning Model for Fact Checking
This paper presents an end-to-end multi-task learning with bi-direction attention (EMBA) model to classify the claim as “supports”, “refutes” or “not enough info” with respect to the pages retrieved and detect sentences as evidence at the same time.
Automated Fact Checking in the News Room
An automated fact-checking platform which, given a claim, retrieves relevant textual evidence from a document collection, predicts whether each piece of evidence supports or refutes the claim, and returns a final verdict.
Fact Checking: Task definition and dataset construction
The task of fact checking is introduced and the construction of a publicly available dataset using statements fact-checked by journalists available online is detailed, including baseline approaches for the task and the challenges that need to be addressed.
Automated Fact Checking: Task Formulations, Methods and Future Directions
This paper surveys automated fact-checking research stemming from natural language processing and related disciplines, unifying the task formulations and methodologies across papers and authors, and highlights the use of evidence as an important distinguishing factor among them, cutting across task formulations and methods.
Automated Fact-Checking for Assisting Human Fact-Checkers
The available intelligent technologies that can support the human expert in the different steps of her fact-checking endeavor are surveyed, including identifying claims worth fact-checking; detecting relevant previously fact-checked claims; retrieving relevant evidence to fact-check a claim; and actually verifying a claim.
CheckThat! at CLEF 2019: Automatic Identification and Verification of Claims
We introduce the second edition of the CheckThat! Lab, part of the 2019 Cross-Language Evaluation Forum (CLEF). CheckThat! proposes two complementary tasks. Task 1: predict which claims in a
Improving Large-Scale Fact-Checking using Decomposable Attention Models and Lexical Tagging
A neural ranker using a decomposable attention model that dynamically selects sentences, achieving a promising 38.80% improvement in evidence-retrieval F1, with a 65× speedup over a TF-IDF method.
The CLEF-2021 CheckThat! Lab on Detecting Check-Worthy Claims, Previously Fact-Checked Claims, and Fake News
We describe the fourth edition of the CheckThat! Lab, part of the 2021 Cross-Language Evaluation Forum (CLEF). The lab evaluates technology supporting various tasks related to factuality, and it is
CheckThat! at CLEF 2020: Enabling the Automatic Identification and Verification of Claims in Social Media
We describe the third edition of the CheckThat! Lab, which is part of the 2020 Cross-Language Evaluation Forum (CLEF). CheckThat! proposes four complementary tasks and a related task from previous