
Assisting the Human Fact-Checkers: Detecting All Previously Fact-Checked Claims in a Document

@article{Shaar2021AssistingTH,
  title={Assisting the Human Fact-Checkers: Detecting All Previously Fact-Checked Claims in a Document},
  author={Shaden Shaar and Firoj Alam and Giovanni Da San Martino and Preslav Nakov},
  journal={ArXiv},
  year={2021},
  volume={abs/2109.07410}
}
Given the recent proliferation of false claims online, there has been a lot of manual fact-checking effort. As this is very time-consuming, human fact-checkers can benefit from tools that can support them and make them more efficient. Here, we focus on building a system that could provide such support. Given an input document, it aims to detect all sentences that contain a claim that can be verified against previously fact-checked claims (from a given database). The output is a reranked list of…
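The abstract outlines a retrieve-then-rerank setup: for each sentence of the input document, fetch candidate matches from a database of previously fact-checked claims, then rerank the document's sentences by how well their best candidate matches. The sketch below illustrates that pipeline with a toy TF-IDF retriever; it is a minimal illustration under stated assumptions, not the paper's actual system, and the function names, scoring scheme, and database format are all invented for the example.

import math
import re
from collections import Counter

def tokenize(text):
    # Lowercase word tokenizer (illustrative; the paper's preprocessing may differ).
    return re.findall(r"[a-z0-9']+", text.lower())

class TfIdfRetriever:
    # Toy TF-IDF retriever over a database of previously fact-checked claims.
    def __init__(self, claims):
        self.claims = claims
        self.docs = [Counter(tokenize(c)) for c in claims]
        n = len(claims)
        df = Counter()
        for doc in self.docs:
            df.update(doc.keys())
        self.idf = {t: math.log(n / df[t]) + 1.0 for t in df}

    def _weights(self, counts):
        # TF-IDF weights for one bag of words.
        return {t: tf * self.idf.get(t, 0.0) for t, tf in counts.items()}

    def score(self, query):
        # Cosine similarity between the query and every claim in the database.
        q = self._weights(Counter(tokenize(query)))
        q_norm = math.sqrt(sum(w * w for w in q.values())) or 1.0
        results = []
        for i, doc in enumerate(self.docs):
            d = self._weights(doc)
            dot = sum(q.get(t, 0.0) * w for t, w in d.items())
            d_norm = math.sqrt(sum(w * w for w in d.values())) or 1.0
            results.append((dot / (q_norm * d_norm), i))
        return sorted(results, reverse=True)

def rank_document_sentences(sentences, retriever, top_k=3):
    # For each sentence, keep its best-matching fact-checked claims, then
    # rerank the sentences by their best match, mirroring the task output:
    # a ranked list of sentences with candidate verifying claims.
    ranked = []
    for sent in sentences:
        matches = retriever.score(sent)[:top_k]
        best = matches[0][0] if matches else 0.0
        ranked.append((best, sent, [retriever.claims[i] for _, i in matches]))
    return sorted(ranked, key=lambda r: r[0], reverse=True)

if __name__ == "__main__":
    database = [
        "The earth is flat.",
        "Vaccines cause autism.",
        "The moon landing was staged.",
    ]
    document = [
        "The weather was pleasant yesterday.",
        "He repeated the claim that vaccines cause autism in children.",
    ]
    retriever = TfIdfRetriever(database)
    for score, sent, claims in rank_document_sentences(document, retriever, top_k=1):
        print(f"{score:.3f}  {sent}  ->  {claims[0]}")

Running the demo prints the document's sentences in decreasing order of their best cosine match, so the vaccine sentence surfaces first along with its matching database claim; a real system would swap the TF-IDF scorer for a stronger retriever and a learned reranker.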


References

Showing 1–10 of 33 references.
Towards Automated Factchecking: Developing an Annotation Schema and Benchmark for Consistent Automated Claim Detection
Develops an annotation schema and a benchmark for automated claim detection that are more consistent across time, topics, and annotators than previous approaches, and uses them to crowdsource the annotation of a dataset of sentences from UK political TV shows.
An End-to-End Multi-task Learning Model for Fact Checking
Presents an end-to-end multi-task learning with bi-direction attention (EMBA) model that classifies a claim as "supports", "refutes", or "not enough info" with respect to the retrieved pages, while detecting evidence sentences at the same time.
Automated Fact Checking in the News Room
Presents an automated fact-checking platform that, given a claim, retrieves relevant textual evidence from a document collection, predicts whether each piece of evidence supports or refutes the claim, and returns a final verdict.
Fact Checking: Task definition and dataset construction
Introduces the task of fact checking and details the construction of a publicly available dataset of statements fact-checked by journalists online, including baseline approaches for the task and the challenges that need to be addressed.
Automated Fact Checking: Task Formulations, Methods and Future Directions
Surveys automated fact-checking research from natural language processing and related disciplines, unifying the task formulations and methodologies across papers and authors, and highlights the use of evidence as an important distinguishing factor that cuts across task formulations and methods.
Automated Fact-Checking for Assisting Human Fact-Checkers
Surveys the available intelligent technologies that can support the human expert in the different steps of the fact-checking endeavor, including identifying claims worth fact-checking; detecting relevant previously fact-checked claims; retrieving relevant evidence to fact-check a claim; and actually verifying a claim.
CheckThat! at CLEF 2019: Automatic Identification and Verification of Claims
We introduce the second edition of the CheckThat! Lab, part of the 2019 Cross-Language Evaluation Forum (CLEF). CheckThat! proposes two complementary tasks. Task 1: predict which claims in a…
Improving Large-Scale Fact-Checking using Decomposable Attention Models and Lexical Tagging
Proposes a neural ranker using a decomposable attention model that dynamically selects sentences, improving evidence retrieval F1 by 38.80% with a 65× speedup compared to a TF-IDF method.
The CLEF-2021 CheckThat! Lab on Detecting Check-Worthy Claims, Previously Fact-Checked Claims, and Fake News
We describe the fourth edition of the CheckThat! Lab, part of the 2021 Cross-Language Evaluation Forum (CLEF). The lab evaluates technology supporting various tasks related to factuality, and it is…
CheckThat! at CLEF 2020: Enabling the Automatic Identification and Verification of Claims in Social Media
We describe the third edition of the CheckThat! Lab, which is part of the 2020 Cross-Language Evaluation Forum (CLEF). CheckThat! proposes four complementary tasks and a related task from previous…