Corpus ID: 237266824

Towards Explainable Fact Checking

@article{Augenstein2021TowardsEF,
  title={Towards Explainable Fact Checking},
  author={Isabelle Augenstein},
  journal={ArXiv},
  year={2021},
  volume={abs/2108.10274}
}
The past decade has seen a substantial rise in the amount of mis- and disinformation online, from targeted disinformation campaigns to influence politics, to the unintentional spreading of misinformation about public health. This development has spurred research in the area of automatic fact checking, from approaches to detect check-worthy claims and determine the stance of tweets towards claims, to methods to determine the veracity of claims given evidence documents. These automatic methods…

Searching for Structure in Unfalsifiable Claims

TLDR
A human-in-the-loop pipeline that uses a combination of machine and human kernels to discover the prevailing narratives is presented, and it is shown that this pipeline outperforms recent large transformer models and state-of-the-art unsupervised topic models.

RUB-DFL at CheckThat!-2022: Transformer Models and Linguistic Features for Identifying Relevant Claims

We describe our system for the CLEF 2022 CheckThat! Lab Task 1 Subtasks A, B, and C on check-worthiness estimation, verifiable factual claims detection, and harmful tweet detection in both English and

Fact Checking with Insufficient Evidence

TLDR
This work is the first to study what information FC models consider sufficient for fact checking, introducing a novel task and advancing it with three main contributions, and finds that models are least successful in detecting missing evidence when adverbial modifiers are omitted.

Diagnostics-Guided Explanation Generation

TLDR
This work shows how to directly optimise for Faithfulness and Confidence Indication when training a model to generate sentence-level explanations, which markedly improves explanation quality, agreement with human rationales, and downstream task performance on three complex reasoning tasks.