FaVIQ: FAct Verification from Information-seeking Questions

@inproceedings{Park2022FaVIQFV,
  title={FaVIQ: FAct Verification from Information-seeking Questions},
  author={Jungsoo Park and Sewon Min and Jaewoo Kang and Luke Zettlemoyer and Hannaneh Hajishirzi},
  booktitle={ACL},
  year={2022}
}
Despite significant interest in developing general-purpose fact checking models, it is challenging to construct a large-scale fact verification dataset with realistic real-world claims. Existing claims are either authored by crowdworkers, thereby introducing subtle biases that are difficult to control for, or manually verified by professional fact checkers, making them expensive and limited in scale. In this paper, we construct a large-scale challenging fact verification dataset called…

Citations

CREAK: A Dataset for Commonsense Reasoning over Entity Knowledge

TLDR
This work introduces CREAK, a testbed for commonsense reasoning about entity knowledge, bridging fact-checking about entities (Harry Potter is a wizard and is skilled at riding a broomstick) with commonsense inferences (if you’re good at a skill you can teach others how to do it).

Stretching Sentence-pair NLI Models to Reason over Long Documents and Clusters

TLDR
This work further explores the direct zero-shot applicability of NLI models to real applications, beyond the sentence-pair setting they were trained on, and develops new aggregation methods to allow operating over full documents, reaching state-of-the-art performance on the ContractNLI dataset.
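
As an illustration of the zero-shot NLI approach summarized above, the following is a minimal sketch, not the cited paper's method: it scores a single evidence/claim pair with the publicly available roberta-large-mnli checkpoint, treating entailment as "supported" and contradiction as "refuted". The example pair is hypothetical (the evidence sentence paraphrases the FEVER statistics quoted below).

# Minimal sketch (assumption: not the cited paper's method) of zero-shot
# claim verification with an off-the-shelf sentence-pair NLI model.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "roberta-large-mnli"  # public NLI checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Hypothetical evidence/claim pair for illustration only.
evidence = "FEVER consists of 185,445 claims generated by altering sentences extracted from Wikipedia."
claim = "FEVER contains more than 100,000 claims."

# NLI convention: the evidence is the premise, the claim is the hypothesis.
inputs = tokenizer(evidence, claim, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
probs = logits.softmax(dim=-1).squeeze()

# ENTAILMENT ~ supported, CONTRADICTION ~ refuted, NEUTRAL ~ not enough info.
for i, p in enumerate(probs.tolist()):
    print(f"{model.config.id2label[i]}: {p:.3f}")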

Evidentiality-guided Generation for Knowledge-Intensive NLP Tasks

TLDR
This work introduces a method to incorporate evidentiality of passages, i.e., whether a passage contains correct evidence to support the output, into training the generator, and introduces a multi-task learning framework to jointly generate the final output and predict the evidentiality of each passage.

Claim-Dissector: An Interpretable Fact-Checking System with Joint Re-ranking and Veracity Prediction

We present Claim-Dissector: a novel latent variable model for fact-checking and fact-analysis, which given a claim and a set of retrieved provenances allows learning jointly: (i) what are the…

References

Showing 1–10 of 47 references

Automated Fact-Checking of Claims from Wikipedia

Automated fact checking is becoming increasingly vital as both truthful and fallacious information accumulates online. Research on fact checking has benefited from large-scale datasets such as FEVER…

Evidence-based Verification for Real World Information Needs

TLDR
A novel claim verification dataset is presented, with instances derived from search-engine queries, yielding 10,987 claims annotated with evidence that represent real-world information needs and enable systems to use evidence extraction to summarize a rationale for an end-user while maintaining accuracy when predicting a claim's veracity.

FEVER: a Large-scale Dataset for Fact Extraction and VERification

TLDR
This paper introduces a new publicly available dataset for verification against textual sources, FEVER, which consists of 185,445 claims generated by altering sentences extracted from Wikipedia and subsequently verified without knowledge of the sentence they were derived from.

Zero-shot Fact Verification by Claim Generation

TLDR
QACG, a framework for training a robust fact verification model by using automatically generated claims that can be supported, refuted, or unverifiable given evidence from Wikipedia, is developed.

Fact Checking: Task definition and dataset construction

TLDR
The task of fact checking is introduced and the construction of a publicly available dataset using statements fact-checked by journalists and available online is detailed, including baseline approaches for the task and the challenges that need to be addressed.

Towards Debiasing Fact Verification Models

TLDR
It is shown that in the popular FEVER dataset claims can often be classified correctly without reasoning over the evidence, and a regularization method is introduced which alleviates the effect of bias in the training data, obtaining improvements on the newly created test set.

Evidence-based Factual Error Correction

TLDR
This paper demonstrates that it is feasible to train factual error correction systems from existing fact checking datasets which only contain labeled claims accompanied by evidence, but not the correction, and achieves better results than existing work which used a pointer copy network and gold evidence.

HoVer: A Dataset for Many-Hop Fact Extraction And Claim Verification

TLDR
It is shown that the performance of an existing state-of-the-art semantic-matching model degrades significantly on this dataset as the number of reasoning hops increases, hence demonstrating the necessity of many-hop reasoning to achieve strong results.

A Richly Annotated Corpus for Different Tasks in Automated Fact-Checking

TLDR
A new substantially sized mixed-domain corpus with annotations of good quality for the core fact-checking tasks: document retrieval, evidence extraction, stance detection, and claim validation is presented.

MultiFC: A Real-World Multi-Domain Dataset for Evidence-Based Fact Checking of Claims

TLDR
An in-depth analysis of the largest publicly available dataset of naturally occurring factual claims, collected from 26 fact checking websites in English, paired with textual sources and rich metadata, and labelled for veracity by human expert journalists is presented.