Corpus ID: 235391052

FEVEROUS: Fact Extraction and VERification Over Unstructured and Structured information

@article{Aly2021FEVEROUSFE,
  title={FEVEROUS: Fact Extraction and VERification Over Unstructured and Structured information},
  author={Rami Aly and Zhijiang Guo and M. Schlichtkrull and James Thorne and Andreas Vlachos and Christos Christodoulopoulos and Oana Cocarascu and Arpit Mittal},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.05707}
}
Fact verification has attracted a lot of attention in the machine learning and natural language processing communities, as it is one of the key methods for detecting misinformation. Existing large-scale benchmarks for this task have focused mostly on textual sources, i.e. unstructured information, and have thus ignored the wealth of information available in structured formats, such as tables. In this paper we introduce a novel dataset and benchmark, Fact Extraction and VERification Over Unstructured…
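To make the combined unstructured/structured evidence setting concrete, here is a minimal sketch of what a claim-verification instance over both sentence and table-cell evidence might look like. The field names, label string, and evidence-ID scheme below are illustrative assumptions for this sketch, not the dataset's exact schema.

```python
# Illustrative claim-verification instance mixing unstructured
# (sentence) and structured (table-cell) evidence.
# Field names and ID formats are assumptions, not the real schema.
instance = {
    "claim": "Berlin is the capital of Germany and has over 3 million inhabitants.",
    "label": "SUPPORTS",
    "evidence": [
        "Berlin_sentence_0",  # a sentence from the Berlin article
        "Berlin_cell_0_1_1",  # a cell from a table on the same page
    ],
}

def evidence_types(inst):
    """Split evidence IDs into sentence vs. table-cell references."""
    sentences = [e for e in inst["evidence"] if "_sentence_" in e]
    cells = [e for e in inst["evidence"] if "_cell_" in e]
    return sentences, cells

sents, cells = evidence_types(instance)
```

A verification system must retrieve and reason over both evidence types jointly, which is what distinguishes this benchmark from text-only predecessors such as FEVER.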
The Fact Extraction and VERification Over Unstructured and Structured information (FEVEROUS) Shared Task
The Fact Extraction and VERification Over Unstructured and Structured information (FEVEROUS) shared task asks participating systems to determine whether human-authored claims are Supported or…
FaBULOUS: Fact-checking Based on Understanding of Language Over Unstructured and Structured information
As part of the FEVEROUS shared task, we developed a robust and finely tuned architecture to handle the joint retrieval and entailment on text data as well as structured data like tables. We proposed…
Neural Re-rankers for Evidence Retrieval in the FEVEROUS Task
Computational fact-checking has gained a lot of traction in the machine learning and natural language processing communities. A plethora of solutions have been developed, but methods which leverage…
Graph Reasoning with Context-Aware Linearization for Interpretable Fact Extraction and Verification
This paper presents an end-to-end system for fact extraction and verification using textual and tabular evidence, the performance of which we demonstrate on the FEVEROUS dataset. We experiment with…
A Fact Checking and Verification System for FEVEROUS Using a Zero-Shot Learning Approach
In this paper, we propose a novel fact checking and verification system to check claims against Wikipedia content. Our system retrieves relevant Wikipedia pages using Anserini, uses BERT-large-cased…
Verdict Inference with Claim and Retrieved Elements Using RoBERTa
Automatic fact verification has attracted recent research attention with the increasing dissemination of disinformation on social media platforms. The FEVEROUS shared task introduces a benchmark for…
Automated Fact-Checking: A Survey
This paper reviews relevant research on automated fact-checking, covering both the claim detection and claim validation components.
DialFact: A Benchmark for Fact-Checking in Dialogue
DIALFACT, a testing benchmark of 22,245 annotated conversational claims paired with pieces of evidence from Wikipedia, is constructed, and a simple yet data-efficient solution is proposed to effectively improve fact-checking performance in dialogue.
Combining sentence and table evidence to predict veracity of factual claims using TaPaS and RoBERTa
This paper describes a method for retrieving evidence and predicting the veracity of factual claims, on the FEVEROUS dataset. The evidence consists of both sentences and table cells. The proposedExpand
TruthfulQA: Measuring How Models Mimic Human Falsehoods
It is suggested that scaling up models alone is less promising for improving truthfulness than fine-tuning using training objectives other than imitation of text from the web.

References

Showing 1-10 of 72 references
FEVER: a Large-scale Dataset for Fact Extraction and VERification
This paper introduces a new publicly available dataset for verification against textual sources, FEVER, which consists of 185,445 claims generated by altering sentences extracted from Wikipedia and subsequently verified without knowledge of the sentence they were derived from.
The Fact Extraction and VERification Over Unstructured and Structured information (FEVEROUS) Shared Task
The Fact Extraction and VERification Over Unstructured and Structured information (FEVEROUS) shared task asks participating systems to determine whether human-authored claims are Supported or…
TabFact: A Large-scale Dataset for Table-based Fact Verification
A large-scale dataset is constructed with 16k Wikipedia tables as the evidence for 118k human-annotated natural language statements, each labeled as either ENTAILED or REFUTED, and two different models are designed: Table-BERT and the Latent Program Algorithm (LPA).
Joint Verification and Reranking for Open Fact Checking Over Tables
This paper investigates verification over structured data in the open-domain setting, introducing a joint reranking-and-verification model which fuses evidence documents in the verification component and achieves performance comparable to the closed-domain state of the art on the TabFact dataset.
Automated Fact Checking: Task Formulations, Methods and Future Directions
This paper surveys automated fact checking research stemming from natural language processing and related disciplines, unifying the task formulations and methodologies across papers and authors, and highlights the use of evidence as an important distinguishing factor among them, cutting across task formulations and methods.
INFOTABS: Inference on Tables as Semi-structured Data
In this paper, we observe that semi-structured tabulated text is ubiquitous; understanding it requires not only comprehending the meaning of text fragments, but also implicit relationships between…
HoVer: A Dataset for Many-Hop Fact Extraction And Claim Verification
It is shown that the performance of an existing state-of-the-art semantic-matching model degrades significantly on this dataset as the number of reasoning hops increases, demonstrating the necessity of many-hop reasoning to achieve strong results.
Where is Your Evidence: Improving Fact-checking by Justification Modeling
The LIAR dataset is extended by automatically extracting the justification from the fact-checking article used by humans to label a given claim, and it is shown that modeling the extracted justification in conjunction with the claim provides a significant improvement regardless of the machine learning model used.
Integrating Stance Detection and Fact Checking in a Unified Corpus
This paper captures the interdependencies between fact checking, document retrieval, source credibility, stance detection, and rationale extraction as annotations in the same corpus, and implements this setup on an Arabic fact checking corpus, the first of its kind.
A Richly Annotated Corpus for Different Tasks in Automated Fact-Checking
A new, substantially sized mixed-domain corpus is presented, with good-quality annotations for the core fact-checking tasks: document retrieval, evidence extraction, stance detection, and claim validation.