Corpus ID: 237298477

UPV at CheckThat! 2021: Mitigating Cultural Differences for Identifying Multilingual Check-worthy Claims

@article{BarisSchlicht2021UPVAC,
  title={UPV at CheckThat! 2021: Mitigating Cultural Differences for Identifying Multilingual Check-worthy Claims},
  author={I. Baris Schlicht and Angel Felipe Magnoss{\~a}o de Paula and Paolo Rosso},
  journal={ArXiv},
  year={2021},
  volume={abs/2109.09232}
}
Identifying check-worthy claims is often the first step of automated fact-checking systems. Tackling this task in a multilingual setting has been understudied. Encoding inputs with multilingual text representations is one approach to multilingual check-worthiness detection. However, this approach can suffer if cultural bias within the communities affects what is considered check-worthy. In this paper, we propose a language identification task as an auxiliary task to… 
3 Citations


The CLEF-2021 CheckThat! Lab on Detecting Check-Worthy Claims, Previously Fact-Checked Claims, and Fake News
We describe the fourth edition of the CheckThat! Lab, part of the 2021 Cross-Language Evaluation Forum (CLEF). The lab evaluates technology supporting various tasks related to factuality, and it is…
Overview of the CLEF-2021 CheckThat! Lab on Detecting Check-Worthy Claims, Previously Fact-Checked Claims, and Fake News
We describe the fourth edition of the CheckThat! Lab, part of the 2021 Conference and Labs of the Evaluation Forum (CLEF). The lab evaluates technology supporting tasks related to factuality, and…
Overview of the CLEF-2021 CheckThat! Lab Task 1 on Check-Worthiness Estimation in Tweets and Political Debates
We present an overview of Task 1 of the fourth edition of the CheckThat! Lab, part of the 2021 Conference and Labs of the Evaluation Forum (CLEF). The task asks to predict which posts in a Twitter…

References

Showing 1-10 of 36 references
Overview of the CLEF-2021 CheckThat! Lab on Detecting Check-Worthy Claims, Previously Fact-Checked Claims, and Fake News
We describe the fourth edition of the CheckThat! Lab, part of the 2021 Conference and Labs of the Evaluation Forum (CLEF). The lab evaluates technology supporting tasks related to factuality, and…
bigIR at CheckThat! 2020: Multilingual BERT for Ranking Arabic Tweets by Check-worthiness
TLDR: The bigIR group at Qatar University participated in the CheckThat! lab at CLEF, only in the Arabic Task 1, which focuses on detecting check-worthy tweets on a given topic, and submitted four runs using both traditional classification models and a pre-trained language model, multilingual BERT (mBERT).
Overview of CheckThat! 2020: Automatic Identification and Verification of Claims in Social Media
We present an overview of the third edition of the CheckThat! Lab at CLEF 2020. The lab featured five tasks in two different languages: English and Arabic. The first four tasks compose the full…
Overview of the CLEF-2019 CheckThat! Lab: Automatic Identification and Verification of Claims. Task 1: Check-Worthiness
We present an overview of the 2nd edition of the CheckThat! Lab, part of CLEF 2019, with focus on Task 1: Check-Worthiness in political debates. The task asks to predict which claims in a political…
Overview of the CLEF-2018 CheckThat! Lab on Automatic Identification and Verification of Political Claims. Task 1: Check-Worthiness
We present an overview of the CLEF-2018 CheckThat! Lab on Automatic Identification and Verification of Political Claims, with focus on Task 1: Check-Worthiness. The task asks to predict which claims…
A Context-Aware Approach for Detecting Worth-Checking Claims in Political Debates
TLDR: A new corpus of political debates is created, containing statements that have been fact-checked by nine reputable sources, and machine learning models are trained to predict which claims in a given document are most check-worthy and should be prioritized for fact-checking.
BanFakeNews: A Dataset for Detecting Fake News in Bangla
TLDR: An annotated dataset of ≈50K news articles is proposed that can be used for building automated fake-news detection systems for a low-resource language like Bangla, and a benchmark system with state-of-the-art NLP techniques is developed to identify Bangla fake news.
Detecting Check-worthy Factual Claims in Presidential Debates
TLDR: This work prepared a U.S. presidential debate dataset and built classification models to distinguish check-worthy factual claims from non-factual claims and unimportant factual claims, and identified the most effective features based on their impact on the models' accuracy.
Automated Fact Checking: Task Formulations, Methods and Future Directions
TLDR: This paper surveys automated fact-checking research stemming from natural language processing and related disciplines, unifying the task formulations and methodologies across papers and authors, and highlights the use of evidence as an important distinguishing factor among them, cutting across task formulations and methods.
On the Role of Images for Analyzing Claims in Social Media
TLDR: This paper investigates state-of-the-art models for images, text, and multimodal information on four different datasets across two languages to understand the role of images in the task of claim and conspiracy detection.