The COVID-19 Infodemic: Can the Crowd Judge Recent Misinformation Objectively?

@inproceedings{roitero2020covid,
  title={The COVID-19 Infodemic: Can the Crowd Judge Recent Misinformation Objectively?},
  author={Kevin Roitero and Michael Soprano and Beatrice Portelli and Damiano Spina and Vincenzo Della Mea and Giuseppe Serra and Stefano Mizzaro and Gianluca Demartini},
  booktitle={Proceedings of the 29th ACM International Conference on Information \& Knowledge Management},
  year={2020}
}
Misinformation is an ever-increasing problem that is difficult for the research community to solve and has a negative impact on society at large. Very recently, the problem has been addressed with a crowdsourcing-based approach to scale up labeling efforts: to assess the truthfulness of a statement, instead of relying on a few experts, a crowd of (non-expert) judges is exploited. We follow the same approach to study whether crowdsourcing is an effective and reliable method to assess…
Can the crowd judge truthfulness? A longitudinal study on recent misinformation about COVID-19
This work studies whether crowdsourcing is an effective and reliable method to assess truthfulness during a pandemic, targeting statements related to COVID-19, thus addressing (mis)information that is both related to a sensitive and personal issue and very recent as compared to when the judgment is done.
The Role of the Crowd in Countering Misinformation: A Case Study of the COVID-19 Infodemic
Insight is provided into how misinformation is organically countered in social platforms by some of their users and the role they play in amplifying professional fact checks, which could lead to the development of tools and mechanisms that can empower concerned citizens in combating misinformation.
The Many Dimensions of Truthfulness: Crowdsourcing Misinformation Assessments on a Multidimensional Scale
A comprehensive analysis of crowdsourced judgments shows that the crowdsourced assessments are reliable when compared to an expert-provided gold standard; the proposed dimensions of truthfulness capture independent pieces of information; and the crowdsourcing task can be easily learned by the workers.
SAMS: Human-in-the-loop Approach to Combat the Sharing of Digital Misinformation
The SAMS-HITL approach goes one step further than traditional human-in-the-loop models in that it helps raise awareness about digital misinformation by allowing users to become fact-checkers themselves.
FibVID: Comprehensive fake news diffusion dataset during the COVID-19 period
  • Jisu Kim, Ji A Aum, Sang Eun Lee, Yeonju Jang, Eunil Park, Daejin Choi
  • Medicine
  • Telematics and Informatics
  • 2021
FibVID (Fake news information-broadcasting dataset of COVID-19) is a valuable dataset that addresses COVID-19 and non-COVID news from three key angles, helping to uncover propagation patterns of news items and themes related to identifying their authenticity.
Human-in-the-loop Artificial Intelligence for Fighting Online Misinformation: Challenges and Opportunities
The rise of online misinformation is posing a threat to the functioning of democratic processes. The ability to algorithmically spread false information through online social networks together with…
Exploring the effect of social media and spatial characteristics during the COVID-19 pandemic in China
This paper investigates how the disease and information co-evolved in the population of China during the period when the disease was widely spread, i.e., from January 25th to March 24th, 2020, and finds that the disease is more geo-localized than the information.
Get Your Vitamin C! Robust Fact Verification with Contrastive Evidence
VitaminC is presented, a benchmark infused with challenging cases that require fact verification models to discern and adjust to slight factual changes, and it is shown that training using this design increases robustness, improving accuracy by 10% on adversarial fact verification and 6% on adversarial natural language inference (NLI).
Watch ’n’ Check: Towards a Social Media Monitoring Tool to Assist Fact-Checking Experts
We present an ongoing collaboration between computer science researchers and fact-checking experts in a broadcast corporation to develop Watch ’n’ Check, a social media monitoring tool that assists…
Special Issue on Human Powered AI Systems
Computational division of labor addresses the design and analysis of algorithms for division-of-labor problems and will be one of the key issues in the Future of Work. The problems deal with interactions…


Fighting COVID-19 misinformation on social media: Experimental evidence for a scalable accuracy nudge intervention
Evidence is presented that people share false claims about COVID-19 partly because they simply fail to think sufficiently about whether or not the content is accurate when deciding what to share on social media.
Crowdsourcing Truthfulness: The Impact of Judgment Scale and Assessor Bias
This work looks at how experts and non-experts assess the truthfulness of content, focusing on the effect of the adopted judgment scale and of assessors’ own bias on the judgments they perform.
Considering Assessor Agreement in IR Evaluation
This paper addresses the issue of agreement between relevance assessors, and the definition of an agreement-aware effectiveness metric that does not discard information about multiple judgments for the same document, as typically happens in a crowdsourcing setting.
Tweet, but verify: epistemic study of information verification on Twitter
  • A. Zubiaga, Heng Ji
  • Psychology, Computer Science
  • Social Network Analysis and Mining
  • 2014
This study surveys users on credibility perceptions regarding witness pictures posted on Twitter related to Hurricane Sandy, and unveils insight about tweet presentation, as well as features that users should look at when assessing the veracity of tweets in the context of fast-paced events.
The Impact of Task Abandonment in Crowdsourcing
This paper conducts an investigation of the phenomenon of task abandonment, the act of workers previewing or beginning a task and deciding not to complete it, and shows how task abandonment may have strong implications for the use of collected data.
Let's Agree to Disagree: Fixing Agreement Measures for Crowdsourcing
This paper identifies the main limits of the existing agreement measures in the crowdsourcing context, both by means of toy examples as well as with real-world crowdsourcing data, and proposes a novel agreement measure based on probabilistic parameter estimation which overcomes such limits.
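For context on what such agreement measures compute, the sketch below implements Fleiss’ kappa, one of the classical chance-corrected measures whose limits work of this kind examines (the paper’s own probabilistic measure is not reproduced here). The function name and the toy worker judgments are illustrative only:

```python
from collections import Counter

def fleiss_kappa(ratings):
    """Fleiss' kappa for a list of per-item rating lists.

    Each inner list holds the category labels assigned to one item;
    every item must be judged by the same number of raters."""
    n_items = len(ratings)
    n_raters = len(ratings[0])
    categories = sorted({label for item in ratings for label in item})

    # Per-item counts n_ij: how many raters chose category j for item i.
    counts = [Counter(item) for item in ratings]

    # Observed agreement: mean over items of pairwise rater agreement.
    p_obs = sum(
        (sum(c[cat] ** 2 for cat in categories) - n_raters)
        / (n_raters * (n_raters - 1))
        for c in counts
    ) / n_items

    # Chance agreement from the marginal category proportions.
    p_exp = sum(
        (sum(c[cat] for c in counts) / (n_items * n_raters)) ** 2
        for cat in categories
    )
    return (p_obs - p_exp) / (1 - p_exp)

# Toy crowdsourcing data: 3 statements, each labelled by 3 workers
# as 0 (false) or 1 (true).
judgments = [[0, 0, 0], [1, 1, 1], [0, 1, 1]]
print(fleiss_kappa(judgments))  # ≈ 0.55
```

Because the chance-agreement term depends only on the marginal label proportions, heavily skewed crowdsourcing data can yield low kappa despite high raw agreement, which is one of the toy-example pathologies such critiques highlight.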
All Those Wasted Hours: On Task Abandonment in Crowdsourcing
This paper conducts the first investigation into the phenomenon of task abandonment, the act of workers previewing or beginning a task and deciding not to complete it, and shows how task abandonment may have strong implications for the use of collected data (for example, for the evaluation of IR systems).
"Liar, Liar Pants on Fire": A New Benchmark Dataset for Fake News Detection
This paper presents LIAR, a new, publicly available dataset for fake news detection, and designs a novel hybrid convolutional neural network to integrate metadata with text to improve on a text-only deep learning model.
Statistical quality estimation for general crowdsourcing tasks
Experiments using several general crowdsourcing tasks show that the proposed unsupervised statistical quality estimation method outperforms popular vote aggregation methods, implying that it can deliver high-quality results at lower cost.
On Fine-Grained Relevance Scales
This work proposes and experimentally evaluates a bounded and fine-grained relevance scale having many of the advantages and dealing with some of the issues of Magnitude Estimation (ME), and shows that S100 maintains the flexibility of unbounded scales like ME in providing assessors with ample choice when judging document relevance.