• Corpus ID: 202565852

Multilingual Multimodal Digital Deception Detection and Disinformation Spread across Social Platforms

Maria Glenski, Ellyn Ayton, Josh Mendoza, Svitlana Volkova
Our main contribution in this work is a set of novel results from multilingual models that go beyond typical applications of rumor or misinformation detection in English social news content, identifying fine-grained classes of digital deception across multiple languages (e.g., Russian and Spanish). In addition, we present models for multimodal deception detection from images and text and discuss the limitations of image-only and text-only models. Finally, we elaborate on ongoing work on measuring… 


A Survey on Multimodal Disinformation Detection

This paper offers a survey of the state of the art in multimodal disinformation detection, covering various combinations of modalities: text, images, speech, video, social media network structure, and temporal information.

Multi-modal Fake News Detection

This chapter presents a thorough survey of the recent approaches to detect multi-modal fake news spreading on various social media platforms and describes the proposed methods by categorizing them through a taxonomy.

Detection of Propaganda Techniques in Visuo-Lingual Metaphor in Memes

To detect propaganda in Internet memes, a multimodal deep learning fusion system is proposed that fuses the text and image feature representations and outperforms individual models based solely on either text or image modalities.

Detecting Propaganda Techniques in Memes

This work creates and releases a new corpus of 950 memes, carefully annotated with 22 propaganda techniques, which can appear in the text, in the image, or in both, and shows that understanding both modalities together is essential for detecting these techniques.

Proactive Discovery of Fake News Domains from Real-Time Social Media Feeds

An automatic discovery system that proactively surfaces fake news domains before they are flagged by humans; it can expedite the fact-checking process and serve as a powerful weapon in the toolbox to combat misinformation.

Multiple social platforms reveal actionable signals for software vulnerability awareness: A study of GitHub, Twitter and Reddit

This work is the first to evaluate and contrast how discussions about software vulnerabilities spread on three social platforms (Twitter, GitHub, and Reddit), finding that most discussions start on GitHub not only before Twitter and Reddit, but even before a vulnerability is officially published.

Propaganda Techniques Detection in Low-Resource Memes with Multi-Modal Prompt Tuning

A prompt-based multi-modal fine-tuning schema is designed to incorporate visual clues into the language model to detect the types of propaganda techniques used in memes, with a focus on both textual and image modalities.

For Whom the Tale’s Told: Towards a Multidimensional Model of Targeted Narrative Persuasion in Information Operations

This paper integrates interdisciplinary theoretical concepts to provide a foundation for a model of narrative persuasion to guide research on social media information operations and shows how cognitive and computational sciences can be blended in support of fundamental and applied research in information operations.

SemEval-2021 Task 6: Detection of Persuasion Techniques in Texts and Images

SemEval-2021 Task 6 on Detection of Persuasion Techniques in Texts and Images focused on memes and had three subtasks: detecting the techniques in the text, detecting the text spans where the techniques are used, and detecting the techniques in the entire meme.

Experiments in Open Domain Deception Detection

The findings show that while deception detection can be performed in short texts even in the absence of a predetermined domain, gender and age prediction in deceptive texts is a challenging task.

Misleading or Falsification: Inferring Deceptive Strategies and Types in Online News and Social Media

This study is the first to gain deeper insights into writers' intent behind digital misinformation by analyzing psycholinguistic signals: moral foundations and connotations extracted from different types of deceptive news ranging from strategic disinformation to propaganda and hoaxes.

Identifying and Understanding User Reactions to Deceptive and Trusted Social News Sources

A model is developed to classify user reactions into one of nine types, such as answer, elaboration, and question; it is shown that there are significant differences in the speed and the type of reactions between trusted and deceptive news sources on Twitter, but far smaller differences on Reddit.

How Humans Versus Bots React to Deceptive and Trusted News Sources: A Case Study of Active Users

This work identifies the differences in how social media accounts identified as bots react to news sources of varying credibility, regardless of the veracity of the content those sources have shared.

Propagation From Deceptive News Sources: Who Shares, How Much, How Evenly, and How Quickly?

This large-scale study of news in social media examines how evenly, how much, how quickly, and by which users content from various types of news sources propagates on Twitter, and identifies several key differences in propagation behavior between trusted and suspicious news sources.

Rumor has it: Identifying Misinformation in Microblogs

This paper addresses the problem of rumor detection in microblogs and explores the effectiveness of three categories of features for correctly identifying rumors: content-based, network-based, and microblog-specific memes; the authors believe theirs is the first large-scale dataset for rumor detection.

“Liar, Liar Pants on Fire”: A New Benchmark Dataset for Fake News Detection

This paper presents LIAR: a new, publicly available dataset for fake news detection, and designs a novel, hybrid convolutional neural network to integrate meta-data with text to improve a text-only deep learning model.

Separating Facts from Fiction: Linguistic Models to Classify Suspicious and Trusted News Posts on Twitter

This work builds predictive models to classify 130 thousand news posts as suspicious or verified, and to predict four sub-types of suspicious news (satire, hoaxes, clickbait, and propaganda), showing that neural network models trained on tweet content and social network interactions outperform lexical models.

Rumors, False Flags, and Digital Vigilantes: Misinformation on Twitter after the 2013 Boston Marathon Bombing

This exploratory research examines three rumors, later demonstrated to be false, that circulated on Twitter in the aftermath of the Boston Marathon bombings and suggests that corrections to the misinformation emerge but are muted compared with the propagation of the misinformation.

Truth of Varying Shades: Analyzing Language in Fake News and Political Fact-Checking

Experiments show that while media fact-checking remains an open research question, stylistic cues can help determine the truthfulness of text.