Perverse Downstream Consequences of Debunking: Being Corrected by Another User for Posting False Political News Increases Subsequent Sharing of Low Quality, Partisan, and Toxic Content in a Twitter Field Experiment

@inproceedings{Mosleh2021PerverseDC,
  title={Perverse Downstream Consequences of Debunking: Being Corrected by Another User for Posting False Political News Increases Subsequent Sharing of Low Quality, Partisan, and Toxic Content in a Twitter Field Experiment},
  author={Mohsen Mosleh and Cameron Martel and Dean Eckles and David G. Rand},
  booktitle={Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems},
  year={2021}
}
A prominent approach to combating online misinformation is to debunk false content. Here we investigate downstream consequences of social corrections on users’ subsequent sharing of other content. Being corrected might make users more attentive to accuracy, thus improving their subsequent sharing. Alternatively, corrections might not improve subsequent sharing, or might even backfire, by making users feel defensive, or by shifting their attention away from accuracy (e.g., towards various social…


"This is Fake News": Characterizing the Spontaneous Debunking from Twitter Users to COVID-19 False Information

It is found that most fake tweets are left undebunked, that spontaneous debunking is slower than other forms of response, and that it exhibits partisanship on political topics.

The Impact of Twitter Labels on Misinformation Spread and User Engagement: Lessons from Trump’s Election Tweets

It is found that, overall, label placement did not change the propensity of users to share and engage with labeled content, but the falsity of content did, and the presence of textual overlap in labels did reduce user interactions, while stronger rebuttals reduced the toxicity in comments.

If You Have a Reliable Source, Say Something: Effects of Correction Comments on COVID-19 Misinformation

In the post-truth era, particularly during the COVID-19 pandemic, an effective correction on misinformation is necessary to promote personal and public health. To better understand the effect of…

Nudging Social Media toward Accuracy

A limited-attention utility model that is based on a theory about inattention to accuracy on social media and shows how a simple nudge or prompt that shifts attention to accuracy increases the quality of news that people share is described.

Birds of a feather don’t fact-check each other: Partisanship and the evaluation of news in Twitter’s Birdwatch crowdsourced fact-checking program

Contextual features – in particular, the partisanship of the users – are far more predictive of judgments than the content of the tweets and evaluations themselves.

Exploring Lightweight Interventions at Posting Time to Reduce the Sharing of Misinformation on Social Media

This work investigates the design of lightweight interventions that nudge users to assess the accuracy of information as they share it, and finds that both providing accuracy assessment and rationale reduce the sharing of false content.

Combating Inaccurate Information on Social Media Invited Talk-Extended Abstract

There has been a great deal of concern currently about negative societal impacts of social media and the potential threats social media poses to society and democracy [5, 2]. One main area of concern…

A Model of Online Misinformation

It is established that the impact of homophily on content virality is non-monotone: homophily reduces the broader circulation of an article, but it creates echo chambers that impose less discipline on the sharing of low-reliability content.

Adherence to Misinformation on Social Media Through Socio-Cognitive and Group-Based Processes

Previous work suggests that people's preference for different kinds of information depends on more than just accuracy. This could happen because the messages contained within different pieces of…

Help Me #DebunkThis: Unpacking Individual and Community's Collaborative Work in Information Credibility Assessment

  • Lu He, Changyang He
  • Proceedings of the ACM on Human-Computer Interaction
  • 2022
It is found that online information debunking rarely followed a linear and straightforward path, and community members, including the debunkers and the original posters, constantly negotiated and interacted with each other to determine what to debunk and how to debunk.

References

Showing 1-10 of 67 references

Political Fact-Checking on Twitter: When Do Corrections Have an Effect?

Research suggests that fact-checking corrections have only a limited impact on the spread of false rumors. However, research has not considered that fact-checking may be socially contingent, meaning…

You’re Definitely Wrong, Maybe: Correction Style Has Minimal Effect on Corrections of Misinformation Online

How can online communication most effectively respond to misinformation posted on social media? Recent studies examining the content of corrective messages provide mixed results: several studies…

Shifting attention to accuracy can reduce misinformation online.

It is found that the veracity of headlines has little effect on sharing intentions, despite having a large effect on judgments of accuracy, and that subtly shifting attention to accuracy increases the quality of news that people subsequently share.

Fighting COVID-19 misinformation on social media: Experimental evidence for a scalable accuracy nudge intervention

Evidence that people share false claims about COVID-19 partly because they simply fail to think sufficiently about whether or not content is accurate when deciding what to share is presented.

Effects of Credibility Indicators on Social Media News Sharing Intent

It is confirmed that credibility indicators can indeed decrease the propensity to share fake news; however, the impact of the indicators varied, with fact-checking services being the most effective.

Do the right thing: Tone may not affect correction of misinformation on social media

An experiment conducted with 610 participants suggests that corrections to misinformation – pointing out information that is wrong or misleading and offering credible information in its place – on…

Prior Exposure Increases Perceived Accuracy of Fake News

It is shown that even a single exposure increases subsequent perceptions of accuracy, both within the same session and after a week, and that social media platforms help to incubate belief in blatantly false news stories and that tagging such stories as disputed is not an effective solution to this problem.

Self-reported willingness to share political news articles in online surveys correlates with actual sharing on Twitter

It is found that the same news headlines that were more likely to be hypothetically shared on MTurk were also shared more frequently by Twitter users, r = .44, suggesting that self-reported sharing intentions collected in online surveys are likely to provide some meaningful insight into what content would actually be shared on social media.

Shared Partisanship Dramatically Increases Social Tie Formation in a Twitter Field Experiment

Americans are much more likely to be socially connected to copartisans, both in daily life and on social media. However, this observation does not necessarily mean that shared partisanship per se…

Less than you think: Prevalence and predictors of fake news dissemination on Facebook

It is found that sharing this content was a relatively rare activity, and that conservatives were more likely than liberals or moderates to share articles from fake news domains, which in 2016 were largely pro-Trump in orientation.
...