Two Truths and a Lie: Exploring Soft Moderation of COVID-19 Misinformation with Amazon Alexa

@inproceedings{Gover2021TwoTA,
  title={Two Truths and a Lie: Exploring Soft Moderation of COVID-19 Misinformation with Amazon Alexa},
  author={Donald Gover and Filipo Sharevski},
  booktitle={The 16th International Conference on Availability, Reliability and Security},
  year={2021}
}
  • Published 1 April 2021
In this paper, we analyzed the perceived accuracy of COVID-19 vaccine Tweets when they were spoken back by a third-party Amazon Alexa skill. We mimicked the soft moderation that Twitter applies to COVID-19 misinformation content in both forms of warning covers and warning tags to investigate whether the third-party skill could affect how and when users heed these warnings. The results from a 304-participant study suggest that the spoken back warning covers may not work as intended, even when… 
1 Citation

Misinformation warnings: Twitter’s soft moderation effects on COVID-19 vaccine belief echoes
TLDR
Surprisingly, it is found that the belief echoes are strong enough to preclude adult Twitter users from receiving the COVID-19 vaccine, regardless of their education level.

References

SHOWING 1-10 OF 52 REFERENCES
Real Solutions for Fake News? Measuring the Effectiveness of General Warnings and Fact-Check Tags in Reducing Belief in False Stories on Social Media
Social media has increasingly enabled “fake news” to circulate widely, most notably during the 2016 U.S. presidential campaign. These intentionally false or misleading stories threaten the democratic
Social Media COVID-19 Misinformation Interventions Viewed Positively, But Have Limited Impact
TLDR
It was found that most participants indicated a positive attitude towards interventions, particularly post-specific labels for misinformation, suggesting room for platforms to do more to stem the spread of COVID-19 misinformation.
Prior Exposure Increases Perceived Accuracy of Fake News
TLDR
It is shown that even a single exposure increases subsequent perceptions of accuracy, both within the same session and after a week, and that social media platforms help to incubate belief in blatantly false news stories and that tagging such stories as disputed is not an effective solution to this problem.
To tweet or not to tweet: covertly manipulating a Twitter debate on vaccines using malware-induced misperceptions
TLDR
An alternative tactic for covert social media interference by inducing misperceptions about genuine, non-trolling content from verified users is explored, which proposes a solution for countering the effect of the malware-induced misperception that could also be used against trolls and social bots on Twitter.
Contemporary Presidency: Going Public in an Era of Social Media: Tweets, Corrections, and Public Opinion
Presidents invariably use the bully pulpit to push a political agenda, but whether this leads to political success in advancing that agenda has long been the subject of debate. The increased reliance
Beyond Trolling: Malware-Induced Misperception Attacks on Polarized Facebook Discourse
TLDR
It is demonstrated that inducing misperception is an effective tactic to silence or provoke targeted users on Facebook to express their opinion on a polarizing political issue.
Adapting Social Spam Infrastructure for Political Censorship
TLDR
It is shown how Twitter's relevance-based search helped mitigate the attack's impact on users searching for information regarding the Russian election, demonstrating how malicious parties can adapt the services and techniques traditionally used by spammers to other forms of attack, including censorship.
When Corrections Fail: The Persistence of Political Misperceptions
An extensive literature addresses citizen ignorance, but very little research focuses on misperceptions. Can these false or unsubstantiated beliefs about politics be corrected? Previous studies have
Parlermonium: A Data-Driven UX Design Evaluation of the Parler Platform
TLDR
Because platforms like Parler are disruptive to the social media landscape, it is believed the evaluation uniquely uncovers the platform’s conduciveness to the spread of misinformation.
Belief Echoes: The Persistent Effects of Corrected Misinformation
Across three separate experiments, I find that exposure to negative political information continues to shape attitudes even after the information has been effectively discredited. I call these