Neutral bots probe political bias on social media

@article{Chen2021NeutralBP,
  title={Neutral bots probe political bias on social media},
  author={Wen Chen and Diogo Pacheco and Kai-Cheng Yang and Filippo Menczer},
  journal={Nature Communications},
  year={2021},
  volume={12}
}
Social media platforms attempting to curb abuse and misinformation have been accused of political bias. We deploy neutral social bots who start following different news sources on Twitter, and track them to probe distinct biases emerging from platform mechanisms versus user interactions. We find no strong or consistent evidence of political bias in the news feed. Despite this, the news and information to which U.S. Twitter users are exposed depend strongly on the political leaning of their… 
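To make the study's setup concrete, below is a minimal sketch of a neutral "drifter"-style bot using the Tweepy library. The credentials, seed account, and activity schedule are placeholders, and this is an illustration of the idea rather than the authors' actual implementation.

```python
# Minimal sketch of a neutral "drifter" bot, assuming Tweepy and Twitter API v1.1 access.
# The seed account, credentials, and cadence below are illustrative placeholders only.
import random
import time

import tweepy

auth = tweepy.OAuth1UserHandler(
    "CONSUMER_KEY", "CONSUMER_SECRET", "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET"
)
api = tweepy.API(auth, wait_on_rate_limit=True)

SEED_ACCOUNT = "example_news_source"  # hypothetical seed; the study used real news outlets

def initialize_drifter():
    """Follow only the seed news source; everything else is left to platform mechanisms."""
    api.create_friendship(screen_name=SEED_ACCOUNT)

def act_neutrally():
    """Periodically retweet a random tweet from the home timeline, with no content preference."""
    timeline = api.home_timeline(count=50)
    if timeline:
        api.retweet(random.choice(timeline).id)

if __name__ == "__main__":
    initialize_drifter()
    while True:
        act_neutrally()
        time.sleep(6 * 60 * 60)  # a few actions per day; the real cadence is an assumption here
```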
A general framework to link theory and empirics in opinion formation models
TLDR
This work introduces a minimal opinion formation model that is flexible enough to reproduce a wide variety of existing micro-influence assumptions and models, and that generates an artificial society whose properties are quantitatively and qualitatively similar to those observed empirically at the macro scale.
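As an illustration of the kind of micro-influence rule such a framework can subsume, here is a minimal bounded-confidence (Deffuant-style) opinion update in Python; this is a generic textbook rule used only for illustration, not the specific model introduced in that work.

```python
import random

def bounded_confidence_step(opinions, epsilon=0.2, mu=0.5):
    """One interaction: two random agents move closer only if their opinions differ by less than epsilon."""
    i, j = random.sample(range(len(opinions)), 2)
    if abs(opinions[i] - opinions[j]) < epsilon:
        shift = mu * (opinions[j] - opinions[i])
        opinions[i] += shift
        opinions[j] -= shift

# Example: 100 agents with uniform random opinions in [0, 1]
opinions = [random.random() for _ in range(100)]
for _ in range(10_000):
    bounded_confidence_step(opinions)
```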
Botometer 101: Social bot practicum for computational social scientists
TLDR
This paper provides an introductory tutorial on Botometer, a public tool for bot detection on Twitter, for readers who are new to this topic and may not be familiar with programming or machine learning.
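For readers who want to try the tool, the botometer Python package wraps the service. The sketch below follows the package's documented usage pattern as I understand it; the API keys and the example handle are placeholders.

```python
# Querying Botometer via the botometer Python package (pip install botometer).
# The RapidAPI key and Twitter app credentials below are placeholders.
import botometer

rapidapi_key = "YOUR_RAPIDAPI_KEY"
twitter_app_auth = {
    "consumer_key": "YOUR_CONSUMER_KEY",
    "consumer_secret": "YOUR_CONSUMER_SECRET",
}

bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key=rapidapi_key,
    **twitter_app_auth,
)

# Check a single account; the response includes bot scores broken down by bot type.
result = bom.check_account("@BotometerDev")
print(result["display_scores"]["universal"]["overall"])
```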
Differences in Behavioral Characteristics and Diffusion Mechanisms: A Comparative Analysis Based on Social Bots and Human Users
TLDR
Significant differences in behavioral characteristics and diffusion mechanisms were found between bot users and human users during public opinion dissemination. These findings can help direct public attention to topic shifts and promote the diffusion of positive emotions in social networks, which in turn supports the emergency management of crises and the maintenance of online order.
Engagement Outweighs Exposure to Partisan and Unreliable News within Google Search
TLDR
Using surveys paired with engagement and exposure data collected around the 2018 and 2020 US elections, it is found that strong Republicans engaged with more partisan and unreliable news than strong Democrats did, despite the two groups being exposed to similar amounts of partisan and unreliable news in their Google search results.
Political audience diversity and news reliability in algorithmic ranking
TLDR
It is shown that websites with more extreme and less politically diverse audiences have lower journalistic standards, and an improved algorithm is proposed that increases the trustworthiness of websites suggested to users, especially those who most frequently consume misinformation, while keeping recommendations relevant.
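The reranking idea can be illustrated by discounting a site's relevance when its audience is politically homogeneous. The scoring below is a hypothetical simplification for illustration, not the authors' actual algorithm; all names and data are invented.

```python
import math

def audience_diversity(partisan_shares):
    """Shannon entropy of the audience's partisan composition (higher = more diverse)."""
    return -sum(p * math.log(p) for p in partisan_shares if p > 0)

def rerank(candidates):
    """Re-order candidate websites by relevance discounted for low audience diversity.

    `candidates` is a list of dicts with keys 'url', 'relevance', and
    'audience_shares' (fractions of left/center/right visitors); purely illustrative.
    """
    def score(site):
        return site["relevance"] * audience_diversity(site["audience_shares"])
    return sorted(candidates, key=score, reverse=True)

sites = [
    {"url": "example-hyperpartisan.com", "relevance": 0.9, "audience_shares": [0.95, 0.04, 0.01]},
    {"url": "example-broadsheet.com", "relevance": 0.8, "audience_shares": [0.35, 0.30, 0.35]},
]
print([s["url"] for s in rerank(sites)])
```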
YouTube, The Great Radicalizer? Auditing and Mitigating Ideological Biases in YouTube Recommendations
TLDR
A systematic audit of YouTube's recommendation system finds that YouTube's recommendations do direct users, especially right-leaning users, to ideologically biased and increasingly radical content both on homepages and in up-next recommendations, but that this bias can be mitigated through an intervention.
The voice of few, the opinions of many: evidence of social biases in Twitter COVID-19 fake news sharing
Online platforms play a relevant role in the creation and diffusion of false or misleading news. Concerningly, the COVID-19 pandemic is shaping a communication network - barely considered in the

References

Showing 1-10 of 70 references
Asymmetrical perceptions of partisan political bots
TLDR
Investigating people's ability to differentiate bots with partisan personas from humans on Twitter reveals asymmetrical partisan-motivated reasoning: conservative bot profiles are harder to classify correctly, and Republican participants perform less well in the recognition task.
Right and left, partisanship predicts (asymmetric) vulnerability to misinformation
We analyze the relationship between partisanship, echo chambers, and vulnerability to online misinformation by studying news sharing behavior on Twitter. While our results confirm prior findings that
Shared partisanship dramatically increases social tie formation in a Twitter field experiment
TLDR
A field experiment on Twitter shows a strong causal effect of shared partisanship on the formation of social ties in an ecologically valid setting, with important implications for political psychology, social media, and the politically polarized state of the American public.
Auditing radicalization pathways on YouTube
TLDR
A large-scale audit of user radicalization on YouTube shows that the three channel types indeed increasingly share the same user base; that users consistently migrate from milder to more extreme content; and that a large percentage of users who now consume Alt-right content consumed Alt-lite and I.D.W. content in the past.
Exposure to Social Engagement Metrics Increases Vulnerability to Misinformation
TLDR
It is found that exposure to social engagement signals increases users' vulnerability to misinformation, and technology platforms are called on to rethink the display of social engagement metrics.
Scalable and Generalizable Social Bot Detection through Data Selection
TLDR
This paper proposes a framework that uses minimal account metadata, enabling efficient analysis that scales up to handle the full stream of public tweets on Twitter in real time, and finds that strategically selecting a subset of training data yields better model accuracy and generalization than exhaustively training on all available data.
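The general approach can be sketched as training a lightweight classifier on a handful of account-metadata features. The snippet below uses scikit-learn with placeholder data and is only an illustration of the idea, not the paper's actual pipeline or feature set.

```python
# Sketch: bot classification from minimal account metadata, assuming scikit-learn.
# The labeled data below is random placeholder; in practice each row would hold features
# such as follower count, friend count, status count, account age, and follower/friend ratio.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((1000, 5))          # placeholder metadata features
y = rng.integers(0, 2, size=1000)  # placeholder labels: 1 = bot, 0 = human

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```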
Shared Partisanship Dramatically Increases Social Tie Formation in a Twitter Field Experiment
Americans are much more likely to be socially connected to co-partisans, both in daily life and on social media. But this observation does not necessarily mean that shared partisanship per se drives
Arming the public with artificial intelligence to counter social bots
TLDR
The case study of Botometer, a popular bot detection tool developed at Indiana University, is used to illustrate how people interact with AI countermeasures and how future AI developments may affect the fight between malicious bots and the public.
Bot Electioneering Volume: Visualizing Social Bot Activity During Elections
TLDR
A web application is deployed to help the public explore the activities of likely bots on Twitter on a daily basis; it reports the level of likely bot activity and visualizes the topics targeted by bots.