Neutral bots probe political bias on social media

@article{Chen2021NeutralBP,
  title={Neutral bots probe political bias on social media},
  author={Wen Chen and Diogo Pacheco and Kai-Cheng Yang and Filippo Menczer},
  journal={Nature Communications},
  year={2021},
  volume={12}
}
Social media platforms attempting to curb abuse and misinformation have been accused of political bias. We deploy neutral social bots who start following different news sources on Twitter, and track them to probe distinct biases emerging from platform mechanisms versus user interactions. We find no strong or consistent evidence of political bias in the news feed. Despite this, the news and information to which U.S. Twitter users are exposed depend strongly on the political leaning of their… 
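To make the probe's logic concrete, a bot's feed bias can be summarized by averaging left-right alignment scores of the news domains its home timeline surfaces. The Python sketch below is illustrative only: the domain names and alignment values are invented, and the study derives its alignment scores empirically.

from statistics import mean

# Hypothetical left-right alignment scores in [-1, 1] for news domains.
# The study derives such scores from data; these values are invented.
DOMAIN_ALIGNMENT = {
    "leftnews.example": -0.8,
    "centernews.example": 0.0,
    "rightnews.example": 0.7,
}

def feed_alignment(timeline_domains):
    """Average alignment of scorable domains in a bot's home timeline."""
    scores = [DOMAIN_ALIGNMENT[d] for d in timeline_domains if d in DOMAIN_ALIGNMENT]
    return mean(scores) if scores else None

# Example: a drifter whose feed surfaced these links
print(feed_alignment(["rightnews.example", "centernews.example", "rightnews.example"]))
# ~0.47: this feed skews right of center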
The Disinformation Dozen: An Exploratory Analysis of Covid-19 Disinformation Proliferation on Twitter
This study performs an exploratory analysis of Disinfo12's activity on Twitter, aiming to identify their sharing strategies, their favorite sources of information, and the secondary actors potentially contributing to the proliferation of questionable narratives.
Investigating Fake and Reliable News Sources Using Complex Networks Analysis
The rise of disinformation in recent years has shed light on the presence of bad actors that produce and spread misleading content every day. Therefore, looking at the characteristics of these…
Differences in Behavioral Characteristics and Diffusion Mechanisms: A Comparative Analysis Based on Social Bots and Human Users
Significant differences in behavioral characteristics and diffusion mechanisms were found between bot and human users during public opinion dissemination. These findings can help guide public attention to topic shifts and promote the diffusion of positive emotions in social networks, in turn supporting emergency management and the maintenance of online order.
A general framework to link theory and empirics in opinion formation models
This work introduces a minimal opinion formation model that is flexible enough to reproduce a wide variety of existing micro-influence assumptions and models, and that generates an artificial society whose properties are quantitatively and qualitatively similar to those observed empirically at the macro scale.
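To see what a flexible micro-influence rule looks like, the sketch below parameterizes how strongly one agent's opinion pulls on another's; the rules and parameters are generic stand-ins for the kinds of assumptions such a framework can reproduce, not the paper's actual parameterization.

import random

def pull(x_i, x_j, rule="bounded", mu=0.2, eps=0.5):
    """Micro-influence kernel: how far agent i moves toward agent j.
    Each 'rule' mimics a different assumption from the literature
    (illustrative stand-ins only)."""
    d = x_j - x_i
    if rule == "linear":       # constant-strength attraction
        return mu * d
    if rule == "bounded":      # attraction only within confidence bound eps
        return mu * d if abs(d) <= eps else 0.0
    if rule == "repulsive":    # repulsion beyond the confidence bound
        return mu * d if abs(d) <= eps else -mu * d
    raise ValueError(rule)

random.seed(2)
x = [random.uniform(-1, 1) for _ in range(100)]
for _ in range(10_000):
    i, j = random.sample(range(len(x)), 2)
    x[i] = max(-1.0, min(1.0, x[i] + pull(x[i], x[j])))
print(f"opinion range after updates: [{min(x):.2f}, {max(x):.2f}]")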
YouTube, The Great Radicalizer? Auditing and Mitigating Ideological Biases in YouTube Recommendations
A systematic audit of YouTube's recommendation system finds that YouTube's recommendations do direct users, especially right-leaning users, to ideologically biased and increasingly radical content on both the homepage and in up-next recommendations, but that this bias can be mitigated through an intervention.
Botometer 101: Social bot practicum for computational social scientists
This paper provides an introductory tutorial on Botometer, a public tool for bot detection on Twitter, for readers who are new to the topic and may not be familiar with programming or machine learning.
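For readers who want to try the tool, here is a minimal sketch following the botometer-python package's documented usage; the credentials are placeholders, the account handle is arbitrary, and the response keys shown reflect the Botometer v4 API at the time of writing and may have changed since.

import botometer  # pip install botometer

# Placeholders: Botometer is accessed through RapidAPI and also requires
# Twitter API credentials (per the package documentation).
rapidapi_key = "YOUR_RAPIDAPI_KEY"
twitter_app_auth = {
    "consumer_key": "...",
    "consumer_secret": "...",
    "access_token": "...",
    "access_token_secret": "...",
}

bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key=rapidapi_key,
    **twitter_app_auth,
)

# Score a single account; higher overall scores indicate more bot-like behavior.
result = bom.check_account("@some_account")  # placeholder handle
print(result["display_scores"]["universal"]["overall"])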
Engagement Outweighs Exposure to Partisan and Unreliable News within Google Search
Using surveys paired with engagement and exposure data collected around the 2018 and 2020 US elections, it is found that strong Republicans engaged with more partisan and unreliable news than strong Democrats did, despite the two groups being exposed to similar amounts of partisan and unreliable news in their Google search results.
Political audience diversity and news reliability in algorithmic ranking
It is shown that websites with more extreme and less politically diverse audiences have lower journalistic standards, and an improved algorithm is proposed that increases the trustworthiness of websites suggested to users, especially those who most frequently consume misinformation, while keeping recommendations relevant.
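The proposed re-ranking can be sketched with a toy scorer that blends a result's relevance with the partisan diversity of its audience; the diversity measure and weighting below are hypothetical, not the paper's actual algorithm.

from statistics import variance

def audience_diversity(partisanship):
    """Variance of visitors' partisanship scores (each in [-1, 1]) as a
    crude diversity proxy; the paper uses a more refined measure."""
    return variance(partisanship)

def rerank(results, weight=0.5):
    """Blend relevance with audience diversity (hypothetical weighting)."""
    def score(r):
        return (1 - weight) * r["relevance"] + weight * audience_diversity(r["audience"])
    return sorted(results, key=score, reverse=True)

results = [
    {"site": "hyperpartisan.example", "relevance": 0.9,
     "audience": [0.80, 0.90, 0.85, 0.95]},      # homogeneous audience
    {"site": "broadnews.example", "relevance": 0.8,
     "audience": [-0.90, -0.20, 0.30, 0.80]},    # politically diverse audience
]
for r in rerank(results):
    print(r["site"])  # broadnews.example ranks first despite lower relevance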
The voice of few, the opinions of many: evidence of social biases in Twitter COVID-19 fake news sharing
Online platforms play a significant role in the creation and diffusion of false or misleading news. Concerningly, the COVID-19 pandemic is shaping a communication network, barely considered in the…
...

References

Showing 1-10 of 69 references
Right and left, partisanship predicts (asymmetric) vulnerability to misinformation
We analyze the relationship between partisanship, echo chambers, and vulnerability to online misinformation by studying news sharing behavior on Twitter. While our results confirm prior findings that…
Shared Partisanship Dramatically Increases Social Tie Formation in a Twitter Field Experiment
Americans are much more likely to be socially connected to copartisans, both in daily life and on social media. However, this observation does not necessarily mean that shared partisanship per se…
Social influence and unfollowing accelerate the emergence of echo chambers
Although the findings suggest that echo chambers are somewhat inevitable given the mechanisms at play in online social media, they also provide insights into possible mitigation strategies.
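A toy simulation illustrates the two mechanisms: agents move toward like-minded contacts (influence) and unfollow, then rewire away from, contacts beyond a confidence bound; every parameter and update rule here is illustrative, not the paper's exact model.

import random

N, EPS, MU, STEPS = 50, 0.4, 0.3, 5000
random.seed(0)
opinion = [random.uniform(-1, 1) for _ in range(N)]
# Each agent follows 5 random others (directed edges).
follows = {i: random.sample([j for j in range(N) if j != i], 5) for i in range(N)}

for _ in range(STEPS):
    i = random.randrange(N)
    j = random.choice(follows[i])
    if abs(opinion[i] - opinion[j]) <= EPS:
        # Social influence: move toward a like-minded followee.
        opinion[i] += MU * (opinion[j] - opinion[i])
    else:
        # Unfollow a dissimilar followee and rewire to a random new account.
        follows[i].remove(j)
        follows[i].append(random.choice(
            [k for k in range(N) if k != i and k not in follows[i]]))

# Echo-chamber signature: small average opinion gap to followed accounts.
gap = sum(abs(opinion[i] - opinion[j]) for i in range(N) for j in follows[i]) / (5 * N)
print(f"mean opinion gap to followees: {gap:.2f}")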
Asymmetrical perceptions of partisan political bots
Investigating the ability to differentiate bots with partisan personas from humans on Twitter reveals asymmetrical partisan-motivated reasoning: conservative bot profiles are more difficult to identify, and Republican participants perform worse on the recognition task.
Exposure to Social Engagement Metrics Increases Vulnerability to Misinformation
It is found that exposure to social engagement signals increases users' vulnerability to misinformation; the authors call on technology platforms to rethink the display of social engagement metrics.
Scalable and Generalizable Social Bot Detection through Data Selection
This paper proposes a framework that uses minimal account metadata, enabling efficient analysis that scales up to handle the full stream of public tweets on Twitter in real time, and finds that strategically selecting a subset of training data yields better model accuracy and generalization than exhaustively training on all available data.
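The minimal-metadata idea can be sketched with a classifier trained only on a handful of profile fields available in every tweet's embedded user object; the synthetic features, labels, and model below are placeholders for illustration, not the paper's actual feature set or datasets.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def profile_features(n, bot=False):
    """Synthetic [followers, friends, statuses, account_age_days] rows."""
    counts = rng.lognormal(mean=3.0 if bot else 5.0, sigma=1.0, size=(n, 3))
    age = rng.uniform(30, 400 if bot else 3000, size=(n, 1))
    return np.hstack([counts, age])

X = np.vstack([profile_features(500), profile_features(500, bot=True)])
y = np.array([0] * 500 + [1] * 500)

# Lightweight model over lightweight features: cheap enough to score
# accounts at stream scale.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(profile_features(3, bot=True)))  # likely [1 1 1]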
Auditing radicalization pathways on YouTube
A large-scale audit of user radicalization on YouTube shows that the three channel types indeed increasingly share the same user base, that users consistently migrate from milder to more extreme content, and that a large percentage of users who now consume Alt-right content consumed Alt-lite and I.D.W. content in the past.
The role of bot squads in the political propaganda on Twitter
The authors take a complex-networks approach to study the Twitter debate around the Italian migrant crisis, finding evidence of "bot squads" amplifying the tweets of a few key political figures.
...