Socialbots on Fire: Modeling Adversarial Behaviors of Socialbots via Multi-Agent Hierarchical Reinforcement Learning
@article{Le2021SocialbotsOF, title={Socialbots on Fire: Modeling Adversarial Behaviors of Socialbots via Multi-Agent Hierarchical Reinforcement Learning}, author={Thai Le}, journal={Proceedings of the ACM Web Conference 2022}, year={2022} }
Socialbots are software-driven user accounts on social platforms that act autonomously (mimicking human behavior) with the aim of influencing other users' opinions or spreading targeted misinformation for particular goals. Because socialbots undermine the ecosystem of social platforms, they are often considered harmful, and there have been several computational efforts to detect them automatically. However, to the best of our knowledge, the adversarial nature of these socialbots has not yet been…
References
Showing 1–10 of 54 references
CLAIM: Curriculum Learning Policy for Influence Maximization in Unknown Social Networks
- Computer Science · UAI
- 2021
This work proposes CLAIM (Curriculum Learning Policy for Influence Maximization) to improve the sample efficiency of RL methods, and conducts experiments on real-world datasets showing that this approach can outperform the current best approach.
Arming the public with artificial intelligence to counter social bots
- Computer Science · Human Behavior and Emerging Technologies
- 2019
The case study of Botometer, a popular bot detection tool developed at Indiana University, is used to illustrate how people interact with AI countermeasures and how future AI developments may affect the fight between malicious bots and the public.
Detection of Novel Social Bots by Ensembles of Specialized Classifiers
- Computer Science · CIKM
- 2020
A new supervised learning method is proposed that trains classifiers specialized for each class of bots and combines their decisions through the maximum rule, yielding an average improvement of 56% in F1 score for unseen accounts across datasets; novel bot behaviors are learned with fewer labeled examples during retraining.
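As a rough illustration (not the paper's implementation), the maximum rule combines the per-account scores of several specialized classifiers by taking the highest one, so an account is flagged if any specialist is confident it is a bot. The classifier names in the comment are hypothetical.

```python
import numpy as np

def max_rule(probabilities):
    """Combine per-specialist bot probabilities with the maximum rule.

    probabilities: array-like of shape (n_classifiers, n_accounts),
    where each row holds one specialized classifier's bot scores.
    Returns the per-account maximum across classifiers.
    """
    probabilities = np.asarray(probabilities, dtype=float)
    return probabilities.max(axis=0)

# Hypothetical scores from three specialists
# (e.g., spam bots, fake followers, self-declared bots):
scores = max_rule([[0.2, 0.9],
                   [0.1, 0.3],
                   [0.7, 0.4]])
# account 0 gets 0.7, account 1 gets 0.9
```

The design choice is deliberately conservative toward false negatives: a bot that evades most specialists is still caught if one specialist recognizes its class.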
Better Safe Than Sorry: An Adversarial Approach to Improve Social Bot Detection
- Computer Science · WebSci
- 2019
This paper proposes and experiments with a novel genetic algorithm that creates synthetic, evolved versions of current state-of-the-art social bots, and demonstrates that these synthetic bots evade current detection techniques.
The coming age of adversarial social bot detection
- Computer Science · First Monday
- 2021
Inspired by adversarial machine learning and computer security, this work proposes an adversarial and proactive approach to social bot detection, and calls scholars to arms to shed light on this open and intriguing field of study.
Influence Maximization in Unknown Social Networks: Learning Policies for Effective Graph Sampling
- Computer Science · AAMAS
- 2020
This work proposes a reinforcement learning framework for network discovery that automatically learns useful node and graph representations that encode important structural properties of the network.
A novel framework for detecting social bots with deep neural networks and active learning
- Computer Science · Knowl. Based Syst.
- 2021
Behavior enhanced deep bot detection in social media
- Computer Science · 2017 IEEE International Conference on Intelligence and Security Informatics (ISI)
- 2017
This paper proposes a behavior-enhanced deep model (BeDM) for bot detection that regards user content as temporal text data rather than plain text in order to extract latent temporal patterns, and fuses content information and behavior information using a deep learning method.
Maximizing the spread of influence through a social network
- Computer ScienceKDD '03
- 2003
An analysis framework based on submodular functions shows that a natural greedy strategy obtains a solution that is provably within a factor of (1 − 1/e) ≈ 63% of optimal for several classes of models, and suggests a general approach for reasoning about the performance guarantees of algorithms for these types of influence problems in social networks.
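A minimal sketch of that greedy strategy under the independent-cascade model (the graph, activation probability `p`, and Monte Carlo trial count are illustrative assumptions, not the paper's experimental setup): each round adds the node whose inclusion maximizes the estimated spread, and submodularity of the spread function is what yields the (1 − 1/e) guarantee.

```python
import random

def independent_cascade(graph, seeds, p=0.1, rng=None):
    """Simulate one independent-cascade diffusion from `seeds`;
    each newly active node activates each neighbor with probability p.
    Returns the total number of activated nodes."""
    rng = rng or random.Random(0)
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_influence_max(graph, k, trials=100):
    """Greedily pick k seeds by estimated marginal gain in spread,
    averaging over `trials` seeded Monte Carlo simulations."""
    seeds = []
    for _ in range(k):
        best, best_gain = None, -1.0
        for v in graph:
            if v in seeds:
                continue
            gain = sum(independent_cascade(graph, seeds + [v],
                                           rng=random.Random(t))
                       for t in range(trials)) / trials
            if gain > best_gain:
                best, best_gain = v, gain
        seeds.append(best)
    return seeds

# Tiny toy network: "a" reaches "b" and "c", which both reach "d".
toy = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
chosen = greedy_influence_max(toy, k=2, trials=20)
```

Exact spread computation is #P-hard, which is why the spread is estimated by simulation; the greedy guarantee still holds approximately under sufficiently accurate estimates.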
MALCOM: Generating Malicious Comments to Attack Neural Fake News Detection Models
- Computer Science · 2020 IEEE International Conference on Data Mining (ICDM)
- 2020
Malcom, an end-to-end adversarial comment generation framework, is developed that can successfully mislead five of the latest neural detection models to always output targeted real and fake news labels.