Socialbots on Fire: Modeling Adversarial Behaviors of Socialbots via Multi-Agent Hierarchical Reinforcement Learning

  title={Socialbots on Fire: Modeling Adversarial Behaviors of Socialbots via Multi-Agent Hierarchical Reinforcement Learning},
  author={Thai Le and others},
  journal={Proceedings of the ACM Web Conference 2022},
  • Thai Le
  • Published 20 October 2021
  • Computer Science
  • Proceedings of the ACM Web Conference 2022
Socialbots are software-driven user accounts on social platforms that act autonomously (mimicking human behavior) with the aim of influencing other users' opinions or spreading targeted misinformation for particular goals. Because socialbots undermine the ecosystem of social platforms, they are often considered harmful. As such, there have been several computational efforts to automatically detect socialbots. However, to the best of our knowledge, the adversarial nature of these socialbots has not yet been… 

Figures and Tables from this paper



CLAIM: Curriculum Learning Policy for Influence Maximization in Unknown Social Networks

This work proposes CLAIM (Curriculum LeArning Policy for Influence Maximization) to improve the sample efficiency of RL methods, and conducts experiments on real-world datasets showing that this approach can outperform the current best approach.

Arming the public with artificial intelligence to counter social bots

The case study of Botometer, a popular bot detection tool developed at Indiana University, is used to illustrate how people interact with AI countermeasures and how future AI developments may affect the fight between malicious bots and the public.

Detection of Novel Social Bots by Ensembles of Specialized Classifiers

A new supervised learning method is proposed that trains classifiers specialized for each class of bots and combines their decisions through the maximum rule, leading to an average improvement of 56% in F1 score for unseen accounts across datasets; novel bot behaviors are learned with fewer labeled examples during retraining.
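The maximum rule described above can be sketched as follows — a minimal, self-contained illustration, not the paper's actual system: the `Specialist` scorer, its feature "signatures", and the logistic scoring are all hypothetical stand-ins for the paper's trained per-class classifiers.

```python
import numpy as np

# Hypothetical specialist: scores how closely an account's features match
# one particular bot class's signature, mapped into (0, 1) by a logistic.
class Specialist:
    def __init__(self, signature):
        self.signature = np.asarray(signature, dtype=float)

    def bot_proba(self, x):
        # Closer to the signature -> higher bot probability.
        d = np.linalg.norm(np.asarray(x, dtype=float) - self.signature)
        return 1.0 / (1.0 + np.exp(d - 1.5))

specialists = [
    Specialist([2.5, 0, 0, 0]),   # e.g. a spam-bot signature (made up)
    Specialist([-2.5, 0, 0, 0]),  # e.g. a fake-follower signature (made up)
]

def ensemble_bot_score(x):
    """Maximum rule: take the highest probability any specialist assigns,
    so a single confident specialist is enough to flag an account."""
    return max(s.bot_proba(x) for s in specialists)

print(ensemble_bot_score([2.5, 0, 0, 0]))  # near a known bot signature
print(ensemble_bot_score([0, 0, 0, 0]))    # typical "human" region
```

The appeal of the max rule is that adding a specialist for a new bot class never weakens detection of the classes already covered.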

Better Safe Than Sorry: An Adversarial Approach to Improve Social Bot Detection

This paper proposes and experiments with a novel genetic algorithm that creates synthetic, evolved versions of current state-of-the-art social bots, and demonstrates that these synthetic bots do evade current detection techniques.
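As a rough illustration of the evolutionary idea (not the paper's actual algorithm), the sketch below evolves bot action sequences against a toy "detector" that flags repetitive behavior; the action set, detector, and fitness function are all invented for the example.

```python
import random

rng = random.Random(7)

ACTIONS = ["post", "retweet", "like", "reply", "idle"]

def detector_score(seq):
    """Toy stand-in for a bot detector: flags highly repetitive accounts
    (fraction of consecutive repeated actions). Higher = more bot-like."""
    return sum(a == b for a, b in zip(seq, seq[1:])) / (len(seq) - 1)

def mutate(seq, rate=0.2):
    return [rng.choice(ACTIONS) if rng.random() < rate else a for a in seq]

def crossover(a, b):
    cut = rng.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=30, length=20, generations=40):
    # Start from maximally repetitive "bots" that the toy detector catches.
    pop = [[rng.choice(ACTIONS)] * length for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=detector_score)              # lower = more evasive
        survivors = pop[: pop_size // 2]          # elitist selection
        children = [mutate(crossover(rng.choice(survivors),
                                     rng.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return min(pop, key=detector_score)

best = evolve()
print(detector_score(best))  # should be far below the initial score of 1.0
```

The loop mirrors the paper's framing: the detector supplies the fitness signal, and selection pressure pushes the population toward behaviors the detector no longer catches.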

The coming age of adversarial social bot detection

Inspired by adversarial machine learning and computer security, this work proposes an adversarial and proactive approach to social bot detection, and calls scholars to arms to shed light on this open and intriguing field of study.

Influence Maximization in Unknown Social Networks: Learning Policies for Effective Graph Sampling

This work proposes a reinforcement learning framework for network discovery that automatically learns useful node and graph representations that encode important structural properties of the network.

Behavior enhanced deep bot detection in social media

  • C. Cai, Linjing Li, D. Zeng
  • Computer Science
    2017 IEEE International Conference on Intelligence and Security Informatics (ISI)
  • 2017
This paper proposes a behavior enhanced deep model (BeDM) for bot detection that treats user content as temporal text data rather than plain text in order to extract latent temporal patterns, and fuses content and behavior information using a deep learning method.

Maximizing the spread of influence through a social network

An analysis framework based on submodular functions shows that a natural greedy strategy obtains a solution that is provably within 63% of optimal for several classes of models, and suggests a general approach for reasoning about the performance guarantees of algorithms for these types of influence problems in social networks.
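The greedy strategy with Monte-Carlo spread estimation under the independent cascade model can be sketched as follows; the toy graph and parameter values are hypothetical, chosen only to make the example small.

```python
import random

def simulate_ic(graph, seeds, p, rng):
    """One run of the independent cascade model: each newly activated
    node gets a single chance to activate each neighbor with prob. p."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        newly = []
        for u in frontier:
            for v in graph.get(u, ()):
                if v not in active and rng.random() < p:
                    active.add(v)
                    newly.append(v)
        frontier = newly
    return len(active)

def greedy_im(graph, k, p=0.2, runs=300, seed=0):
    """Greedy seeding: repeatedly add the node giving the largest
    Monte-Carlo-estimated expected spread; submodularity of the spread
    function yields the 1 - 1/e (~63%) approximation guarantee."""
    rng = random.Random(seed)
    nodes = set(graph) | {v for nbrs in graph.values() for v in nbrs}
    seeds = []
    for _ in range(k):
        best, best_spread = None, -1.0
        for cand in nodes - set(seeds):
            est = sum(simulate_ic(graph, seeds + [cand], p, rng)
                      for _ in range(runs)) / runs
            if est > best_spread:
                best, best_spread = cand, est
        seeds.append(best)
    return seeds

# Toy graph (made up): two hubs, 0 and 5, bridged through node 4.
graph = {0: [1, 2, 3, 4], 4: [5], 5: [6, 7, 8, 9]}
print(greedy_im(graph, k=2))  # the two hubs should be selected
```

Each greedy step re-estimates spread with fresh simulations, so the chosen node maximizes the estimated marginal gain given the seeds picked so far.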

MALCOM: Generating Malicious Comments to Attack Neural Fake News Detection Models

This work develops MALCOM, an end-to-end adversarial comment generation framework that can successfully mislead five of the latest neural detection models into always outputting targeted real and fake news labels.