Characterizing and Detecting Hateful Users on Twitter

  • Manoel Horta Ribeiro, Pedro H. Calais, Yuri A. Santos, Virgílio A. F. Almeida, Wagner Meira Jr.

Current approaches to characterizing and detecting hate speech focus on content posted in Online Social Networks (OSNs). These approaches struggle to capture the full picture of hate speech due to its subjectivity and the noisiness of OSN text. This work partially addresses these issues by shifting the focus towards users. The authors obtain a sample of Twitter's retweet graph with 100,386 users, annotate 4,972 of them as hateful or normal, and find 668 users suspended within 4 months. Their analysis shows that…

Graph-Based Methods to Detect Hate Speech Diffusion on Twitter

  • Matthew Beatty
  • Computer Science
    2020 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM)
  • 2020
It is demonstrated that while the methods do not outperform state-of-the-art text models, graph-based models provide robust detection mechanisms and are able to detect instances of hate speech that fool text classifiers.

Interaction dynamics between hate and counter users on Twitter

The interaction dynamics between hate and counter users could offer a more effective way to combat hate content on Twitter than simply suspending the hateful accounts.

You too Brutus! Trapping Hateful Users in Social Media: Challenges, Solutions & Insights

This paper investigates an array of models, ranging from purely textual, to graph-based, to semi-supervised Graph Neural Network (GNN) techniques that utilize both textual and graph-based features, and observes that hateful users have unique network neighborhood signatures and that the AGNN model benefits by attending to these signatures.
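The idea of combining textual features with network-neighborhood signal, as this entry describes, can be illustrated with one round of neighbor aggregation on a toy user graph. This is a minimal numpy sketch under assumed data (a hypothetical 4-user retweet adjacency matrix and random per-user text features), not the AGNN architecture itself.

```python
import numpy as np

# Hypothetical adjacency matrix: 4 users, edges are retweet interactions.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)

# Per-user textual feature vectors (stand-in for a text model's output).
X = np.random.default_rng(1).standard_normal((4, 3))

# One propagation step: average each user's neighbors' features and
# concatenate with the user's own features, mixing text and graph signal.
deg = A.sum(axis=1, keepdims=True)
neighbor_avg = (A @ X) / np.maximum(deg, 1)
H = np.concatenate([X, neighbor_avg], axis=1)  # shape (4, 6)
print(H.shape)
```

A classifier trained on `H` sees both what a user says and who surrounds them, which is the intuition behind the "network neighborhood signatures" the paper reports.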

Towards Identification, Classification and Analysis of Hate Speech on Social Media

This thesis focuses on developing automated techniques to identify hate speech on social media, and proposes composite models that combine the concise representations generated by deep models with the power of traditional machine learning classifiers.

The Virality of Hate Speech on Social Media

Important determinants that explain differences in the spreading of hateful vs. normal content are identified and novel insights into the virality of hate speech on social media are offered.

Retweet communities reveal the main sources of hate speech

This work carefully annotates a large set of tweets for hate speech and deploys advanced deep learning to produce high-quality hate speech classification models, which are applied to three years of Slovenian Twitter data.

Detecting Online Hate Speech: Approaches Using Weak Supervision and Network Embedding Models

This work proposes a weak-supervision deep learning model that quantitatively uncovers hateful users, presents a novel qualitative analysis to uncover indirect hateful conversations, and utilizes multilayer network embedding methods to generate features for the prediction task.

Spread of Hate Speech in Online Social Media

This study performs the first cross-sectional view of how hateful users diffuse hate content in online social media on Gab, and finds that hateful users are far more densely connected among themselves.

Going Extreme: Comparative Analysis of Hate Speech in Parler and Gab

This work provides the first large-scale analysis of hate speech on Parler, is among the first to analyze hate speech on Parler quantitatively and at the user level, and contributes the first annotated Parler dataset made available to the community.

Analyzing the hate and counter speech accounts on Twitter

This paper analyzes hate speech and the corresponding counters (aka counterspeech) on Twitter, and finds that hate tweets by verified accounts are far more viral than those by non-verified accounts.

Hateful Symbols or Hateful People? Predictive Features for Hate Speech Detection on Twitter

A list of criteria founded in critical race theory is provided and used to annotate a publicly available corpus of more than 16k tweets, and a dictionary based on the most indicative words in the data is presented.

Mean Birds: Detecting Aggression and Bullying on Twitter

It is found that bullies post less, participate in fewer online communities, and are less popular than normal users, while aggressors are relatively popular and tend to include more negativity in their posts.

Detecting the Hate Code on Social Media

By generating a list of users who post hate-filled posts or tweets, this work moves a step beyond classifying individual tweets, allowing study of the usage patterns of this concentrated set of users.

Analyzing the Targets of Hate in Online Social Media

This paper provides the first systematic large-scale measurement study of the main targets of hate speech in online social media, gathering traces from two social media systems, Whisper and Twitter, and develops and validates a methodology to identify hate speech on both systems.

Detecting Spammers on Twitter

This paper uses tweets related to three famous trending topics from 2009 to construct a large labeled collection of users, manually classified into spammers and non-spammers, and identifies a number of characteristics related to tweet content and user social behavior which could potentially be used to detect spammers.

Locate the Hate: Detecting Tweets against Blacks

A supervised machine learning approach is applied, employing inexpensively acquired labeled data from diverse Twitter accounts to learn a binary classifier for the labels "racist" and "non-racist", which achieves 76% average accuracy on individual tweets, suggesting that with further improvements this work can contribute data on the sources of anti-black hate speech.

Are You a Racist or Am I Seeing Things? Annotator Influence on Hate Speech Detection on Twitter

It is found that amateur annotators are more likely than expert annotators to label items as hate speech, and that systems training on expert annotations outperform systems trained on amateur annotations.

Hate is not Binary: Studying Abusive Behavior of #GamerGate on Twitter

Surprisingly, it is found that Gamergaters are less likely to be suspended by Twitter, so their properties are analyzed to identify differences from typical users and what may lead to suspension.

Hate Speech Detection with Comment Embeddings

This work proposes learning distributed low-dimensional representations of comments using recently proposed neural language models; these representations can then be fed as inputs to a classification algorithm, resulting in highly efficient and effective hate speech detectors.
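The pipeline this entry describes, comment-level embeddings fed to a downstream classifier, can be sketched in a few lines. The embedding below is just a mean of fixed random word vectors, a stand-in for the learned neural language models the paper actually uses; the function names and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = {}

def word_vec(word, dim=8):
    # Assign each word a fixed random vector (stand-in for a learned embedding).
    if word not in vocab:
        vocab[word] = rng.standard_normal(dim)
    return vocab[word]

def comment_embedding(comment, dim=8):
    # Represent a comment as the mean of its word vectors.
    words = comment.lower().split()
    if not words:
        return np.zeros(dim)
    return np.mean([word_vec(w, dim) for w in words], axis=0)

emb = comment_embedding("some example comment")
print(emb.shape)  # (8,)
```

The resulting fixed-size vector `emb` is what a classifier (e.g. logistic regression) would consume, regardless of the comment's length.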

Automated Hate Speech Detection and the Problem of Offensive Language

This work uses a crowd-sourced hate speech lexicon to collect tweets containing hate speech keywords, and labels a sample of these tweets into three categories: those containing hate speech, those with only offensive language, and those with neither.
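The lexicon-based collection step this entry describes amounts to keeping any tweet that matches a keyword list. A minimal sketch, assuming hypothetical placeholder tweets and keywords rather than the paper's crowd-sourced lexicon:

```python
def collect_matching(tweets, lexicon):
    """Return tweets containing at least one lexicon keyword (token match)."""
    keywords = {w.lower() for w in lexicon}
    matched = []
    for text in tweets:
        tokens = set(text.lower().split())
        if tokens & keywords:
            matched.append(text)
    return matched

# Hypothetical data for illustration only.
tweets = [
    "totally normal post about sports",
    "an offensive keyword1 aimed at someone",
    "another keyword2 filled message",
]
lexicon = ["keyword1", "keyword2"]
print(collect_matching(tweets, lexicon))
```

Note that keyword matching only drives *collection*; as the entry says, the collected sample still has to be labeled, since keyword presence alone conflates hate speech with merely offensive language.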