To BAN or Not to BAN: Bayesian Attention Networks for Reliable Hate Speech Detection

@article{Miok2022ToBO,
  title={To BAN or Not to BAN: Bayesian Attention Networks for Reliable Hate Speech Detection},
  author={Kristian Miok and Bla{\vz} {\vS}krlj and Daniela Zaharie and M. Robnik-{\vS}ikonja},
  journal={Cognitive Computation},
  year={2022},
  volume={14},
  pages={353--371}
}
Hate speech is an important problem in the management of user-generated content. To remove offensive content or ban misbehaving users, content moderators need reliable hate speech detectors. Recently, deep neural networks based on the transformer architecture, such as the (multilingual) BERT model, have achieved superior performance in many natural language classification tasks, including hate speech detection. So far, these methods have not been able to quantify their output in terms of… 

Bayesian Methods for Semi-supervised Text Annotation

TLDR
The proposed semi-supervised methods can improve the annotations and prediction performance of BERT models, and a recently proposed Bayesian ensemble method helps to combine the annotators' labels with the predictions of trained models.

ULFRI at SemEval-2022 Task 4: Leveraging uncertainty and additional knowledge for patronizing and condescending language detection

TLDR
The ULFRI system used in Subtask 1 of SemEval-2022 Task 4 (patronizing and condescending language detection) is described; the injection of additional knowledge is not helpful, but the uncertainty-management mechanisms lead to small but consistent improvements.

Efficient, Uncertainty-based Moderation of Neural Networks Text Classifiers

TLDR
A semi-automated approach that uses prediction uncertainties to pass unconfident, probably incorrect classifications to human moderators, minimizing their workload while improving the classification F1-scores of modern neural network classifiers.
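The routing idea this TLDR describes can be sketched in a few lines; the threshold value and the example scores below are hypothetical illustrations, not taken from the paper:

```python
# Sketch of uncertainty-based moderation: predictions below a confidence
# threshold are routed to a human moderator, the rest are accepted
# automatically. Threshold and scores are made-up values for illustration.

def route_predictions(confidences, threshold=0.8):
    """Split item indices into auto-accepted and human-review queues.

    confidences: max softmax score per item, as produced by a classifier.
    threshold: minimum confidence to accept a prediction automatically.
    """
    auto, review = [], []
    for i, p in enumerate(confidences):
        (auto if p >= threshold else review).append(i)
    return auto, review

# Example: four classifier outputs; items 1 and 3 are uncertain.
auto, review = route_predictions([0.95, 0.55, 0.90, 0.62], threshold=0.8)
```

Only the uncertain items (indices 1 and 3 here) reach the moderator, which is how the approach trades a small amount of manual work for higher overall accuracy.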

HCovBi-Caps: Hate Speech Detection Using Convolutional and Bi-Directional Gated Recurrent Unit With Capsule Network

TLDR
This study presents a novel Convolutional, BiGRU, and Capsule network-based deep learning model, HCovBi-Caps, to classify hate speech, and demonstrates significantly better performance than state-of-the-art approaches.

Guest Editorial: A Decade of Sentic Computing

The opportunity to capture the opinions of the general public has raised growing interest both within the scientific community, leading to many exciting open challenges, and in the business world due…

Ten Years of Sentic Computing

TLDR
This paper reviews the models, resources, algorithms, and applications developed by sentic computing, together with the key shifts and tasks it introduced in the context of affective computing and sentiment analysis, and discusses future directions in these fields.

Comprehensive Exploration of Machine Learning based models in Digital Forensics – A plunge into Hate Speech Detection

  • Barkhashree, Parneeta Dhaliwal
  • Computer Science
    2021 3rd International Conference on Advances in Computing, Communication Control and Networking (ICAC3N)
  • 2021
TLDR
This paper will act as a guide for future researchers in this domain by showing them how to select the most suitable machine-learning-based models.

Notebook for PAN at CLEF 2021

TLDR
A model to detect hate speech spreaders from their Twitter posts: contextualized embeddings of single tweets are aggregated to form a vector representation for every user, and classification methods are employed to find users spreading hate speech.

References

SHOWING 1-10 OF 90 REFERENCES

Universal Sentence Encoder

TLDR
It is found that transfer learning using sentence embeddings tends to outperform word-level transfer, achieving surprisingly good performance with minimal amounts of supervised training data for a transfer task.

Cross-lingual embeddings for hate speech detection in comments

TLDR
This work uses cross-lingual embeddings to achieve acceptable performance in hate speech detection in a target language using data from another language, and improves upon the existing multilingual BERT method.

Datasets of Slovene and Croatian Moderated News Comments

This paper presents two large newly constructed datasets of moderated news comments from two highly popular online news portals in the respective countries: the Slovene RTV MCC and the Croatian…

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

TLDR
A new language representation model, BERT, designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers, which can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks.

Suspended Accounts: A Source of Tweets with Disgust and Anger Emotions for Augmenting Hate Speech Data Sample

TLDR
Text produced by suspended accounts in the aftermath of a hateful event is proposed as a subtle and reliable source for hate speech prediction, and two Random Forest classifiers are trained on the semantic meaning of tweets from suspended and active accounts, respectively.

Prediction Uncertainty Estimation for Hate Speech Classification

TLDR
The reliability of predictions is usually not addressed in text classification; to reliably detect hate speech, Monte Carlo dropout regularization, which mimics Bayesian inference within neural networks, is used.
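Monte Carlo dropout, as summarized above, keeps dropout active at prediction time and averages many stochastic forward passes, using their spread as an uncertainty estimate. A minimal pure-Python illustration on a toy linear model (the weights, inputs, and dropout rate are made up, not the paper's network):

```python
import random
import statistics

def mc_dropout_predict(x, weights, p_drop=0.5, n_samples=100, seed=0):
    """Monte Carlo dropout on a toy linear model: keep dropout active at
    test time, run n_samples stochastic forward passes, and return the
    predictive mean and standard deviation (the uncertainty estimate)."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n_samples):
        # Drop each input with probability p_drop; rescale the survivors
        # by 1/(1 - p_drop) (inverted dropout) to keep the expectation.
        dropped = [xi * (rng.random() >= p_drop) / (1 - p_drop) for xi in x]
        outputs.append(sum(w * d for w, d in zip(weights, dropped)))
    return statistics.mean(outputs), statistics.stdev(outputs)

# The mean approximates the deterministic output (0.2 - 0.2 + 1.2 = 1.2);
# the standard deviation quantifies the model's uncertainty.
mean, std = mc_dropout_predict([1.0, 2.0, 3.0], [0.2, -0.1, 0.4])
```

In the hate speech setting, a large standard deviation flags inputs whose classification should not be trusted.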

On Calibration of Modern Neural Networks

TLDR
It is discovered that modern neural networks, unlike those from a decade ago, are poorly calibrated, and on most datasets, temperature scaling -- a single-parameter variant of Platt Scaling -- is surprisingly effective at calibrating predictions.
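Temperature scaling, as described in this TLDR, divides the logits by a single scalar T learned on held-out data before applying softmax. A rough sketch, substituting a simple grid search for the gradient-based optimization and using invented logits and labels:

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax with temperature; T > 1 softens (flattens) the distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def fit_temperature(logits_list, labels, grid=None):
    """Pick the temperature minimising negative log-likelihood on a
    held-out set (grid search stands in for direct optimization of T)."""
    grid = grid or [0.5 + 0.1 * i for i in range(50)]  # T in [0.5, 5.4]
    def nll(t):
        return -sum(math.log(softmax(z, t)[y])
                    for z, y in zip(logits_list, labels))
    return min(grid, key=nll)

# Overconfident logits with one confidently wrong prediction: a temperature
# above 1 is chosen, softening the probabilities toward better calibration.
logits = [[4.0, 0.0], [3.5, 0.0], [0.0, 3.8], [3.0, 0.0]]
labels = [0, 1, 1, 0]
t = fit_temperature(logits, labels)
```

Because T rescales all logits uniformly, the argmax (and hence accuracy) is unchanged; only the confidence values move.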

Attention is All you Need

TLDR
A new simple network architecture, the Transformer, based solely on attention mechanisms and dispensing with recurrence and convolutions entirely, is proposed; it generalizes well to other tasks, as shown by applying it successfully to English constituency parsing with both large and limited training data.
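The core operation of the Transformer is scaled dot-product attention, which can be illustrated for a single query in plain Python (a didactic sketch with tiny hand-picked vectors, not the paper's batched multi-head implementation):

```python
import math

def attention(query, keys, values):
    """Scaled dot-product attention for one query: the output is a
    softmax(q.K / sqrt(d))-weighted average of the value vectors."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Weighted sum of the value vectors, component by component.
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# The query matches the first key most closely, so the output is pulled
# toward the first value vector.
out = attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [[10.0, 0.0], [0.0, 10.0]])
```

Since the attention weights sum to 1, the output always lies in the convex hull of the value vectors.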

Bayesian Recurrent Neural Networks

TLDR
This work shows that a simple adaptation of truncated backpropagation through time can yield good-quality uncertainty estimates and superior regularisation at only a small extra computational cost during training, and demonstrates how a novel kind of posterior approximation yields further improvements to the performance of Bayesian RNNs.
...