MemeTector: Enforcing deep focus for meme detection

Christos Koutlis, Emmanouil Schinas, Symeon Papadopoulos

Image memes, and specifically their widely-known variation image macros, are a special new media type that combines text with images and is used in social media to playfully or subtly express humour, irony, sarcasm and even hate. It is important to accurately retrieve image memes from social media to better capture the cultural and social aspects of online phenomena and detect potential issues (hate speech, disinformation). Essentially, the background image of an image macro is a regular image…




TextFuseNet: Scene Text Detection with Richer Fused Features

The proposed TextFuseNet can learn a more adequate description of arbitrary-shaped texts, suppressing false positives and producing more accurate detection results; it can also be trained with weak supervision on datasets that lack character-level annotations.

EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks

A new scaling method is proposed that uniformly scales all dimensions of depth, width, and resolution using a simple yet highly effective compound coefficient; its effectiveness is demonstrated by scaling up MobileNets and ResNet.
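The compound-scaling idea above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the constants alpha=1.2, beta=1.1, gamma=1.15 are the values reported for EfficientNet-B0, chosen so that alpha * beta^2 * gamma^2 ≈ 2 (FLOPs roughly double per unit of the compound coefficient phi); the baseline numbers in the usage example are hypothetical.

```python
# Sketch of EfficientNet-style compound scaling.
# Assumed constants from the paper's grid search for the B0 baseline:
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15  # alpha * beta^2 * gamma^2 ~= 1.92 (~2)

def compound_scale(phi: float) -> dict:
    """Return multipliers for network depth, width, and input resolution."""
    return {
        "depth": ALPHA ** phi,       # scales the number of layers
        "width": BETA ** phi,        # scales the number of channels
        "resolution": GAMMA ** phi,  # scales the input image side length
    }

# Scaling a hypothetical baseline (16 layers, 32 channels, 224px input) by phi=2:
s = compound_scale(phi=2)
print(round(16 * s["depth"]),
      round(32 * s["width"]),
      round(224 * s["resolution"]))
```

All three dimensions grow together under one knob (phi), which is the point of the compound coefficient: instead of hand-tuning depth, width, and resolution independently, a single exponent balances them.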

Deep Residual Learning for Image Recognition

This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.

Very Deep Convolutional Networks for Large-Scale Image Recognition

This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting, using an architecture with very small convolution filters; it shows that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.

The Hateful Memes Challenge: Competition Report

The Hateful Memes Challenge competition, held at NeurIPS 2020 and focusing on multimodal hate speech, is described; the challenge aims to facilitate further research into multimodal reasoning and understanding.

Two-Way Feature Extraction Using Sequential and Multimodal Approach for Hateful Meme Classification

Two different approaches to identifying hateful memes are proposed: one combining GloVe embeddings, an encoder-decoder, and OCR with the Adamax optimizer in a deep learning pipeline, and one based on sentiment analysis of the image caption and the text written on the meme.

DANKMEMES @ EVALITA 2020: The Memeing of Life: Memes, Multimodality and Politics

DANKMEMES is a shared task proposed for the 2020 EVALITA campaign, focusing on the automatic classification of Internet memes, and features three tasks: A) Meme Detection, B) Hate Speech Identification, and C) Event Clustering.

UPB @ DANKMEMES: Italian Memes Analysis - Employing Visual Models and Graph Convolutional Networks for Meme Identification and Hate Speech Detection (short paper)

This paper describes the approach for the DANKMEMES competition of EVALITA 2020: a multimodal multi-task learning architecture based on two main components, a Graph Convolutional Network combined with an Italian BERT for text encoding, and an image representation.

SNK @ DANKMEMES: Leveraging Pretrained Embeddings for Multimodal Meme Detection (short paper)

This paper describes and presents the results of a meme detection system developed and submitted for the first subtask of DANKMEMES (EVALITA 2020); simple classifiers, consisting of feed-forward neural networks, were built for both the text and image representations.