Amobee at SemEval-2018 Task 1: GRU Neural Network with a CNN Attention Mechanism for Sentiment Classification

@inproceedings{Rozental2018AmobeeAS,
  title={Amobee at SemEval-2018 Task 1: GRU Neural Network with a CNN Attention Mechanism for Sentiment Classification},
  author={Alon Rozental and Daniel Fleischer},
  booktitle={*SEMEVAL},
  year={2018}
}
This paper describes the participation of Amobee in the shared sentiment analysis task at SemEval 2018. We participated in all the English sub-tasks and the Spanish valence tasks. Our system consists of three parts: training task-specific word embeddings, training a model consisting of gated recurrent units (GRU) with a convolutional neural network (CNN) attention mechanism, and training stacking-based ensembles for each of the subtasks. Our algorithm reached 3rd and 1st places in the valence…
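The core architecture lends itself to a short sketch. Below is a minimal, hypothetical Keras rendering of the idea in the abstract: a bidirectional GRU whose hidden states are pooled by a convolution-based attention scorer. All layer sizes, kernel widths, and the exact pooling scheme are illustrative assumptions, not the authors' configuration.

```python
# Sketch only: a (bi-)GRU encoder with a CNN attention mechanism, loosely
# following the paper's description. Hyperparameters are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, Model

MAX_LEN, VOCAB, EMB_DIM, GRU_DIM, N_CLASSES = 50, 20000, 300, 128, 7

tokens = layers.Input(shape=(MAX_LEN,), dtype="int32")
# The paper pre-trains task-specific word embeddings; loading them would
# replace this randomly initialized Embedding layer.
x = layers.Embedding(VOCAB, EMB_DIM)(tokens)
h = layers.Bidirectional(layers.GRU(GRU_DIM, return_sequences=True))(x)

# CNN attention: 1-D convolutions over the GRU states emit one
# unnormalized score per time step, softmax-normalized into weights.
scores = layers.Conv1D(64, 3, padding="same", activation="relu")(h)
scores = layers.Conv1D(1, 1)(scores)           # (batch, MAX_LEN, 1)
weights = layers.Softmax(axis=1)(scores)       # distribution over time steps
context = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([h, weights])

out = layers.Dense(N_CLASSES, activation="softmax")(context)
model = Model(tokens, out)
model.compile(optimizer="adam", loss="categorical_crossentropy")
```

The Conv1D stack plays the role of the attention scorer: it looks at local windows of GRU states and emits one weight per position, so nearby context influences how much each time step contributes to the pooled representation.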

Citations

Amobee at IEST 2018: Transfer Learning from Language Models

This paper describes the system developed at Amobee for the WASSA 2018 implicit emotions shared task (IEST), to predict the emotion expressed by missing words in tweets without an explicit mention of those words.

Related Tasks can Share! A Multi-task Framework for Affective Language

Evaluation and analysis suggest that joint-learning of the related tasks in a multi-task framework can outperform each of the individual tasks in the single-task frameworks.

Affect in Tweets: A Transfer Learning Approach

It is shown that, by leveraging pre-learned knowledge, transfer learning models can achieve competitive results in the affectual content analysis of tweets compared to traditional models.

Bidirectional-GRU Based on Attention Mechanism for Aspect-level Sentiment Analysis

A bidirectional gated recurrent unit neural network model that integrates an attention mechanism to solve the task of aspect-level sentiment analysis; it achieves good performance on different datasets and improves on previous models.

Bidirectional Dilated LSTM with Attention for Fine-grained Emotion Classification in Tweets

This work proposes a novel approach for fine-grained emotion classification in tweets using a Bidirectional Dilated LSTM (BiDLSTM) with attention that is able to maintain complex data dependencies over time.

Gated Recurrent Neural Network Approach for Multilabel Emotion Detection in Microblogs

A novel Pyramid Attention Network (PAN) based model is proposed for emotion detection in microblogs; it can evaluate sentences from different perspectives to capture multiple emotions present in a single text.
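As a side note on the multilabel formulation this line of work shares, here is a minimal, hypothetical sketch (not the paper's Pyramid Attention Network): independent sigmoid outputs let several emotions fire for a single tweet, in contrast to a single-label softmax head. The label set and layer sizes are illustrative.

```python
# Sketch of the multilabel setup only, with a plain GRU encoder standing
# in for the paper's PAN model. Names and sizes are illustrative.
from tensorflow.keras import layers, Model

EMOTIONS = ["anger", "fear", "joy", "sadness"]   # illustrative label set

tokens = layers.Input(shape=(50,), dtype="int32")
x = layers.Embedding(20000, 300)(tokens)
h = layers.GRU(128)(x)
# Independent sigmoids (rather than one softmax) let labels co-occur.
probs = layers.Dense(len(EMOTIONS), activation="sigmoid")(h)

model = Model(tokens, probs)
model.compile(optimizer="adam", loss="binary_crossentropy")
```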

AI Deep Learning with Multiple Labels for Sentiment Classification of Tweets

This work introduces an incremental transfer learning pipeline for AI systems performing ordinal classification based on multiple labels of tweets, and achieves a Pearson correlation coefficient of 0.806 on the SemEval-2018 test data, which would have ranked 4th in SemEval-2018 Task 1, Subtask V-oc.

IEST: WASSA-2018 Implicit Emotions Shared Task

A shared task where systems have to predict the emotion in a large, automatically labeled dataset of tweets without access to words denoting emotions; it is called the Implicit Emotion Shared Task (IEST) because systems have to infer the emotion mostly from the context.
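To make the task format concrete, a single instance might look like the following sketch; the placeholder token and label are illustrative, not necessarily the task's exact conventions.

```python
# Illustrative IEST-style instance: the emotion word has been removed
# from the tweet and must be inferred from the surrounding context.
example = {
    "text": "It's [#TRIGGERWORD#] when your flight gets cancelled again.",
    "label": "anger",   # one of the task's emotion classes
}
```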

Attention-based BiGRU-CNN for Chinese question classification

A novel deep neural network model, the Attention-Based BiGRU-CNN network (ABBC), is proposed; it combines the characteristics and advantages of convolutional neural networks, attention mechanisms, and recurrent neural networks, and achieves the best performance on the Chinese question classification task.

Sentiment Classification Based on Part-of-Speech and Self-Attention Mechanism

A Part-of-Speech based Transformer Attention Network (pos-TAN) is proposed, which not only uses the self-attention mechanism to learn the feature representation of the text but also incorporates POS-Att attention, which is used to capture sentiment information contained in part-of-speech tags.

References


BB_twtr at SemEval-2017 Task 4: Twitter Sentiment Analysis with CNNs and LSTMs

This paper describes an attempt at producing a state-of-the-art Twitter sentiment classifier using Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks, with a large amount of unlabeled data used to pre-train word embeddings.

SemEval-2018 Task 1: Affect in Tweets

This work presents SemEval-2018 Task 1: Affect in Tweets, which includes an array of subtasks on inferring the affectual state of a person from their tweet, with a focus on techniques and resources that are particularly useful.

Learning Sentiment-Specific Word Embedding for Twitter Sentiment Classification

Three neural networks are developed to effectively incorporate supervision from the sentiment polarity of text (e.g., sentences or tweets) into their loss functions; the performance of SSWE is further improved by concatenating it with an existing feature set.
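The shape of such a sentiment-aware loss can be sketched briefly: a weighted sum of a context hinge loss (a real n-gram should outscore a corrupted one) and a sentiment hinge loss (the gold polarity's score should outscore the wrong one). The function name, margin of 1, and scoring inputs below are assumptions for illustration, not the paper's exact formulation.

```python
# Sketch of a combined context + sentiment hinge loss in the spirit of SSWE.
import tensorflow as tf

def sswe_loss(f_ctx_real, f_ctx_corrupt, f_sent_gold, f_sent_wrong, alpha=0.5):
    # Context hinge: the real n-gram should outscore a corrupted one by a margin.
    loss_ctx = tf.maximum(0.0, 1.0 - f_ctx_real + f_ctx_corrupt)
    # Sentiment hinge: the gold polarity's score should beat the wrong one.
    loss_sent = tf.maximum(0.0, 1.0 - f_sent_gold + f_sent_wrong)
    return tf.reduce_mean(alpha * loss_ctx + (1.0 - alpha) * loss_sent)

scores = tf.constant([0.2, 0.9])
print(sswe_loss(scores, scores - 0.5, scores, scores - 1.5).numpy())  # 0.25
```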

VADER: A Parsimonious Rule-Based Model for Sentiment Analysis of Social Media Text

Interestingly, using the authors' parsimonious rule-based model to assess the sentiment of tweets, it is found that VADER outperforms individual human raters, and generalizes more favorably across contexts than any of their benchmarks.
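VADER is available as an off-the-shelf library, so a minimal usage example is easy to give (assumes `pip install vaderSentiment`).

```python
# Scoring a tweet with VADER's rule-based sentiment analyzer.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
scores = analyzer.polarity_scores("The concert was AMAZING!! :)")
# `scores` is a dict with 'neg', 'neu', 'pos' proportions and a 'compound'
# score normalized to [-1, 1]; caps and emoticons boost the intensity.
print(scores["compound"])
```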

Enriching Word Vectors with Subword Information

A new approach based on the skipgram model, in which each word is represented as a bag of character n-grams and its vector is the sum of these n-gram representations; it achieves state-of-the-art performance on word similarity and analogy tasks.
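The subword idea is concrete enough for a short sketch: a word's n-grams are extracted with boundary markers, and its vector is the sum of the corresponding n-gram vectors. The `ngram_emb` lookup table below is a hypothetical stand-in for the model's learned embeddings.

```python
# Character n-gram decomposition in the style of fastText.
import numpy as np

def char_ngrams(word, n_min=3, n_max=6):
    """Character n-grams with '<'/'>' marking word boundaries."""
    w = f"<{word}>"
    grams = [w[i:i + n] for n in range(n_min, n_max + 1)
             for i in range(len(w) - n + 1)]
    return grams + [w]  # the whole word is kept as one extra unit

def word_vector(word, ngram_emb, dim=300):
    # The word vector is the sum of its n-gram vectors.
    return sum((ngram_emb.get(g, np.zeros(dim)) for g in char_ngrams(word)),
               start=np.zeros(dim))

print(char_ngrams("where")[:5])  # ['<wh', 'whe', 'her', 'ere', 're>']
```

Summing over shared n-grams is what lets rare and out-of-vocabulary words inherit meaning from morphologically related words.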

Understanding Emotions: A Dataset of Tweets to Study Interactions between Affect Categories

The goal is to create a single textual dataset that is annotated for many emotion (or affect) dimensions (from both the basic emotion model and the VAD model), and it is shown that the fine-grained intensity scores thus obtained are reliable (repeat annotations lead to similar scores).

Word Affect Intensities

This work creates an affect intensity lexicon with real-valued scores of association, using a technique called best-worst scaling that improves annotation consistency and obtains reliable fine-grained scores.
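Best-worst scaling reduces to a simple counting estimator: annotators see small sets of terms and pick the most and least intense, and a term's score is the fraction of times it was chosen best minus the fraction it was chosen worst. A minimal sketch, assuming 4-term tuples; the lexicon's exact rescaling may differ.

```python
# Turning best-worst scaling annotations into real-valued intensity scores.
from collections import Counter

def bws_scores(annotations):
    """annotations: iterable of (items, best, worst), items a tuple of terms."""
    best, worst, seen = Counter(), Counter(), Counter()
    for items, b, w in annotations:
        seen.update(items)
        best[b] += 1
        worst[w] += 1
    # Score in [-1, 1]: %best minus %worst per term.
    return {t: (best[t] - worst[t]) / seen[t] for t in seen}

annotations = [
    (("furious", "annoyed", "calm", "upset"), "furious", "calm"),
    (("furious", "irked", "calm", "mad"), "furious", "calm"),
]
print(bws_scores(annotations)["furious"])  # 1.0: always picked as most intense
```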

TensorFlow: A system for large-scale machine learning

The TensorFlow dataflow model is described, and the compelling performance that TensorFlow achieves for several real-world applications is demonstrated.
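The paper describes the original graph-based execution model; in the current TensorFlow 2.x API the same dataflow idea surfaces as `tf.function`, which traces a Python function into a graph that can be optimized and placed on CPU, GPU, or TPU. A tiny example:

```python
# Dataflow in miniature: tracing a computation into a TensorFlow graph.
import tensorflow as tf

@tf.function  # traces the Python function into a reusable dataflow graph
def affine(x, w, b):
    return tf.matmul(x, w) + b

x, w, b = tf.random.normal([2, 3]), tf.random.normal([3, 4]), tf.zeros([4])
print(affine(x, w, b).shape)  # (2, 4)
```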

Neural Machine Translation by Jointly Learning to Align and Translate

It is conjectured that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and it is proposed to extend it by allowing the model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly.
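The soft-search is the additive attention mechanism: an alignment score e_j = v^T tanh(W_s s + W_h h_j) is computed between the decoder state s and each encoder state h_j, softmaxed into weights, and used to form a context vector. A minimal NumPy sketch, with random matrices standing in for learned parameters:

```python
# Additive (Bahdanau-style) attention over encoder states.
import numpy as np

def additive_attention(s_prev, H, W_s, W_h, v):
    """s_prev: (d_s,) decoder state; H: (T, d_h) encoder states."""
    e = np.tanh(s_prev @ W_s + H @ W_h) @ v             # (T,) alignment scores
    alpha = np.exp(e - e.max()); alpha /= alpha.sum()   # softmax weights
    return alpha @ H, alpha                             # context vector, weights

d_s, d_h, d_a, T = 4, 6, 5, 3
rng = np.random.default_rng(0)
ctx, alpha = additive_attention(rng.standard_normal(d_s),
                                rng.standard_normal((T, d_h)),
                                rng.standard_normal((d_s, d_a)),
                                rng.standard_normal((d_h, d_a)),
                                rng.standard_normal(d_a))
print(alpha.sum())  # 1.0: a soft distribution over source positions
```

Because the context vector is recomputed at every decoding step, the model no longer has to compress the whole source sentence into one fixed-length vector.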

On the Properties of Neural Machine Translation: Encoder–Decoder Approaches

It is shown that the neural machine translation performs relatively well on short sentences without unknown words, but its performance degrades rapidly as the length of the sentence and the number of unknown words increase.