Are GRU Cells More Specific and LSTM Cells More Sensitive in Motive Classification of Text?

@article{Gruber2020AreGC,
  title={Are GRU Cells More Specific and LSTM Cells More Sensitive in Motive Classification of Text?},
  author={N. Gruber and Alfred Jockisch},
  journal={Frontiers in Artificial Intelligence},
  year={2020},
  volume={3}
}
In the Thematic Apperception Test, a picture story exercise (TAT/PSE; Heckhausen, 1963), it is assumed that unconscious motives can be detected in the stories people tell about the pictures shown in the test. These texts are therefore coded by trained experts according to evaluation rules. We tried to automate this coding and, because the input data are sequential, used a recurrent neural network (RNN). There are two different cell types to improve recurrent neural networks regarding long-term…
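
As a rough sketch of the two cell types under comparison, here is a minimal PyTorch illustration; the class name, hyperparameters, and last-step readout are illustrative assumptions, not the authors' implementation:

    import torch
    import torch.nn as nn

    class MotiveClassifier(nn.Module):
        """Minimal sequence classifier; `cell` selects the recurrent unit."""
        def __init__(self, vocab_size, embed_dim=64, hidden_dim=128,
                     num_classes=2, cell="gru"):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            rnn_cls = nn.GRU if cell == "gru" else nn.LSTM
            self.rnn = rnn_cls(embed_dim, hidden_dim, batch_first=True)
            self.fc = nn.Linear(hidden_dim, num_classes)

        def forward(self, token_ids):
            x = self.embed(token_ids)      # (batch, seq_len, embed_dim)
            out, _ = self.rnn(x)           # hidden states for every time step
            return self.fc(out[:, -1, :])  # classify from the last step

    # Same data and topology; only the gated cell differs.
    gru_model = MotiveClassifier(vocab_size=10_000, cell="gru")
    lstm_model = MotiveClassifier(vocab_size=10_000, cell="lstm")

Swapping cell="gru" for cell="lstm" changes only the recurrent unit, which is exactly the contrast the paper studies.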

Citations

Detecting dynamics of action in text with a recurrent neural network

  • N. Gruber
  • Computer Science
    Neural Comput. Appl.
  • 2021
I reanalyzed two datasets regarding category IS and found that, because of its sequential structure, the RNN detects phrases in the text that are barely detectable by human coders or other neural networks but are related to motive theories.

NewsMTSC: A Dataset for (Multi-)Target-dependent Sentiment Classification in Political News Articles

This paper introduces NewsMTSC, a high-quality dataset for TSC on news articles, with key differences from established TSC datasets including different means of expressing sentiment, longer texts, and a second test set to measure the influence of multi-target sentences.

Memory-Based Deep Neural Attention (mDNA) for Cognitive Multi-Turn Response Retrieval in Task-Oriented Chatbots

This paper augments the Transformer-based retrieval chatbot architecture with a memory-based deep neural attention (mDNA) model, using an approach similar to late data fusion, and shows that the mDNA augmentation slightly outperforms selected state-of-the-art retrieval chatbot models.

macech at SemEval-2021 Task 5: Toxic Spans Detection

This paper uses data consisting of comments with the indices of toxic text labelled to train an RNN to determine which parts of the comments make them toxic, which could aid online moderators.

Deep Sentiment Analysis: A Case Study on Stemmed Turkish Twitter Data

This paper addresses three data augmentation techniques, namely Shift, Shuffle, and Hybrid, to increase the size of the training data, and then uses three key types of deep learning models, namely the recurrent neural network (RNN), convolutional neural network (CNN), and hierarchical attention network (HAN), to classify stemmed Turkish Twitter data for sentiment analysis.

Heterogeneous Ensemble Deep Learning Model for Enhanced Arabic Sentiment Analysis

An optimized heterogeneous stacking ensemble model that combines three pre-trained Deep Learning (DL) models (Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU)) with three meta-learners (Logistic Regression (LR), Random Forest (RF), and Support Vector Machine (SVM)) to enhance performance on Arabic sentiment analysis.

Enhanced Arabic Sentiment Analysis Using a Novel Stacking Ensemble of Hybrid and Deep Learning Models

A stacking ensemble model that combines the predictive power of CNN and hybrid deep learning models to predict Arabic sentiment accurately is proposed, and the proposed deep stacking model is found to achieve the best performance compared to previous models.

Mining User’s Opinions and Emojis For Reputation Generation Using Deep Learning

• Achraf Boumhidi, E. Nfaoui
  • Computer Science
    2020 Fourth International Conference On Intelligent Computing in Data Sciences (ICDS)
  • 2020
Experimental results on two Twitter datasets about different products show that the proposed approach provides the reputation value nearest to the ground truth (the weighted average vote for each product, as provided by IMDB and Yelp), which implies that the proposed approach can be applied in real-world applications.

Image to Bengali Caption Generation Using Deep CNN and Bidirectional Gated Recurrent Unit

A CNN and Bidirectional GRU architecture is proposed for producing a natural-language caption from an image in the Bengali language; Bangladeshi people may use this study to understand one another better, overcome language barriers, and increase their cultural understanding.
...

References


Gated Recurrent Unit (GRU) for Emotion Classification from Noisy Speech

Experiments conducted with speech compounded with eight different types of noises reveal that GRU incurs an 18.16% smaller run-time while performing quite comparably to the Long Short-Term Memory (LSTM), which is the most popular Recurrent Neural Network proposed to date.

An Empirical Exploration of Recurrent Network Architectures

It is found that adding a bias of 1 to the LSTM's forget gate closes the gap between the LSTM and the recently-introduced Gated Recurrent Unit (GRU) on some but not all tasks.
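
In practice, this forget-gate trick is a one-line initialization change. A hedged PyTorch sketch (PyTorch stores the four gate biases concatenated in the order input, forget, cell, output; the layer sizes here are arbitrary):

    import torch
    import torch.nn as nn

    lstm = nn.LSTM(input_size=64, hidden_size=128, batch_first=True)

    # The vectors bias_ih_l0 and bias_hh_l0 each pack the four gates as
    # [input | forget | cell | output], so the forget-gate bias is the
    # second quarter of each vector.
    h = lstm.hidden_size
    for name, param in lstm.named_parameters():
        if name.startswith("bias"):
            with torch.no_grad():
                param[h:2 * h].fill_(0.5)  # bias_ih + bias_hh sum to 1

Since PyTorch adds bias_ih and bias_hh inside the cell, filling both forget slices with 0.5 yields an effective forget-gate bias of 1.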

A Recurrent Neural Network Based Recommendation System

This paper evaluates the performance of ten different recurrent neural network (RNN) structures on the task of generating recommendations from written reviews, and develops and tests the recommendation systems using data provided by the Yelp Data Challenge.

Are implicit motives revealed in mere words? Testing the marker-word hypothesis with computer-based text analysis

The present research tested the marker-word hypothesis, which states that implicit motives are reflected in the frequencies of specific words, and demonstrated LIWC-based motive scores' causal validity by documenting their sensitivity to motive arousal.

Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling

Advanced recurrent units that implement a gating mechanism, such as the long short-term memory (LSTM) unit and the recently proposed gated recurrent unit (GRU), are evaluated on sequence modeling tasks; the GRU is found to be comparable to the LSTM.
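
For context, the gating mechanism evaluated there can be written compactly; this is the standard GRU formulation (notation assumed, not taken from this paper):

    \begin{aligned}
    z_t &= \sigma(W_z x_t + U_z h_{t-1} + b_z) && \text{(update gate)} \\
    r_t &= \sigma(W_r x_t + U_r h_{t-1} + b_r) && \text{(reset gate)} \\
    \tilde{h}_t &= \tanh\big(W_h x_t + U_h (r_t \odot h_{t-1}) + b_h\big) && \text{(candidate state)} \\
    h_t &= (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t && \text{(interpolated update)}
    \end{aligned}

The update gate z_t lets the unit carry h_{t-1} forward unchanged, which is what gives gated units their advantage on long-term dependencies.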

Long Short-Term Memory

A novel, efficient, gradient based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.

Deep Speech 2 : End-to-End Speech Recognition in English and Mandarin

It is shown that an end-to-end deep learning approach can be used to recognize either English or Mandarin Chinese speech, two vastly different languages, and is competitive with the transcription of human workers when benchmarked on standard datasets.

An Examination of Interrater Reliability for Scoring the Rorschach Comprehensive System in Eight Data Sets

Reliability findings from this study closely match the results derived from a synthesis of prior research: CS summary scores are more reliable than scores assigned to individual responses, small samples are more likely to generate unstable and lower reliability estimates, and Meyer's (1997a) procedures for estimating response segment reliability were accurate.

Intraclass correlations: uses in assessing rater reliability.

Reliability coefficients often take the form of intraclass correlation coefficients. In this article, guidelines are given for choosing among six different forms of the intraclass correlation for reliability studies in which n targets are rated by k judges.
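
As an illustration of one of those forms, ICC(2,1) (two-way random effects, absolute agreement, single rater) can be computed directly from an n-targets-by-k-raters rating matrix. A minimal NumPy sketch, not code from either paper; the example matrix is the classic six-targets-by-four-judges table from Shrout and Fleiss:

    import numpy as np

    def icc_2_1(x):
        """ICC(2,1) from an (n_targets, k_raters) matrix of ratings."""
        n, k = x.shape
        grand = x.mean()
        row_means = x.mean(axis=1)  # per-target means
        col_means = x.mean(axis=0)  # per-rater means

        # Mean squares from the two-way ANOVA decomposition
        ms_rows = k * np.sum((row_means - grand) ** 2) / (n - 1)  # targets
        ms_cols = n * np.sum((col_means - grand) ** 2) / (k - 1)  # raters
        resid = x - row_means[:, None] - col_means[None, :] + grand
        ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))

        return (ms_rows - ms_err) / (
            ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

    ratings = np.array([[9, 2, 5, 8],
                        [6, 1, 3, 2],
                        [8, 4, 6, 8],
                        [7, 1, 2, 6],
                        [10, 5, 6, 9],
                        [6, 2, 4, 7]], dtype=float)
    print(icc_2_1(ratings))

For this table the function returns approximately 0.29, matching the published value for ICC(2,1).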

Hoffnung und Furcht in der Leistungsmotivation [Hope and Fear Components of Achievement Motivation]

  • Meisenheim am Glan
  • 1963