Corpus ID: 225039938

AttendAffectNet: Self-Attention based Networks for Predicting Affective Responses from Movies

@article{Thao2020AttendAffectNet,
  title={AttendAffectNet: Self-Attention based Networks for Predicting Affective Responses from Movies},
  author={Ha Thi Phuong Thao and Balamurali B.T. and Dorien Herremans and Gemma Roig},
  journal={ArXiv},
  year={2020}
}
  • Ha Thi Phuong Thao, Balamurali B.T., Dorien Herremans, Gemma Roig
  • Published 2020
  • Computer Science, Engineering
  • ArXiv
  • In this work, we propose different variants of a self-attention based network for emotion prediction from movies, which we call AttendAffectNet. We take both audio and video into account and capture the relations among multiple modalities by applying the self-attention mechanism in a novel manner to the extracted features. We compare this to the typical temporal integration of self-attention based models, which in our case captures the relation of temporal…
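The abstract describes applying self-attention across the extracted modality features themselves, rather than across time steps. A minimal sketch of that idea is below; the single-head formulation, the feature dimension, and the choice of three modalities are illustrative assumptions, not the authors' exact architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(features, w_q, w_k, w_v):
    """Single-head scaled dot-product self-attention over a stack of
    modality feature vectors (shape: [num_modalities, d_model])."""
    q, k, v = features @ w_q, features @ w_k, features @ w_v
    d_k = k.shape[-1]
    # Each modality attends to every modality, including itself.
    attn = softmax(q @ k.T / np.sqrt(d_k))  # [num_modalities, num_modalities]
    return attn @ v

# Hypothetical setup: 3 modality feature vectors (e.g. audio, visual,
# motion), each already projected to a common d_model = 8.
rng = np.random.default_rng(0)
d = 8
feats = rng.standard_normal((3, d))
w_q, w_k, w_v = (rng.standard_normal((d, d)) for _ in range(3))
out = self_attention(feats, w_q, w_k, w_v)
print(out.shape)  # (3, 8)
```

The attended features would then be pooled and fed to a regression head for valence/arousal prediction; that part is omitted here.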



References

    • Multimodal Deep Models for Predicting Affective Responses Evoked by Movies
    • EmoNets: Multimodal deep learning approaches for emotion recognition in video
    • Multi-modal learning for affective content analysis in movies
    • A multimodal mixture-of-experts model for dynamic emotion prediction in movies
    • THU-HCSI at MediaEval 2016: Emotional Impact of Movies Task
    • LIRIS-ACCEDE: A Video Database for Affective Content Analysis
    • Regression-based Music Emotion Prediction using Triplet Neural Networks