Yoann Baveye

Research in affective computing requires ground truth data for training and benchmarking computational models for machine-based emotion understanding. In this paper, we propose a large video database, namely LIRIS-ACCEDE, for affective content analysis and related applications, including video indexing, summarization, or browsing. In contrast to existing …
This paper provides a description of the MediaEval 2015 “Affective Impact of Movies Task”, now in its fifth year and previously run under the name “Violent Scenes Detection”. In this year’s task, participants are expected to create systems that automatically detect video content that depicts violence, or predict the affective impact that video …
To address the need for emotional databases and affective tagging, LIRIS-ACCEDE is proposed in this paper. LIRIS-ACCEDE is an Annotated Creative Commons Emotional DatabasE composed of 9800 video clips extracted from 160 movies shared under Creative Commons licenses. This allows the database to be made publicly available without copyright issues. The …
Recently, mainly due to advances in deep learning, performance in scene and object recognition has improved dramatically. In contrast, more subjective recognition tasks, such as emotion prediction, stagnate at moderate levels. In this context, is it possible to make affective computational models benefit from the breakthroughs in deep …
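The question raised in this abstract, whether affective models can reuse representations learned for scene and object recognition, is typically explored through transfer learning. The sketch below is only a hypothetical illustration of that idea (it is not the architecture from the paper), assuming PyTorch/torchvision and a (valence, arousal) regression target:

```python
# Hypothetical sketch: reuse an ImageNet-pretrained CNN for affective prediction
# by replacing its classification layer with a small regression head.
import torch
import torch.nn as nn
from torchvision import models

# Backbone pretrained on object recognition (ResNet-50 is an arbitrary choice).
model = models.resnet50(weights="IMAGENET1K_V1")

# Freeze the convolutional features learned on object recognition.
for param in model.parameters():
    param.requires_grad = False

# Replace the classifier with a 2-unit regression head: (valence, arousal).
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

def training_step(frames: torch.Tensor, targets: torch.Tensor) -> float:
    """One optimization step on a batch of frames and (valence, arousal) labels."""
    optimizer.zero_grad()
    loss = criterion(model(frames), targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Only the regression head is trained here; unfreezing deeper layers for fine-tuning is the usual next step once the head has converged.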
On the one hand, the fact that Galvanic Skin Response (GSR) is highly correlated with a user’s affective arousal makes GSR a promising signal for emotion detection. On the other hand, the temporal correlation between real-time GSR and self-assessed arousal has not been well studied. This paper confronts two modalities representing the induced emotion when …
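One simple way to examine the temporal relationship described here is to compute the correlation between the two signals at a range of lags and find the lag of strongest agreement. The sketch below is a hedged illustration of that kind of analysis (the function name, sampling rate, and synthetic data are assumptions, not the paper’s protocol):

```python
# Hypothetical sketch: lagged Pearson correlation between a GSR signal and
# continuous arousal self-assessments sampled at the same rate.
import numpy as np

def lagged_correlation(gsr, arousal, max_lag, fs=1.0):
    """Correlation for lags in [-max_lag, max_lag] samples.
    A positive lag means the arousal report trails the GSR signal."""
    lags = np.arange(-max_lag, max_lag + 1)
    corrs = []
    for lag in lags:
        if lag >= 0:
            a, b = gsr[:len(gsr) - lag], arousal[lag:]
        else:
            a, b = gsr[-lag:], arousal[:lag]
        corrs.append(np.corrcoef(a, b)[0, 1])
    corrs = np.asarray(corrs)
    best = lags[np.argmax(np.abs(corrs))]
    return lags / fs, corrs, best / fs  # lags (s), correlations, best lag (s)

# Synthetic example: arousal is a copy of GSR delayed by 2 s at 4 Hz.
fs = 4.0
t = np.arange(0, 60, 1 / fs)
gsr = np.sin(0.1 * t) + 0.1 * np.random.randn(t.size)
arousal = np.roll(gsr, int(2 * fs))
_, _, best_lag = lagged_correlation(gsr, arousal, max_lag=int(5 * fs), fs=fs)
print(f"lag of strongest correlation: {best_lag:.2f} s")
```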
This paper provides a description of the MediaEval 2016 “Emotional Impact of Movies” task. It builds on previous years’ editions of the Affect in Multimedia Task: Violent Scenes Detection. However, in this year’s task, participants are expected to create systems that automatically predict the emotional impact that video content will have on …
Automatic prediction of emotions requires reliably annotated data, which can be obtained using scoring or pairwise ranking. But can we predict an emotional score using a ranking-based annotation approach? In this paper, we propose to answer this question by describing a regression analysis to map crowdsourced rankings into affective scores in the induced …
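As a rough illustration of mapping ranks to scores via regression, the sketch below calibrates a model on a subset of clips that have both a crowdsourced rank and an absolute rating, then predicts scores for rank-only clips. The model choice (scikit-learn Gaussian Process regression), variable names, and synthetic data are assumptions for illustration, not the analysis reported in the paper:

```python
# Hypothetical sketch: regress continuous affective scores onto crowdsourced ranks.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Synthetic calibration set: ranks of 200 clips and their rated arousal scores.
ranks = np.arange(200, dtype=float)
scores = np.tanh((ranks - 100) / 40) + 0.05 * rng.standard_normal(200)

kernel = RBF(length_scale=20.0) + WhiteKernel(noise_level=0.01)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr.fit(ranks.reshape(-1, 1), scores)

# Predict a score (with uncertainty) for clips given only their rank.
new_ranks = np.array([[10.0], [100.0], [190.0]])
mean, std = gpr.predict(new_ranks, return_std=True)
for r, m, s in zip(new_ranks.ravel(), mean, std):
    print(f"rank {r:5.0f} -> score {m:+.2f} ± {s:.2f}")
```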