MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversations
- Soujanya Poria, Devamanyu Hazarika, Navonil Majumder, Gautam Naik, E. Cambria, Rada Mihalcea
- Computer Science, Annual Meeting of the Association for…
- 5 October 2018
The Multimodal EmotionLines Dataset (MELD), an extension and enhancement of EmotionLines, contains about 13,000 utterances from 1,433 dialogues from the TV series Friends and shows the importance of contextual and multimodal information for emotion recognition in conversations.
DialogueRNN: An Attentive RNN for Emotion Detection in Conversations
- Navonil Majumder, Soujanya Poria, Devamanyu Hazarika, Rada Mihalcea, Alexander Gelbukh, E. Cambria
- Computer Science, AAAI Conference on Artificial Intelligence
- 1 November 2018
A new method based on recurrent neural networks is presented that keeps track of the individual party states throughout the conversation, uses this information for emotion classification, and outperforms the state of the art by a significant margin on two different datasets.
Automatic Detection of Fake News
- Verónica Pérez-Rosas, Bennett Kleinberg, Alexandra Lefevre, Rada Mihalcea
- Computer Science, International Conference on Computational…
- 23 August 2017
This paper introduces two novel datasets for the task of fake news detection, covering seven different news domains, and conducts a set of learning experiments to build accurate fake news detectors that can achieve accuracies of up to 76%.
ICON: Interactive Conversational Memory Network for Multimodal Emotion Detection
- Devamanyu Hazarika, Soujanya Poria, Rada Mihalcea, E. Cambria, Roger Zimmermann
- Computer Science, Psychology, Conference on Empirical Methods in Natural…
- 2018
Interactive COnversational memory Network (ICON) is a multimodal emotion detection framework that extracts multimodal features from conversational videos and hierarchically models self- and inter-speaker emotional influences into global memories to aid in predicting the emotional orientation of utterance-videos.
Emotion Recognition in Conversation: Research Challenges, Datasets, and Recent Advances
- Soujanya Poria, Navonil Majumder, Rada Mihalcea, E. Hovy
- Computer Science, IEEE Access
- 8 May 2019
This paper discusses the research challenges in emotion recognition in conversation (ERC), describes the drawbacks of existing approaches, and examines why they fail to overcome those challenges.
COSMIC: COmmonSense knowledge for eMotion Identification in Conversations
- Deepanway Ghosal, Navonil Majumder, Alexander Gelbukh, Rada Mihalcea, Soujanya Poria
- Computer Science, Findings
- 6 October 2020
This paper proposes COSMIC, a new framework that incorporates different elements of commonsense knowledge, such as mental states, events, and causal relations, and builds upon them to learn interactions between interlocutors participating in a conversation.
Deception Detection using Real-life Trial Data
- Verónica Pérez-Rosas, M. Abouelenien, Rada Mihalcea, Mihai Burzo
- Computer Science, International Conference on Multimodal…
- 9 November 2015
A novel dataset consisting of videos collected from public court trials is introduced, and the use of verbal and non-verbal modalities is explored to build a multimodal deception detection system that aims to discriminate between truthful and deceptive statements provided by defendants and witnesses.
CASCADE: Contextual Sarcasm Detection in Online Discussion Forums
- Devamanyu Hazarika, Soujanya Poria, Sruthi Gorantla, E. Cambria, Roger Zimmermann, Rada Mihalcea
- Computer Science, International Conference on Computational…
- 1 May 2018
This paper proposes a ContextuAl SarCasm DEtector (CASCADE), which adopts a hybrid approach of both content- and context-driven modeling for sarcasm detection in online social media discussions.
Multimodal Sentiment Analysis of Spanish Online Videos
- Verónica Pérez-Rosas, Rada Mihalcea, Louis-Philippe Morency
- Physics, Computer Science, IEEE Intelligent Systems
- 1 May 2013
The presented multimodal sentiment analysis method integrates linguistic, audio, and visual features to identify sentiment in online videos. In particular, experiments focus on a new dataset…
Utterance-Level Multimodal Sentiment Analysis
- Verónica Pérez-Rosas, Rada Mihalcea, Louis-Philippe Morency
- Computer Science, Annual Meeting of the Association for…
- 2013
It is shown that multimodal sentiment analysis can be effectively performed, and that the joint use of visual, acoustic, and linguistic modalities can lead to error rate reductions of up to 10.5% as compared to the best performing individual modality.
...