Multimodal Language Analysis in the Wild: CMU-MOSEI Dataset and Interpretable Dynamic Fusion Graph
TLDR: This paper introduces CMU Multimodal Opinion Sentiment and Emotion Intensity (CMU-MOSEI), the largest dataset for sentiment analysis and emotion recognition to date, and proposes a novel multimodal fusion technique, the Dynamic Fusion Graph (DFG), which is highly interpretable and achieves competitive performance compared with the previous state of the art.
Tensor Fusion Network for Multimodal Sentiment Analysis
TLDR: A novel model, termed the Tensor Fusion Network, is introduced; it learns intra-modality and inter-modality dynamics end-to-end and outperforms state-of-the-art approaches for both multimodal and unimodal sentiment analysis.
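As a rough illustration of the fusion step (a minimal sketch, not the authors' code), the snippet below forms the triple outer product of three modality embeddings, each padded with a constant 1 so that unimodal and bimodal interaction terms survive alongside the trimodal ones, as the paper describes; the embedding dimensions here are arbitrary.

```python
import numpy as np

def tensor_fusion(z_text, z_visual, z_audio):
    # Append a constant 1 to each embedding so unimodal and bimodal
    # interactions are retained inside the triple outer product.
    zt = np.concatenate([z_text, [1.0]])
    zv = np.concatenate([z_visual, [1.0]])
    za = np.concatenate([z_audio, [1.0]])
    # Triple outer product: every cross-modal feature combination.
    fused = np.einsum('i,j,k->ijk', zt, zv, za)
    return fused.reshape(-1)  # flatten for a downstream classifier

# Toy embeddings; dimensions are illustrative, not from the paper.
f = tensor_fusion(np.random.randn(4), np.random.randn(3), np.random.randn(2))
print(f.shape)  # (5*4*3,) = (60,)
```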
Context-Dependent Sentiment Analysis in User-Generated Videos
TLDR: An LSTM-based model is proposed that enables utterances to capture contextual information from surrounding utterances in the same video, aiding classification; it shows a 5-10% performance improvement over the state of the art and generalizes robustly.
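A minimal sketch of that idea, assuming precomputed utterance features (the layer sizes and class count are illustrative assumptions, not the paper's): a bidirectional LSTM runs over the sequence of utterances in one video, so each utterance's representation absorbs context from its neighbors before classification.

```python
import torch
import torch.nn as nn

class ContextualLSTM(nn.Module):
    """Classify each utterance using context from surrounding
    utterances in the same video (hyperparameters are illustrative)."""
    def __init__(self, feat_dim=100, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.clf = nn.Linear(2 * hidden, n_classes)

    def forward(self, utterances):      # (batch, seq_len, feat_dim)
        ctx, _ = self.lstm(utterances)  # context-aware representations
        return self.clf(ctx)            # per-utterance logits

video = torch.randn(1, 12, 100)         # 12 utterances, 100-d features
print(ContextualLSTM()(video).shape)    # torch.Size([1, 12, 2])
```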
Memory Fusion Network for Multi-view Sequential Learning
TLDR: A new neural architecture for multi-view sequential learning, called the Memory Fusion Network (MFN), explicitly accounts for both intra-view and cross-view interactions and continuously models them through time.
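The sketch below is a deliberately simplified take on that design, not the paper's exact architecture: one LSTM per view captures intra-view dynamics, and a gated attention block over the concatenated hidden states writes a cross-view summary into a shared memory at each step. The attention and gating here stand in for the paper's Delta-memory Attention Network and multi-view gated memory, and all dimensions are assumptions.

```python
import torch
import torch.nn as nn

class MiniMFN(nn.Module):
    """Simplified Memory Fusion Network sketch: per-view LSTMs plus a
    gated, attention-weighted write into a shared cross-view memory."""
    def __init__(self, view_dims=(20, 5, 5), hidden=32, mem_dim=64):
        super().__init__()
        self.lstms = nn.ModuleList(
            [nn.LSTM(d, hidden, batch_first=True) for d in view_dims])
        joint = hidden * len(view_dims)
        self.attn = nn.Linear(joint, joint)    # cross-view attention scores
        self.write = nn.Linear(joint, mem_dim) # memory write content
        self.gate = nn.Linear(joint, mem_dim)  # memory write gate
        self.mem_dim = mem_dim

    def forward(self, views):  # list of (batch, T, view_dim) tensors
        hs = [lstm(v)[0] for lstm, v in zip(self.lstms, views)]
        h = torch.cat(hs, dim=-1)              # (batch, T, joint)
        mem = torch.zeros(h.size(0), self.mem_dim)
        for t in range(h.size(1)):
            a = torch.softmax(self.attn(h[:, t]), dim=-1)
            g = torch.sigmoid(self.gate(h[:, t]))
            mem = (1 - g) * mem + g * torch.tanh(self.write(a * h[:, t]))
        return mem  # fused multi-view summary

views = [torch.randn(2, 10, d) for d in (20, 5, 5)]
print(MiniMFN()(views).shape)  # torch.Size([2, 64])
```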
Recent Trends in Deep Learning Based Natural Language Processing [Review Article]
TLDR: This paper reviews significant deep learning models and methods that have been employed for numerous NLP tasks and provides a walk-through of their evolution.
MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversations
TLDR: The Multimodal EmotionLines Dataset (MELD), an extension and enhancement of EmotionLines, contains about 13,000 utterances from 1,433 dialogues from the TV series Friends and demonstrates the importance of contextual and multimodal information for emotion recognition in conversations.
SenticNet 3: A Common and Common-Sense Knowledge Base for Cognition-Driven Sentiment Analysis
TLDR: SenticNet 3 models nuanced semantics and sentics (that is, the conceptual and affective information associated with multi-word natural language expressions), representing information at a level of symbolic opacity intermediate between that of neural networks and that of typical symbolic systems.
New Avenues in Opinion Mining and Sentiment Analysis
TLDR: The history, current use, and future of opinion mining and sentiment analysis are discussed, along with relevant techniques and tools.
DialogueRNN: An Attentive RNN for Emotion Detection in Conversations
TLDR: A new recurrent-neural-network-based method keeps track of the individual party states throughout the conversation and uses this information for emotion classification, outperforming the state of the art by a significant margin on two different datasets.
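A minimal sketch of the party-state idea, assuming precomputed utterance features (the full model's global context state and attention mechanism are omitted, and all dimensions are illustrative): a GRU cell updates only the current speaker's state at each turn, and an emotion GRU reads from that state to classify the utterance.

```python
import torch
import torch.nn as nn

class PartyStateTracker(nn.Module):
    """Simplified DialogueRNN-style tracker: per-party GRU states
    updated turn by turn, feeding an emotion GRU and classifier."""
    def __init__(self, utt_dim=100, state_dim=64, n_classes=6):
        super().__init__()
        self.state_dim = state_dim
        self.party_cell = nn.GRUCell(utt_dim, state_dim)
        self.emotion_cell = nn.GRUCell(state_dim, state_dim)
        self.clf = nn.Linear(state_dim, n_classes)

    def forward(self, utts, speakers, n_parties):
        # utts: (T, utt_dim); speakers: party index per utterance
        party = [torch.zeros(1, self.state_dim) for _ in range(n_parties)]
        e = torch.zeros(1, self.state_dim)
        logits = []
        for t, s in enumerate(speakers):
            party[s] = self.party_cell(utts[t:t+1], party[s])  # speaker update
            e = self.emotion_cell(party[s], e)                 # emotion state
            logits.append(self.clf(e))
        return torch.cat(logits)  # (T, n_classes)

utts = torch.randn(5, 100)  # 5 utterances, 100-d features
out = PartyStateTracker()(utts, [0, 1, 0, 1, 1], n_parties=2)
print(out.shape)            # torch.Size([5, 6])
```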
A review of affective computing: From unimodal analysis to multimodal fusion
TLDR: This first-of-its-kind, comprehensive literature review of the diverse field of affective computing focuses mainly on the use of audio, visual, and text information for multimodal affect analysis, and outlines existing methods for fusing information from different modalities.