Yelin Kim

Automatic emotion recognition systems predict high-level affective content from low-level human-centered signal cues. These systems have seen great improvements in classification accuracy, due in part to advances in feature selection methods. However, many of these feature selection methods capture only linear relationships between features, or alternatively …
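The abstract above is truncated, so the paper's own remedy is not shown; the following is only a minimal sketch of the limitation it names, contrasting a linear feature-scoring criterion with one common nonlinear alternative (mutual information). The data and the choice of criteria here are illustrative assumptions, not the paper's method.

```python
# Sketch: a linear criterion (ANOVA F-test) vs. a nonlinear one (mutual
# information) on a feature whose relation to the label is quadratic.
import numpy as np
from sklearn.feature_selection import f_classif, mutual_info_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                 # 200 samples, 10 low-level features
y = (X[:, 0] ** 2 + 0.1 * rng.normal(size=200) > 1).astype(int)  # nonlinear link

f_scores, _ = f_classif(X, y)                  # scores linear (mean-shift) effects
mi_scores = mutual_info_classif(X, y, random_state=0)  # also sees nonlinear ones

print("F-test score of feature 0:", f_scores[0])   # near-zero: symmetric effect
print("MI score of feature 0:   ", mi_scores[0])   # clearly nonzero
```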
Facial movement is modulated both by emotion and by speech articulation. Facial emotion recognition systems aim to discriminate between emotions while reducing the speech-related variability in facial cues. This aim is often achieved using two key steps: (1) phoneme segmentation, in which facial cues are temporally divided into single-phoneme units, and (2) …
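A minimal sketch of step (1), assuming frame-level facial features and phoneme boundaries from a forced aligner are available; all names and values below (frame_rate, alignments, the mean-pooling summary) are illustrative assumptions rather than the paper's exact pipeline.

```python
# Group frame-level facial cues into single-phoneme segments.
import numpy as np

frame_rate = 30.0                        # facial feature frames per second
frames = np.random.randn(90, 6)          # 3 s of 6-dim facial cues

# (phoneme, start_sec, end_sec) triples, e.g. from a forced aligner
alignments = [("HH", 0.0, 0.4), ("AH", 0.4, 1.1), ("L", 1.1, 1.6),
              ("OW", 1.6, 3.0)]

segments = []
for phone, t0, t1 in alignments:
    i0, i1 = int(t0 * frame_rate), int(t1 * frame_rate)
    seg = frames[i0:i1]                  # facial cues within one phoneme
    # Summarize each single-phoneme unit, e.g. by its mean feature vector.
    segments.append((phone, seg.mean(axis=0)))

for phone, feat in segments:
    print(phone, feat.shape)
```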
Human emotion changes continuously and sequentially. This results in dynamics intrinsic to affective communication. One of the goals of automatic emotion recognition research is to computationally represent and analyze these dynamic patterns. In this work, we focus on the global utterance-level dynamics. We are motivated by the hypothesis that global …
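One simple way to represent global utterance-level dynamics is to summarize a frame-level trajectory with utterance-wide statistics of its values and frame-to-frame deltas; the specific functionals chosen in this sketch are assumptions, not necessarily those used in the paper.

```python
# Collapse a (T, D) frame-level trajectory into one fixed-length vector of
# utterance-level dynamic statistics.
import numpy as np

def utterance_dynamics(frames: np.ndarray) -> np.ndarray:
    """frames: (T, D) frame-level features for one utterance."""
    deltas = np.diff(frames, axis=0)     # frame-to-frame change
    stats = [frames.mean(axis=0), frames.std(axis=0),
             deltas.mean(axis=0), deltas.std(axis=0)]
    return np.concatenate(stats)         # one vector per utterance

frames = np.random.randn(120, 8)         # 120 frames of 8-dim features
print(utterance_dynamics(frames).shape)  # (32,)
```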
We propose a temporal segmentation and classification method that accounts for transition patterns between events of interest. We apply this method to automatically detect salient human action events from videos. A discriminative classifier (e.g., a Support Vector Machine) is used to recognize human action events, and an efficient dynamic programming …
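An illustrative sketch of the segmentation idea: a classifier scores candidate segments, and dynamic programming selects the boundaries that maximize total score minus a per-boundary transition penalty. The toy score matrix and penalty value below are hypothetical stand-ins, not the paper's exact formulation.

```python
# Dynamic-programming segmentation over classifier scores.
import numpy as np

def segment(score_fn, T, max_len, penalty):
    """score_fn(i, j): classifier score of frames [i, j). Returns cut points."""
    best = np.full(T + 1, -np.inf)       # best total score ending at frame j
    back = np.zeros(T + 1, dtype=int)    # backpointer to previous boundary
    best[0] = 0.0
    for j in range(1, T + 1):
        for i in range(max(0, j - max_len), j):
            cand = best[i] + score_fn(i, j) - penalty
            if cand > best[j]:
                best[j], back[j] = cand, i
    cuts, j = [], T
    while j > 0:                         # recover boundaries by backtracking
        cuts.append(j)
        j = back[j]
    return sorted(cuts)

rng = np.random.default_rng(1)
S = rng.normal(size=(51, 51))            # toy per-segment scores S[i, j]
print(segment(lambda i, j: S[i, j], T=50, max_len=15, penalty=0.5))
```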
The need for human-centered, affective multimedia interfaces has motivated research in automatic emotion recognition. In this article, we focus on facial emotion recognition. Specifically, we target a domain in which speakers produce emotional facial expressions while speaking. The main challenge of this domain is the presence of modulations due to both …
Human expressions are often ambiguous and unclear, resulting in disagreement or confusion among different human evaluators. In this paper, we investigate how audiovisual emotion recognition systems can leverage prototypicality, the level of agreement or confusion among human evaluators. We propose the use of a weighted Support Vector Machine to explicitly …
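A minimal sketch of weighting training examples by evaluator agreement, using scikit-learn's per-sample weights; the features, labels, and agreement values below are synthetic placeholders, and the exact weighting scheme in the paper may differ.

```python
# Weighted SVM: prototypical (high-agreement) examples get more influence.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))           # audiovisual features
y = rng.integers(0, 2, size=100)         # emotion labels
agreement = rng.uniform(0.3, 1.0, 100)   # fraction of evaluators who agreed

clf = SVC(kernel="rbf")
clf.fit(X, y, sample_weight=agreement)   # per-sample weights on the margin
print(clf.score(X, y))
```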
Research has demonstrated that humans require different amounts of information, over time, to accurately perceive emotion expressions, and that this varies as a function of the emotion class. For example, recognition of happiness requires a longer stimulus than recognition of anger. However, previous automatic emotion recognition systems have often overlooked these …
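A sketch of the underlying idea: evaluate recognition accuracy as a function of how much of each utterance the system is allowed to observe, so that per-class differences in required stimulus length become visible. The data, window lengths, and placeholder classifier here are illustrative assumptions.

```python
# Accuracy as a function of observation-window length.
import numpy as np

def accuracy_by_window(utterances, labels, predict, lengths):
    """For each window length L, score predictions made from the first L frames."""
    out = {}
    for L in lengths:
        correct = [predict(u[:L]) == lab for u, lab in zip(utterances, labels)]
        out[L] = float(np.mean(correct))
    return out

rng = np.random.default_rng(2)
utts = [rng.normal(size=(120, 4)) for _ in range(50)]   # 50 utterances
labels = rng.integers(0, 4, size=50)                    # 4 emotion classes
dummy_predict = lambda frames: int(frames.mean() > 0)   # placeholder classifier
print(accuracy_by_window(utts, labels, dummy_predict, [10, 30, 60, 120]))
```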
Automatic emotion recognition from audio-visual data is a topic that has been broadly explored using data captured in the laboratory. However, these data are not necessarily representative of how emotion is manifested in the real world. In this paper, we describe our system for the 2016 Emotion Recognition in the Wild challenge. We use the Acted Facial Expressions in the Wild (AFEW) dataset …
My PhD work aims at developing computational methodologies for automatic emotion recognition from audiovisual behavioral data. A main challenge in automatic emotion recognition is that human behavioral data are highly complex, due to multiple sources that vary and modulate behaviors. My goal is to provide computational frameworks for understanding and …