Automated analysis of human affective behavior has attracted increasing attention from researchers in psychology, computer science, linguistics, neuroscience, and related disciplines. Promising approaches have been reported, including automatic methods for facial and vocal affect recognition. However, existing methods typically handle only deliberately…
Advances in computer processing power and emerging algorithms are enabling new ways of envisioning Human-Computer Interaction. Although the benefit of audiovisual fusion for affect recognition is expected from both psychological and engineering perspectives, most existing approaches to automatic human affect analysis are uni-modal: information processed…
Existing methods of facial expression recognition are typically based on near-frontal face data; the analysis of non-frontal-view facial expressions remains largely unexplored. The availability of a recent 3D facial expression database (the BU-3DFE database) motivates us to explore an interesting question: whether non-frontal-view facial…
This paper presents a new, compact, canonical representation for arithmetic expressions, called the Taylor Expansion Diagram (TED). It can be used to facilitate the verification of RTL specifications and hardware implementations of arithmetic designs, and specifically the equivalence checking of complex algebraic and arithmetic expressions that arise in symbolic…
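The abstract is cut off before the expansion itself, so the following worked algebra is our own generic illustration of the Taylor-decomposition principle behind TEDs, not an excerpt from the paper. Because RTL arithmetic expressions are polynomial in their word-level variables, the Taylor series of F with respect to a variable x terminates:

    F(x) = F(0) + x\,F'(0) + \frac{x^2}{2!}\,F''(0) + \cdots

For instance, taking F = (A + B)\cdot C and expanding with respect to A gives F|_{A=0} = B\cdot C and \partial F/\partial A = C, with all higher derivatives zero, so F = B\cdot C + A\cdot C. Each expansion step of this kind becomes one node of the diagram, with the constant and linear terms as its children.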
Change in a speaker's emotion is a fundamental component of human communication. Automatic recognition of spontaneous emotion would significantly impact human-computer interaction and emotion-related studies in education, psychology, and psychiatry. In this paper, we explore methods for detecting emotional facial expressions occurring in a realistic human…
This paper presents a new, compact, canonical graph-based representation, called Taylor Expansion Diagrams (TEDs). It is based on a general non-binary decomposition principle using Taylor series expansion. It can be exploited to facilitate the verification of high-level (RTL) design descriptions. We present the theory behind TEDs, comment upon its canonicity…
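As a minimal runnable sketch of that decomposition step (the sympy tooling, variable names, and expansion order here are our choices for illustration, not the papers' implementation):

    import sympy as sp

    A, B, C = sp.symbols('A B C')
    F = (A + B) * C

    # One TED-style expansion step with respect to A; the series terminates
    # because the expression is polynomial in its variables.
    const_A = F.subs(A, 0)         # constant term: B*C
    lin_A = sp.diff(F, A)          # linear coefficient: C

    # Recurse on the constant child with the next variable in the order.
    const_AB = const_A.subs(B, 0)  # 0
    lin_AB = sp.diff(const_A, B)   # C

    # Reassembling the expansion reproduces the original function.
    assert sp.expand(F) == sp.expand((const_AB + B * lin_AB) + A * lin_A)

Under a fixed variable order, with redundant nodes removed and common subexpressions shared, such a decomposition yields the compact, canonical form the abstract refers to.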
Automatic multimodal recognition of spontaneous emotional expressions is a largely unexplored and challenging problem. In this paper, we explore audiovisual emotion recognition in a realistic human conversation setting, the Adult Attachment Interview (AAI). Based on the assumption that facial expression and vocal expression are at the same coarse affective…
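The snippet ends before the fusion details, so here is a deliberately generic, hedged sketch of coarse-level decision fusion of the two modalities; the class set, posteriors, and 0.6/0.4 weights are invented for illustration and are not the paper's model:

    import numpy as np

    # Hypothetical per-modality posteriors over coarse affective states
    # (positive / neutral / negative); all numbers are made up.
    face_probs = np.array([0.6, 0.3, 0.1])   # facial-expression classifier
    voice_probs = np.array([0.5, 0.2, 0.3])  # vocal-affect classifier

    # Weighted decision-level (late) fusion of the two posteriors.
    w_face, w_voice = 0.6, 0.4
    fused = w_face * face_probs + w_voice * voice_probs
    fused /= fused.sum()  # renormalize to a proper distribution

    labels = ["positive", "neutral", "negative"]
    print(labels[int(np.argmax(fused))])  # prints "positive"

Fusing at this coarse level only requires the two modalities to agree on broad affective states, which is exactly the assumption the abstract states.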
The ability of a computer to detect and appropriately respond to changes in a user's affective state has significant implications for Human-Computer Interaction (HCI). In this paper, we present our efforts toward audio-visual affect recognition on 11 affective states customized for HCI applications (four cognitive/motivational and seven basic affective…