Hüseyin Çakmak

In this paper we present a complete interactive system able to detect human laughter and respond appropriately, by integrating information about the user's behavior and the context. Furthermore, the impact of our autonomous laughter-aware agent on the humor experience of the user and on the interaction between user and agent is evaluated by subjective and…
In this paper, automatic phonetic transcription of laughter is achieved with the help of Hidden Markov Models (HMMs). The models are evaluated in a speaker-independent way. Several measures to evaluate the quality of the transcriptions are discussed, some focusing on the recognized sequences (without paying attention to the segmentation of the phones)…
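At its core, HMM-based phonetic transcription decodes the most likely phone sequence from a sequence of acoustic observations. A minimal Viterbi decoding sketch with toy parameters (the two "phones" and all probabilities are illustrative values, not the trained laughter models from the paper):

```python
import numpy as np

def viterbi(obs, start_p, trans_p, emit_p, states):
    """Return the most likely state (phone) path for an observation sequence."""
    n, T = len(states), len(obs)
    delta = np.zeros((T, n))            # best path probability ending in state j at time t
    back = np.zeros((T, n), dtype=int)  # backpointers for path recovery
    delta[0] = start_p * emit_p[:, obs[0]]
    for t in range(1, T):
        for j in range(n):
            scores = delta[t - 1] * trans_p[:, j]
            back[t, j] = int(np.argmax(scores))
            delta[t, j] = scores[back[t, j]] * emit_p[j, obs[t]]
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [states[s] for s in reversed(path)]

# Toy 2-phone laugh model: "h" (unvoiced burst) vs "a" (voiced vowel);
# observations are quantized frame classes 0/1.
states = ["h", "a"]
start_p = np.array([0.8, 0.2])
trans_p = np.array([[0.6, 0.4],
                    [0.1, 0.9]])
emit_p = np.array([[0.9, 0.1],   # "h" mostly emits class 0
                   [0.2, 0.8]])  # "a" mostly emits class 1
print(viterbi([0, 0, 1, 1, 1], start_p, trans_p, emit_p, states))
# → ['h', 'h', 'a', 'a', 'a']
```

In practice the observations would be continuous acoustic feature vectors with Gaussian mixture emissions rather than discrete classes, but the decoding principle is the same.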
In this paper we apply speaker-dependent training of Hidden Markov Models (HMMs) to audio and visual laughter synthesis separately. The two modalities are synthesized with a forced durations approach and are then combined together to render audiovisual laughter on a 3D avatar. This paper focuses on visual synthesis of laughter and its perceptive evaluation…
A synchronous database of acoustic and 3D facial marker data was built for audiovisual laughter synthesis. Since the aim is to use this database for HMM-based modeling and synthesis, the amount of collected data from one given subject had to be maximized. The corpus contains 251 utterances of laughter from one male participant. Laughter was elicited with…
In this paper, we focus on the development of new methods to detect and analyze laughter, in order to enhance human-computer interactions. First, the general architecture of such a laughter-enabled application is presented. Then, we propose the use of two new modalities, namely body movements and respiration, to enrich the audiovisual laughter detection…
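A multimodal laughter detector of this kind typically combines per-modality evidence into a single decision. A minimal weighted late-fusion sketch; the modality names mirror those mentioned above, but the scores, weights, and threshold are illustrative assumptions, not the paper's method:

```python
def fuse_scores(scores, weights, threshold=0.5):
    """Weighted late fusion of per-modality laughter probabilities.

    scores, weights: dicts keyed by modality name (illustrative values).
    Returns the fused probability and a binary laughter decision.
    """
    total_w = sum(weights[m] for m in scores)
    fused = sum(scores[m] * weights[m] for m in scores) / total_w
    return fused, fused >= threshold

# Hypothetical per-modality detector outputs for one analysis window.
scores = {"audio": 0.9, "video": 0.6, "body": 0.7, "respiration": 0.4}
weights = {"audio": 0.4, "video": 0.3, "body": 0.2, "respiration": 0.1}
fused, is_laugh = fuse_scores(scores, weights)
print(round(fused, 3), is_laugh)
# → 0.72 True
```

Late fusion keeps each modality's detector independent, so a new modality (such as respiration) can be added without retraining the others.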
In this paper we propose synchronization rules between acoustic and visual laughter synthesis systems. This work follows up our previous studies on acoustic laughter synthesis and visual laughter synthesis. The need for synchronization rules comes from the constraint that HMM-based synthesis of laughter cannot be performed using a unified…
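One simple way to synchronize separately synthesized streams is to rescale the visual trajectory so that it spans the durations produced by the acoustic model, in the spirit of the forced durations approach mentioned above. A toy sketch of such an alignment (the function and its linear-rescaling rule are illustrative assumptions, not the paper's actual synchronization rules):

```python
def sync_visual_to_acoustic(visual_times, acoustic_duration):
    """Linearly rescale visual keyframe timestamps (seconds) so the
    visual sequence spans exactly the acoustic segment duration.

    A toy forced-duration alignment: real synchronization rules would
    operate per laughter phone rather than on the whole sequence.
    """
    t0, t1 = visual_times[0], visual_times[-1]
    scale = acoustic_duration / (t1 - t0)
    return [(t - t0) * scale for t in visual_times]

# Visual keyframes originally span 0.8 s; the acoustic segment lasts 0.6 s.
aligned = sync_visual_to_acoustic([0.0, 0.2, 0.5, 0.8], 0.6)
print([round(t, 3) for t in aligned])
# → [0.0, 0.15, 0.375, 0.6]
```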
This paper presents the results of our participation in the ninth eNTERFACE workshop on multimodal user interfaces. Our target for this workshop was to bring some technologies currently used in speech recognition and synthesis to a new level, i.e. to make them the core of a new HMM-based mapping system. The idea of statistical mapping has been investigated, more…