Magnus Nordstrand

The aim of this paper is to present the multimodal speech corpora collected at KTH, in the framework of the European project PF-Star, and discuss some of the issues related to the analysis and implementation of human communicative and emotional visual correlates of speech in synthetic conversational agents. Two multimodal speech corpora have been collected …
This paper reports the results of a preliminary cross-evaluation experiment run in the framework of the European research project PF-Star, with the twofold aim of evaluating the possibility of exchanging FAP data between the involved sites and assessing the adequacy of the emotional facial gestures performed by talking heads. The results provide initial …
This paper describes a method for acquiring data on facial movements to be analysed for implementation in an animated talking head. We will show preliminary data on how a number of articulatory parameters vary under the influence of expressiveness in speech and gestures. Primarily, we focused on expressive gestures and emotions conveying information that is …
We present our current state of development regarding animated agents applicable to affective dialogue systems. A new set of tools is under development to support the creation of animated characters compatible with the MPEG-4 facial animation standard. Furthermore, we have collected a multimodal expressive speech database including video, audio, and 3D …
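As a rough illustration of the kind of data such MPEG-4-compatible tools operate on, the sketch below shows a minimal facial animation parameter (FAP) frame in Python. The class and method names are invented for illustration; the abstract does not describe the actual tools or file formats used at KTH.

```python
# A minimal sketch of an MPEG-4 FAP frame, assuming a simple dataclass
# representation; names here are illustrative, not the actual KTH tools.
from dataclasses import dataclass, field

# MPEG-4 defines 68 facial animation parameters (FAPs): FAPs 1 and 2 are
# high-level (visemes, expressions), FAPs 3-68 are low-level displacements
# expressed in facial animation parameter units (FAPUs).
NUM_FAPS = 68

@dataclass
class FAPFrame:
    """One animation frame: a value per FAP, plus a mask of active FAPs."""
    values: list[float] = field(default_factory=lambda: [0.0] * NUM_FAPS)
    active: list[bool] = field(default_factory=lambda: [False] * NUM_FAPS)

    def set_fap(self, index: int, value: float) -> None:
        # FAP indices in the standard are 1-based.
        self.values[index - 1] = value
        self.active[index - 1] = True

# Example: raise both inner eyebrows (FAPs 31 and 32 in the standard).
frame = FAPFrame()
frame.set_fap(31, 120.0)
frame.set_fap(32, 120.0)
```

A stream of such frames, sampled at a fixed frame rate, is what one site would hand to another in a FAP-exchange experiment of the kind described above.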
We present a formalism for specifying verbal and non-verbal output from a multi-modal dialogue system. The output specification is XML-based and provides information about communicative functions of the output, without detailing the realisation of these functions. The aim is to let dialogue systems generate the same output for a wide variety of output …
We present a high-level formalism for specifying verbal and non-verbal output from a multimodal dialogue system. The output specification is XML-based and provides information about communicative functions of the output without detailing the realisation of these functions. The specification can be used to control an animated character that uses speech and …
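To make the idea concrete, the sketch below builds a hypothetical function-level output specification of this kind using Python's standard xml.etree.ElementTree. The element names (output, utterance, emphasis, give-turn) are invented for illustration; the abstracts do not give the actual schema.

```python
# A hypothetical sketch of a function-level XML output specification;
# element names are invented here, as the abstracts do not give the schema.
import xml.etree.ElementTree as ET

# The utterance is annotated with communicative functions (e.g. emphasis,
# turn-taking) rather than concrete realisations (specific gestures, pitch
# contours), leaving realisation to each output device.
output = ET.Element("output")
utterance = ET.SubElement(output, "utterance")
utterance.text = "The next train to Stockholm leaves at "

emphasis = ET.SubElement(utterance, "emphasis")
emphasis.text = "nine fifteen"
emphasis.tail = "."

# A turn-giving function at the end of the utterance: an animated agent
# might realise this as a gaze shift, a speech-only system as a pause.
ET.SubElement(output, "give-turn")

print(ET.tostring(output, encoding="unicode"))
```

The point of such a format is that the same specification can drive very different renderers, which is what lets the dialogue system stay agnostic about how each communicative function is realised.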
This paper describes a method for acquiring data for facial movement analysis and implementation in an animated talking head. We will also show preliminary data on how a number of articulatory and facial parameters for some Swedish vowels vary under the influence of expressiveness in speech and gestures. Primarily, we have been concerned with expressive …
Facial expressions are a form of nonverbal communication: they convey a person's emotional state and play an important role in face-to-face human-computer communication. Automatic facial expression synthesis has become a popular research area. It can be used in many fields, such as physiology, education, criminal investigation, and the analysis of …
The primary purpose of this one-day workshop is to share information and engage in collective planning for the future creation of usable multidisciplinary multimodal resources. It will focus on the following issues regarding multimodal corpora: how researchers build models of human behaviour from the annotations of video corpora, how they use such …