Magnus Nordstrand

The aim of this paper is to present the multimodal speech corpora collected at KTH in the framework of the European project PF-Star, and to discuss some of the issues related to the analysis and implementation of human communicative and emotional visual correlates of speech in synthetic conversational agents. Two…
We present our current state of development regarding animated agents applicable to affective dialogue systems. A new set of tools is under development to support the creation of animated characters compatible with the MPEG-4 facial animation standard. Furthermore, we have collected a multimodal expressive speech database including video, audio and 3D…
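The abstract names the MPEG-4 facial animation standard but not the tools themselves, so the following is only a minimal sketch of the kind of FAP-based control such characters rely on. The FAP id for open_jaw (3) follows the standard; everything else here (class names, units, the key-frame scheme) is invented for illustration.

    # Minimal sketch of MPEG-4-style control via facial animation
    # parameters (FAPs). Only FAP 3 (open_jaw) is taken from the
    # standard; the key-frame scheme below is invented.
    from dataclasses import dataclass

    @dataclass
    class FapFrame:
        time_ms: int
        values: dict[int, float]  # FAP id -> displacement value

    def interpolate(a: FapFrame, b: FapFrame, t: float) -> FapFrame:
        """Linear blend between two key frames, 0.0 <= t <= 1.0."""
        ids = set(a.values) | set(b.values)
        return FapFrame(
            time_ms=round(a.time_ms + t * (b.time_ms - a.time_ms)),
            values={i: (1 - t) * a.values.get(i, 0.0)
                       + t * b.values.get(i, 0.0) for i in ids},
        )

    neutral = FapFrame(0, {3: 0.0})
    jaw_open = FapFrame(40, {3: 180.0})
    halfway = interpolate(neutral, jaw_open, 0.5)  # jaw half open at 20 ms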
OBJECTIVE To evaluate the efficacy of a decontamination station following exposure of volunteers to liquids with physical characteristics comparable to sarin and mustard gas. DESIGN Twenty-four volunteers participated in the experiment, which was performed with all staff wearing personal protective equipment including respiratory protection. The clothes…
We present a formalism for specifying verbal and non-verbal output from a multi-modal dialogue system. The output specification is XML-based and provides information about communicative functions of the output, without detailing the realisation of these functions. The aim is to let dialogue systems generate the same output for a wide variety of output…
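The abstract does not spell out the element inventory, so the following is only a hedged sketch of what a function-level, realisation-independent XML specification could look like. Every tag and attribute name below (utterance, segment, emphasis, gesture, function) is invented for illustration, not taken from the paper.

    # Hypothetical sketch of a function-level output specification.
    # Element and attribute names are invented; the paper's actual
    # XML schema may differ.
    import xml.etree.ElementTree as ET

    utt = ET.Element("utterance")
    seg = ET.SubElement(utt, "segment", {"function": "inform"})
    seg.text = "Your flight leaves at nine."
    ET.SubElement(seg, "emphasis", {"words": "nine"})
    ET.SubElement(utt, "gesture", {"function": "turn-give"})

    # Each output device decides for itself how to realise the
    # functions: a talking head may raise the eyebrows for "emphasis",
    # a plain TTS voice may use a pitch accent, a text GUI may use bold.
    print(ET.tostring(utt, encoding="unicode"))

Keeping the specification at the level of communicative functions is what lets the same document drive very different output channels.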
This paper describes a method for acquiring data on facial movements to be analysed for implementation in an animated talking head. We will show preliminary data on how a number of articulatory parameters vary under the influence of expressiveness in speech and gestures. Primarily we focused on expressive gestures and emotions conveying information that is…
This paper reports the results of a preliminary cross-evaluation experiment run in the framework of the European research project PF-Star, with the twofold aim of evaluating the possibility of exchanging FAP data between the involved sites and assessing the adequacy of the emotional facial gestures performed by talking heads. The results provide initial…
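Exchanging FAP data between sites presupposes some agreed serialisation. Purely as an illustrative sketch, here is an invented frame-per-line text layout; this is not the normative MPEG-4 file syntax, nor necessarily what the project used.

    # Hedged sketch of exchanging a FAP stream as plain text: one frame
    # per line, "frame_no fap_id=value ...". The layout is invented.
    def dump_fap_stream(frames, path):
        with open(path, "w") as f:
            for n, values in enumerate(frames):
                cells = " ".join(f"{i}={v:.1f}" for i, v in sorted(values.items()))
                f.write(f"{n} {cells}\n")

    def load_fap_stream(path):
        frames = []
        with open(path) as f:
            for line in f:
                _, *cells = line.split()
                frames.append({int(i): float(v)
                               for i, v in (c.split("=") for c in cells)})
        return frames

    dump_fap_stream([{3: 0.0}, {3: 90.0}, {3: 180.0}], "jaw.fap.txt")
    assert load_fap_stream("jaw.fap.txt")[2][3] == 180.0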
We present a high level formalism for specifying verbal and nonverbal output from a multimodal dialogue system. The output specification is XML-based and provides information about communicative functions of the output without detailing the realisation of these functions. The specification can be used to control an animated character that uses speech and…
This paper describes a method for acquiring data for facial movement analysis and implementation in an animated talking head. We will also show preliminary data on how a number of articulatory and facial parameters for some Swedish vowels vary under the influence of expressiveness in speech and gestures. Primarily we have been concerned with expressive…
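As a hedged illustration of what measuring one such articulatory parameter can amount to, assume optical markers on the lips; the marker names and the plain Euclidean metric below are assumptions for the sketch, not the paper's actual measurement setup.

    # Illustrative sketch only: deriving one articulatory parameter
    # (lip opening) from 3D marker data, e.g. per video frame.
    import math

    def lip_opening(frame):
        """frame: dict of marker name -> (x, y, z) position in mm."""
        return math.dist(frame["upper_lip_mid"], frame["lower_lip_mid"])

    neutral = {"upper_lip_mid": (0.0, 10.0, 0.0), "lower_lip_mid": (0.0, 4.0, 0.0)}
    happy   = {"upper_lip_mid": (0.0, 11.0, 0.0), "lower_lip_mid": (0.0, 2.0, 0.0)}
    # A larger opening for the expressive token than for the neutral one
    # would be the kind of parameter variation the paper examines.
    print(lip_opening(neutral), lip_opening(happy))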