We describe an implemented system which automatically generates and animates conversations between multiple human-like agents with appropriate and synchronized speech, intonation, facial expressions, and hand gestures. Conversation is created by a dialogue planner that produces the text as well as the intonation of the utterances. The …
This paper describes an international effort to unify a multimodal behavior generation framework for Embodied Conversational Agents (ECAs). We propose a three-stage model, which we call SAIBA, where the stages represent intent planning, behavior planning, and behavior realization. A Function Markup Language (FML), describing intent without referring to physical …
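The SAIBA pipeline described above can be sketched as three functions joined by its two interface languages, FML and BML. The sketch below is illustrative only: the class and function names, and the intent-to-behavior mapping, are hypothetical placeholders, not the SAIBA specification itself.

```python
# Minimal sketch of the SAIBA three-stage pipeline.
# All names (FML, BML, plan_intent, ...) are hypothetical illustrations.
from dataclasses import dataclass


@dataclass
class FML:
    """Function Markup Language: communicative intent, no physical detail."""
    intents: list


@dataclass
class BML:
    """Behavior Markup Language: concrete multimodal behaviors."""
    behaviors: list  # e.g. [("gesture", "wave"), ("face", "smile")]


def plan_intent(dialogue_goal: str) -> FML:
    # Stage 1 (intent planning): decide WHAT to communicate.
    return FML(intents=[dialogue_goal])


def plan_behavior(fml: FML) -> BML:
    # Stage 2 (behavior planning): map each intent to modality-specific
    # behaviors; the mapping table here is a toy example.
    mapping = {"greet": [("gesture", "wave"), ("face", "smile")]}
    behaviors = []
    for intent in fml.intents:
        behaviors.extend(mapping.get(intent, []))
    return BML(behaviors=behaviors)


def realize(bml: BML) -> list:
    # Stage 3 (behavior realization): turn behaviors into commands
    # for an animation engine.
    return [f"play {modality}:{action}" for modality, action in bml.behaviors]


commands = realize(plan_behavior(plan_intent("greet")))
```

The key design point is that each stage communicates only through its markup interface, so a behavior planner can be swapped out without touching the intent planner or the realizer.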
Since the beginning of the SAIBA effort to unify key interfaces in the multi-modal behavior generation process, the Behavior Markup Language (BML) has both gained ground as an important component in many projects worldwide and continued to undergo further refinement. This paper reports on the progress made in the last year in further developing BML. It …
This paper describes the results of a research project aimed at implementing a 'realistic' 3D Embodied Agent that can be animated in real-time and is 'believable and expressive': that is, able to communicate complex information coherently, through the combination and tight synchronisation of verbal and nonverbal signals. We describe, in particular, …
Social Signal Processing is the research domain aimed at bridging the social intelligence gap between humans and machines. This paper is the first survey of the domain that jointly considers its three major aspects, namely, modeling, analysis, and synthesis of social behavior. Modeling investigates laws and principles underlying social interaction, analysis …
In this paper, we present a 3D facial model compliant with MPEG-4 specifications; our aim was the realization of an animated model able to simulate, in a rapid and believable manner, the dynamic aspects of the human face. We have realized a Simple Facial Animation Engine (SFAE) where the 3D proprietary facial model has the look of a young woman: "Greta". …
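An MPEG-4-compliant engine of this kind drives a face mesh by displacing feature points according to Facial Animation Parameters (FAPs), with values expressed in face-proportional units (FAPUs) so the same parameter stream animates differently proportioned faces. The sketch below illustrates that idea only; the data layout and function name are hypothetical, not the SFAE implementation.

```python
# Illustrative sketch of MPEG-4-style FAP application (hypothetical API,
# not the actual SFAE code): each per-frame FAP value displaces one
# feature point along one axis, scaled by a face-specific FAPU.

def apply_faps(feature_points, faps, fapu):
    """Return animated feature-point positions for one frame.

    feature_points: {name: (x, y, z)} neutral positions of the mesh's
                    control points
    faps:           {name: (axis_index, value)} FAP values for this frame
    fapu:           scale factor converting FAP units to model units
    """
    animated = {}
    for name, position in feature_points.items():
        coords = list(position)
        if name in faps:
            axis, value = faps[name]
            coords[axis] += value * fapu  # displace along the FAP's axis
        animated[name] = tuple(coords)
    return animated


# Example: raise the left lip corner vertically (smile onset).
neutral = {"lip_corner_left": (1.0, 0.0, 0.0)}
frame = {"lip_corner_left": (1, 0.5)}
smiling = apply_faps(neutral, frame, fapu=0.1)
```

Because FAP values are normalized by FAPUs measured on the neutral face (e.g. mouth width), the same animation stream produces proportionate motion on any compliant model.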
This paper reports results from a program that produces high quality animation of facial expressions and head movements as automatically as possible in conjunction with meaning-based speech synthesis, including spoken intonation. The goal of the research is as much to test and define our theories of the formal semantics for such gestures as to produce …
Until now, theories of the gesture-speech relationship have been difficult to evaluate because of their descriptive basis. In this paper we provide a tool for investigating the relationship between speech and gesture: a system that generates speech, intonation, and gesture using two copies of an identical program that have different knowledge of the world …
In this chapter we present the problems and issues involved in the creation of Embodied Conversational Agents (ECAs). These agents may have a humanoid aspect and may be embedded in a user interface with the capacity to interact with the user; that is, they are able to perceive and understand what the user is saying, and also to answer verbally and …