We describe an implemented system which automatically generates and animates conversations between multiple human-like agents with appropriate and synchronized speech, intonation, facial expressions, and hand gestures. Conversation is created by a dialogue planner that produces the text as well as the intonation of the utterances. The …
Simulating the motion of realistic, large, dense crowds of autonomous agents is still a challenge for the computer graphics community. Typical approaches either resemble particle simulations (where agents lack orientation controls) or are conservative in the range of human motion possible (agents lack psychological state and aren't allowed to 'push' each …
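As a rough illustration of the "particle simulation" style of crowd model this snippet contrasts against, the following minimal Python sketch advances agents with a goal-seeking term plus pairwise repulsion. The function name, parameter values, and force model are illustrative assumptions, not taken from the paper.

# Minimal particle-style crowd step (illustrative only; parameters are arbitrary).
# Agents steer toward a goal and are pushed away from nearby neighbors, i.e. the
# "particles without orientation or psychological state" style described above.
import numpy as np

def crowd_step(pos, vel, goals, dt=0.05, max_speed=1.4, repulse_radius=0.6, k_rep=2.0):
    """Advance all agents one time step. pos, vel, goals: (N, 2) arrays."""
    # Goal-seeking: steer toward the desired velocity at preferred speed.
    to_goal = goals - pos
    dist = np.linalg.norm(to_goal, axis=1, keepdims=True) + 1e-9
    desired_vel = to_goal / dist * max_speed
    force = desired_vel - vel

    # Pairwise repulsion from neighbors closer than repulse_radius.
    diff = pos[:, None, :] - pos[None, :, :]            # (N, N, 2), points away from neighbor
    d = np.linalg.norm(diff, axis=2) + 1e-9             # (N, N)
    close = (d < repulse_radius) & (d > 1e-6)           # exclude self (distance ~0)
    push = np.where(close[..., None], diff / d[..., None] * k_rep, 0.0)
    force += push.sum(axis=1)

    # Integrate and clamp speed.
    vel = vel + force * dt
    speed = np.linalg.norm(vel, axis=1, keepdims=True)
    vel = np.where(speed > max_speed, vel / speed * max_speed, vel)
    return pos + vel * dt, vel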
This research proposes a computational framework for generating visual attending behavior in an embodied simulated human agent. Such behaviors directly control eye and head motions, and guide other actions such as locomotion and reach. The implementation of these concepts, referred to as the AVA, draws on empirical and qualitative observations known from …
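One small piece of such eye/head control can be sketched as follows: a toy controller that splits a requested horizontal gaze shift between the eyes and the head, with the eyes clamped to a comfort limit. The function, limits, and gains are assumptions for illustration only and do not reproduce the AVA framework.

# Toy eye/head gaze split (illustrative; not the AVA implementation).
# Eyes take up as much of the gaze error as their comfort limit allows;
# the head rotates over successive steps to absorb the remainder.
def split_gaze(target_azimuth_deg, head_azimuth_deg, eye_limit_deg=25.0, head_gain=0.3):
    """Return updated (head, eye) azimuths in degrees after one control step."""
    error = target_azimuth_deg - head_azimuth_deg
    eye = max(-eye_limit_deg, min(eye_limit_deg, error))   # eyes move first, clamped
    residual = error - eye
    head = head_azimuth_deg + head_gain * residual          # head slowly catches up
    return head, eye

head, eye = 0.0, 0.0
for _ in range(20):
    head, eye = split_gaze(60.0, head)   # attend to a target 60 degrees to the right
print(round(head, 1), round(eye, 1))     # head has rotated partway; eyes sit at their limit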
Humans use gestures in most communicative acts. How are these gestures initiated and performed? What kinds of communicative roles do they play and what kinds of meanings do they convey? How do listeners extract and understand these meanings? Will it be possible to build computerized communicating agents that can extract and understand the meanings and …
In this paper we develop a set of inverse kinematics algorithms suitable for an anthropomorphic arm or leg. We use a combination of analytical and numerical methods to solve generalized inverse kinematics problems including position, orientation, and aiming constraints. Our combination of analytical and numerical methods results in faster and more reliable …
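The two ingredients such hybrid solvers typically combine can be outlined as follows. This is an illustrative Python sketch under my own assumptions, pairing a closed-form planar 2-link solution with one damped-least-squares step of a generic numerical solver; it is not the paper's algorithm.

# Illustrative building blocks of a hybrid analytical/numerical IK solver.
import numpy as np

def two_link_ik(x, y, l1, l2, elbow_up=True):
    """Closed-form joint angles placing a 2-link planar chain's tip at (x, y)."""
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    c2 = np.clip(c2, -1.0, 1.0)                      # clamp when the target is out of reach
    q2 = np.arccos(c2) * (1.0 if elbow_up else -1.0)
    q1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(q2), l1 + l2 * np.cos(q2))
    return q1, q2

def dls_step(q, jacobian, error, damping=0.1):
    """One damped-least-squares update toward satisfying a general task constraint."""
    J = jacobian(q)                                   # (m, n) task Jacobian at joint angles q
    dq = J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(J.shape[0]), error)
    return q + dq

A closed form like two_link_ik can seed or resolve part of the chain exactly, while a numerical step like dls_step handles the remaining constraints (orientation, aiming) iteratively.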
Human movements include limb gestures and postural attitude. Although many computer animation researchers have studied these classes of movements, procedurally generated movements still lack naturalness. We argue that looking only at the psychological notion of gesture is insufficient to capture movement qualities needed by animated characters. We advocate …
This paper reports results from a program that produces high quality animation of facial expressions and head movements as automatically as possible in conjunction with meaning-based speech synthesis, including spoken intonation. The goal of the research is as much to test and define our theories of the formal semantics for such gestures, as to produce …
This article considers animating evacuation in complex buildings by crowds who might not know the structure's connectivity, or who find routes accidentally blocked. It takes into account simulated crowd behavior under two conditions: where agents communicate building route knowledge, and where agents take different roles such as trained personnel, leaders, …
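A minimal sketch of the route-knowledge side of such a simulation, assuming a hypothetical building connectivity graph and a plain breadth-first planner rather than the paper's crowd model: an agent replans around doorways it has discovered to be blocked, and can merge blockage knowledge communicated by another agent.

# Route replanning on a building connectivity graph (illustrative only; the
# room names and BFS planner are stand-ins, not the paper's crowd model).
from collections import deque

BUILDING = {                       # hypothetical connectivity graph
    "office": ["hall"],
    "hall": ["office", "stairs", "lobby"],
    "stairs": ["hall", "exit"],
    "lobby": ["hall", "exit"],
    "exit": [],
}

def plan_route(start, goal, blocked):
    """Breadth-first route search that avoids doorways known to be blocked."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        room = path[-1]
        if room == goal:
            return path
        for nxt in BUILDING[room]:
            edge = frozenset((room, nxt))
            if nxt not in seen and edge not in blocked:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None                    # no known route to the exit

known_blocked = set()
route = plan_route("office", "exit", known_blocked)       # initial route via the stairs
known_blocked.add(frozenset(("stairs", "exit")))          # agent finds the stair exit blocked
route = plan_route("office", "exit", known_blocked)       # replanned route via the lobby
known_blocked |= {frozenset(("hall", "lobby"))}           # blockage knowledge shared by another agent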