Interactive virtual environments have, since their inception, suffered from being unable to interpret gestures made by their users. Quite a few approaches have been investigated with success, but most of them are limited to a specific set of body parts such as the hands, arms or facial expressions. However, when projecting a real participant into a virtual world to…
Real-time animation of virtual humans requires a dedicated architecture for the integration of different motion control techniques running in so-called actions. In this paper we describe a software architecture called AGENTlib for the… from the management of the low-level finite state automata of actions and also from the monitoring of parameters based on the…
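The abstract mentions low-level finite state automata that drive actions and the monitoring of their parameters. As a rough sketch only (the actual AGENTlib design is not given here; state names, ease durations and the blend-weight parameter are all assumptions), such an automaton could take an action through an ease-in/ease-out lifecycle while exposing a monitored blend weight:

```python
class ActionFSM:
    """Minimal action lifecycle automaton: IDLE -> EASE_IN -> RUNNING -> EASE_OUT -> IDLE.

    The blend weight is the kind of per-action parameter a higher layer
    might monitor instead of managing these transitions itself.
    """

    def __init__(self, ease_frames=5):
        self.state = "IDLE"
        self.ease_frames = ease_frames
        self.weight = 0.0      # monitored blend parameter, 0..1
        self._counter = 0

    def start(self):
        if self.state == "IDLE":
            self.state = "EASE_IN"
            self._counter = 0

    def stop(self):
        if self.state in ("EASE_IN", "RUNNING"):
            self.state = "EASE_OUT"
            self._counter = 0

    def update(self):
        """Advance one animation frame and update the blend weight."""
        if self.state == "EASE_IN":
            self._counter += 1
            self.weight = self._counter / self.ease_frames
            if self._counter >= self.ease_frames:
                self.state = "RUNNING"
                self.weight = 1.0
        elif self.state == "EASE_OUT":
            self._counter += 1
            self.weight = 1.0 - self._counter / self.ease_frames
            if self._counter >= self.ease_frames:
                self.state = "IDLE"
                self.weight = 0.0
```

An application then only calls `start()`, `stop()` and `update()` per frame; the automaton handles the transition bookkeeping.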
In this paper, we present an integration framework for heterogeneous motion generators. We summarise our approach to modelling articulated agents and animating them with motion generators. We then focus on the mixing of various motion generators and propose an agent- and action-oriented framework providing real-time character animation. Activity properties…
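The mixing of heterogeneous motion generators described above can be pictured as weighted pose blending. This is a hedged sketch, not the paper's actual scheme: the joint names, the flat angle representation and the example generators (`walk`, `wave`) are illustrative assumptions.

```python
def blend(poses_with_weights):
    """Blend several generators' proposed poses by normalized weights.

    poses_with_weights: list of (pose, weight) pairs, where each pose maps
    joint name -> angle in degrees. Joints missing from a pose contribute 0.
    Returns a single blended pose.
    """
    total = sum(w for _, w in poses_with_weights)
    blended = {}
    for pose, w in poses_with_weights:
        for joint, angle in pose.items():
            blended[joint] = blended.get(joint, 0.0) + angle * (w / total)
    return blended

# Two hypothetical generators proposing partial poses for the same frame.
walk = {"hip": 30.0, "knee": 45.0}
wave = {"hip": 0.0, "shoulder": 90.0}
pose = blend([(walk, 0.75), (wave, 0.25)])
# hip is averaged across both generators; knee and shoulder come
# from a single generator each, scaled by its normalized weight.
```

In a real system the blend would act on joint rotations (e.g. quaternions) rather than scalar angles, but the weighting structure is the same.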
Most of today's virtual environments are populated with some kind of autonomous, lifelike agents. Such agents follow a pre-programmed sequence of behaviours that excludes the user as a participating entity in the virtual society. In order to make inhabited virtual reality an attractive place for information exchange and social interaction, we need to equip…
The recognition of daily human activities is a decisive interface component for more intuitive VR interactions. In this paper, we describe a hierarchical model of body actions based on fine-grained action primitives. The associated recognition algorithm allows on-the-fly identification of simultaneous actions. Measurements highlight robustness to…
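The hierarchical model above composes fine-grained primitives into higher-level actions recognised on the fly. The following is a simplified sketch under stated assumptions: the primitive labels, the action templates, and the idea that primitives arrive as a discrete stream are all illustrative, not taken from the paper.

```python
# Action templates: each high-level action is an ordered sequence of
# low-level primitives (assumed to come from a posture classifier).
ACTIONS = {
    "sit_down": ["bend_knees", "lower_pelvis", "contact_seat"],
    "wave":     ["raise_arm", "oscillate_hand"],
}

class Recognizer:
    """Tracks progress through every template at once, so simultaneous
    actions (e.g. waving while sitting down) are identified on the fly."""

    def __init__(self, templates):
        self.templates = templates
        # For each action, the index of the next primitive expected.
        self.progress = {name: 0 for name in templates}

    def feed(self, primitive):
        """Consume one primitive; return the actions completed this step."""
        completed = []
        for name, seq in self.templates.items():
            i = self.progress[name]
            if i < len(seq) and seq[i] == primitive:
                self.progress[name] = i + 1
                if self.progress[name] == len(seq):
                    completed.append(name)
                    self.progress[name] = 0  # ready to recognise again
        return completed
```

Because every template keeps its own progress counter, an interleaved primitive stream such as `raise_arm, bend_knees, lower_pelvis, oscillate_hand, contact_seat` yields both `wave` and `sit_down`, each reported as soon as its sequence completes.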