An Efficient Synthetic Vision System for 3D Multi-character Systems

Abstract

This paper addresses the problem of sensing virtual environments for 3D intelligent multi-character simulations. Since these characters must display reactive skills (navigation or gazing) alongside the planning processes required to animate their behaviours, we present an efficient and fully scalable sensor system designed to provide this information to different kinds of 3D embodied agents (games, storytelling, etc.). Inspired by Latombe’s vision system [5], as recently presented by Peters [1], we avoid the second rendering pass in order to achieve the necessary efficiency, and we introduce a fully scalable communication protocol, based on XML labelling techniques, that lets the agent handle the communication flow within its 3D environment (sense + act). The synthetic sensor system presented has been tested with two plausible local navigation formalisms (neural networks and a rule-based system), whose models and results are also reported.
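The abstract describes an XML-labelled exchange in which the sensor system reports what an agent perceives and the agent replies with an action request. As a rough illustration of such a sense + act round trip, the following Python sketch encodes a percept and decodes an action; the tag and attribute names (percept, object, action, angle) are illustrative assumptions, not the schema defined in the paper.

    # Hedged sketch of an XML sense/act exchange; element names are hypothetical.
    import xml.etree.ElementTree as ET

    def build_percept(object_id, label, distance, bearing):
        """Encode one visible, labelled object as an XML percept for the agent."""
        percept = ET.Element("percept")
        obj = ET.SubElement(percept, "object", id=object_id, label=label)
        ET.SubElement(obj, "distance").text = str(distance)
        ET.SubElement(obj, "bearing").text = str(bearing)
        return ET.tostring(percept, encoding="unicode")

    def parse_action(xml_message):
        """Decode an action request returned by the agent (e.g. a steering command)."""
        action = ET.fromstring(xml_message)
        return action.get("type"), {child.tag: child.text for child in action}

    # Example round trip: the sensor labels a nearby obstacle, the agent answers
    # with a local navigation action.
    sensed = build_percept("obj-17", "obstacle", distance=2.4, bearing=-30.0)
    act_type, params = parse_action('<action type="turn"><angle>15</angle></action>')

Because the protocol is plain labelled XML, either local navigation formalism tested in the paper (a neural network or a rule-based system) can consume the same percept messages and emit the same action messages.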

DOI: 10.1007/978-3-540-39396-2_60


Cite this paper

@inproceedings{Lozano2003AnES,
  title     = {An Efficient Synthetic Vision System for 3D Multi-character Systems},
  author    = {Miguel Lozano and Rafael Lucia and Fernando Barber and Francisco Grimaldo and Ant{\'o}nio Lucas Soares and Alicia Forn{\'e}s},
  booktitle = {IVA},
  year      = {2003}
}