A comprehensive survey of computer vision-based human motion capture literature from the past two decades is presented. The focus is on a general overview based on a taxonomy of system functionalities, broken down into four processes: initialization, tracking, pose estimation, and recognition. Each process is discussed and divided into subprocesses and/or …
In model-based computer vision it is necessary to have a geometric model of the object whose pose is being estimated. In this paper a very compact model of the shoulder complex and arm is presented. First, an investigation of the anatomy of the arm and the shoulder is conducted to identify the primary joints and degrees of freedom. To model the …
Many disciplines of multimedia and communication are moving towards ubiquitous computing and hands-free or no-touch interaction with computers. Application domains in this direction include virtual reality, augmented reality, wearable computing, and smart spaces. Gesturing is one means of interaction, and this paper presents some important issues in gesture …
For navigation in a partially known environment it is possible to provide a model that may be used for guidance in the navigation and as a basis for selective sensing. In this paper a navigation system for an autonomous mobile robot is presented. Both navigation and sensing are built around a graphics model, which enables prediction of the expected scene …
In the last decade, speech processing has been applied in commercially available products. One of the key reasons for its success is the identification and use of an underlying set of generic symbols (phonemes) constituting all speech. In this work we follow the same approach, but for the problem of human body gestures. That is, the topic of this paper is …
This paper describes the development of a natural interface to a virtual environment. The interface is through a natural pointing gesture and replaces pointing devices which are normally used to interact with virtual environments. The pointing gesture is estimated in 3D using kinematic knowledge of the arm during pointing and monocular computer vision. The …