Supporting video conference communication using a vision-based human facial synthesis approach
From an early age, people display the ability to quickly and effortlessly interpret the orientation and movement of human body parts, allowing them to infer the intentions of others nearby and to comprehend an important nonverbal form of communication. The ease with which we accomplish this task belies the difficulty of a problem that has challenged computational systems for decades: human motion analysis.

Technological developments over the years have resulted in many systems for measuring body segment positions and the angles between segments. In these systems, the human body is typically modeled as a system of rigid links connected by joints, and the motion is estimated from measurements provided by mechanical, optical, magnetic, or inertial trackers. Among these sensing modalities, optical sensing encompasses a large and varied collection of technologies. In a computer vision context, human motion analysis studies methods and applications in which two or more consecutive images from an image sequence, e.g. captured by a video camera, are processed to produce information based on the apparent human body motion in the images.

Many different disciplines employ motion analysis systems to capture the movement and posture of the human body, for applications such as medical diagnostics, virtual reality, and human-computer interaction. This thesis gives an insight into state-of-the-art human motion analysis systems and provides new methods for capturing human motion.
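To make the rigid-link model mentioned above concrete, the following sketch (illustrative only, not a method from this thesis; link lengths and joint names are assumed) computes forward kinematics for a planar two-link limb: given two joint angles, it returns the positions of the intermediate and end joints.

```python
# Illustrative sketch: the human body as "rigid links connected by
# joints". A planar two-link chain (e.g. upper arm + forearm) maps
# joint angles to 2-D joint positions.
import math

def forward_kinematics(theta1, theta2, l1=0.3, l2=0.25):
    """Return (elbow, wrist) positions of a planar 2-link chain.

    theta1 -- first joint angle w.r.t. the x-axis (radians)
    theta2 -- second joint angle relative to the first link (radians)
    l1, l2 -- link lengths in metres (assumed, illustrative values)
    """
    elbow = (l1 * math.cos(theta1), l1 * math.sin(theta1))
    wrist = (elbow[0] + l2 * math.cos(theta1 + theta2),
             elbow[1] + l2 * math.sin(theta1 + theta2))
    return elbow, wrist

# Upper link pointing straight up, second link bent 90 degrees:
elbow, wrist = forward_kinematics(math.pi / 2, -math.pi / 2)
```

Tracker-based systems solve the inverse of this mapping: from measured segment positions they recover the joint angles of such a chain.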
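The idea of processing consecutive images to extract apparent motion can be illustrated with the simplest vision-based cue, frame differencing. This sketch is an assumption-laden toy example (synthetic frames, an arbitrary threshold), not a technique claimed by this thesis: pixels whose intensity changes between two consecutive frames are flagged as candidate moving regions.

```python
# Illustrative sketch: detect apparent motion between two consecutive
# grayscale frames by thresholding the per-pixel intensity difference.
import numpy as np

def motion_mask(prev_frame, next_frame, threshold=25):
    """Boolean mask of pixels whose grayscale intensity changed by
    more than `threshold` between two consecutive frames."""
    diff = np.abs(next_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

# Two tiny synthetic 4x4 frames in which one bright pixel "moves"
# one column to the right between frames.
f0 = np.zeros((4, 4), dtype=np.uint8); f0[1, 1] = 200
f1 = np.zeros((4, 4), dtype=np.uint8); f1[1, 2] = 200
mask = motion_mask(f0, f1)   # True at the old and new pixel locations
```

Real systems build on far richer cues (optical flow, background models, tracked features), but all share this starting point: differences between successive images carry the motion information.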