Real Time Neural Network-based Face Tracker for VR Displays

Abstract

Tracking technology for Virtual Reality (VR) applications typically requires the user to wear head-mounted sensors with transmitters or wires. This paper describes a video-based, real-time, low-latency, high-precision 3D face tracker specifically designed for VR displays that requires no sensors, markers, transmitters, or wires to be worn. A center camera finds the 2D face position using Artificial Neural Networks (NNs) and recognizes and tracks upright, tilted, frontal, and non-frontal faces within visually cluttered environments. Two additional left and right (L/R) cameras obtain the 3D head coordinates using a standard stereopsis technique. The paper presents a novel method for training the NN on a new face in less than two minutes, and includes background training to avoid recognition of false positives. The system uses diffuse infrared (IR) illumination to avoid time-consuming image normalization, to reduce illumination variations caused by the physical surroundings, and to prevent visible illumination from interfering with the user's field of view.

CR Categories: I.5.1 [Computing Methodologies]: Pattern Recognition - Models - Neural nets; I.4.8 [Image Processing and Computer Vision]: Scene Analysis - Tracking; I.3.7 [Computer Graphics]: 3D Graphics and Realism - Virtual reality
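
The "standard stereopsis technique" mentioned above reduces, for a calibrated and rectified L/R camera pair, to triangulating depth from the horizontal disparity between the matched 2D face positions. The sketch below illustrates only that step; the focal length, baseline, and principal point are hypothetical calibration values, and the function name is illustrative rather than taken from the paper.

# Minimal sketch of depth-from-disparity triangulation for a rectified
# stereo pair, using hypothetical calibration values (not from the paper).
def triangulate_head(x_left, x_right, y, focal_px=800.0, baseline_m=0.12,
                     cx=320.0, cy=240.0):
    """Return (X, Y, Z) in meters for a face center seen at pixel column
    x_left / x_right in the left / right images and row y in both."""
    disparity = x_left - x_right           # pixels; larger disparity = closer face
    if disparity <= 0:
        raise ValueError("non-positive disparity: bad match or point at infinity")
    Z = focal_px * baseline_m / disparity  # depth along the optical axis
    X = (x_left - cx) * Z / focal_px       # lateral offset from the left camera center
    Y = (y - cy) * Z / focal_px            # vertical offset
    return X, Y, Z

# Example: face center detected at column 400 (left), 304 (right), row 260.
print(triangulate_head(400.0, 304.0, 260.0))   # -> roughly (0.1, 0.025, 1.0) m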


Cite this paper

@inproceedings{Girado2007RealTN,
  title  = {Real Time Neural Network-based Face Tracker for VR Displays},
  author = {Javier Girado and Tom Peterka and Robert Kooima and Jinghua Ge and Daniel J. Sandin and Andrew E. Johnson and Jason Leigh and Thomas A. DeFanti},
  year   = {2007}
}