In this work, we present a method of “Walking In Place” (WIP) on the Nintendo Wii Fit Balance Board to explore a virtual environment. We directly compare our method to joystick locomotion and normal walking. The joystick proves inferior to physically walking and to WIP on the Wii Balance Board (WIP-Wii). Interestingly, we find that physically …
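The snippet does not say how steps are detected. As context only: the Balance Board exposes four corner load cells, and one plausible detector (a purely illustrative sketch, not the authors' implementation; the threshold and data layout are assumptions) counts alternating left-right weight shifts:

```python
# Hypothetical sketch of walking-in-place (WIP) step detection from the
# Wii Balance Board's four corner load cells; not the paper's method.
# Each sample is (top-left, top-right, bottom-left, bottom-right) in kg.

def detect_steps(samples, threshold=5.0):
    """Count WIP steps from a sequence of load-cell readings.

    A step is registered whenever the left-right weight differential
    crosses the threshold with the opposite sign of the previous shift,
    i.e. the user moves weight from one foot to the other.
    """
    steps = 0
    last_side = 0  # -1 = weight on left foot, +1 = weight on right foot
    for tl, tr, bl, br in samples:
        diff = (tr + br) - (tl + bl)  # positive when weight is on the right
        side = 1 if diff > threshold else -1 if diff < -threshold else 0
        if side != 0 and side != last_side:
            if last_side != 0:  # ignore the very first weight shift
                steps += 1
            last_side = side
    return steps

# Example: four alternating weight shifts -> 3 detected steps.
readings = [(30, 2, 30, 2), (2, 30, 2, 30), (30, 2, 30, 2), (2, 30, 2, 30)]
print(detect_steps(readings))  # -> 3
```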
This experiment investigates the perceived differences in the quality of animation generated from motion capture data recorded with a Vicon motion capture system and with a Microsoft Kinect sensor. The Kinect uses a depth camera to determine the position of users and objects within its view. Using this information, the depth map can be processed to …
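As a rough illustration of the kind of depth-map processing the snippet alludes to (an assumption, not the paper's pipeline): a Kinect-style depth image can be thresholded to segment the user and estimate an image-space position. The depth range and data below are invented for the example:

```python
# Illustrative sketch: segment a user from a Kinect-style depth map by
# thresholding depth, then estimate an image-space position as the
# centroid of the mask. Range values are assumptions.
import numpy as np

def user_mask(depth_mm, near=500, far=2500):
    """Boolean mask of pixels within the assumed user depth range."""
    valid = depth_mm > 0  # Kinect reports 0 where depth is unknown
    return valid & (depth_mm >= near) & (depth_mm <= far)

def user_centroid(depth_mm):
    """Approximate the user's image-space position as the mask centroid."""
    ys, xs = np.nonzero(user_mask(depth_mm))
    if xs.size == 0:
        return None
    return xs.mean(), ys.mean()

# Toy 4x4 depth map (millimetres): one 'user' blob at ~1.5 m.
depth = np.array([[0,    0,    4000, 4000],
                  [1500, 1500, 4000, 4000],
                  [1500, 1500, 4000, 0],
                  [0,    0,    0,    0]])
print(user_centroid(depth))  # -> (0.5, 1.5)
```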
This experiment investigates source-point localization using only auditory cues. The idea is to determine how well humans can localize and reason about space using only sound cues, and how different sound cues affect performance in such tasks. The findings from this line of research can be used to enhance audio in virtual environments. …
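The snippet does not name the cues studied; for context only, one standard binaural cue is the interaural time difference (ITD). Woodworth's spherical-head model, for head radius $r$, speed of sound $c$, and source azimuth $\theta$ in radians, gives

$$\mathrm{ITD}(\theta) \;=\; \frac{r}{c}\,\bigl(\theta + \sin\theta\bigr).$$

With $r \approx 0.0875$ m and $c \approx 343$ m/s, a source directly to one side ($\theta = \pi/2$) yields an ITD of roughly $(0.0875/343)(\pi/2 + 1) \approx 0.66$ ms, which is the scale of timing difference such experiments and virtual-audio renderers work with.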
The trend in immersive virtual environments (VEs) is to give users a more active role by having them interact with the environment and the objects within it. Studying action and perception in VEs therefore becomes an increasingly interesting and important topic. We chose to study a user's ability to judge errors in self-produced motion …
In this paper we describe a method for automatically animating interactive characters based on an existing corpus of key-framed hand animation. The method learns separate low-dimensional embeddings for subsets of the corpus corresponding to different semantic labels. These embeddings use the Gaussian Process Latent Variable Model to map …
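A minimal sketch of per-label GPLVM embeddings, using the GPy library (the library choice, data shapes, and labels are assumptions for illustration; the paper describes the model, not this implementation):

```python
# Learn one low-dimensional GPLVM embedding per semantic label,
# mirroring the "separate embeddings for labeled subsets" idea.
import numpy as np
import GPy

def embed_by_label(frames, labels, latent_dim=2):
    """Fit a GPLVM to each labeled subset of the animation corpus.

    frames: (N, D) array of per-frame pose features (e.g., joint angles).
    labels: length-N sequence of semantic labels, one per frame.
    Returns {label: fitted GPy.models.GPLVM}.
    """
    labels = np.asarray(labels)
    models = {}
    for label in set(labels):
        Y = frames[labels == label]
        m = GPy.models.GPLVM(Y, input_dim=latent_dim)
        m.optimize(messages=False, max_iters=200)
        models[label] = m
    return models

# Toy corpus: 40 frames of 12-D pose features with two semantic labels.
rng = np.random.default_rng(0)
frames = rng.normal(size=(40, 12))
labels = ["walk"] * 20 + ["wave"] * 20
models = embed_by_label(frames, labels)
print({k: m.X.shape for k, m in models.items()})  # (20, 2) latents per label
```

Fitting the labels separately keeps each latent space compact and semantically coherent, at the cost of not sharing structure across labels.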
This work-in-progress paper presents a method for detecting tongue-protrusion gestures by exploiting the tongue's color and texture characteristics. Taking advantage of recent advances in computer vision, the presented implementation enables real-time tongue-gesture detection using only the video stream from a standard web camera. Tongue gesture …
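To make the color-cue idea concrete (assumptions throughout; this is not the paper's detector, and the HSV bounds and ROI scheme are guesses): a tongue-protrusion candidate can be flagged by the fraction of reddish pixels inside a mouth region of interest, using OpenCV:

```python
# Illustrative sketch: flag a tongue-protrusion candidate by simple HSV
# color thresholding inside a mouth region of interest (ROI).
import cv2
import numpy as np

def tongue_candidate(frame_bgr, mouth_roi, min_fraction=0.25):
    """Return True if the mouth ROI contains enough tongue-colored pixels.

    frame_bgr: webcam frame (H, W, 3), BGR as delivered by OpenCV.
    mouth_roi: (x, y, w, h) rectangle, e.g. from a prior face detector.
    The HSV bounds below are rough guesses for reddish tongue tones.
    """
    x, y, w, h = mouth_roi
    roi = frame_bgr[y:y + h, x:x + w]
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    # Red wraps around hue 0 in OpenCV's 0-179 hue range, so test two bands.
    mask = cv2.inRange(hsv, (0, 80, 60), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 80, 60), (179, 255, 255))
    return mask.mean() / 255.0 >= min_fraction

# Toy frame: a solid reddish patch stands in for a protruded tongue.
frame = np.zeros((120, 160, 3), dtype=np.uint8)
frame[40:80, 60:100] = (40, 40, 200)  # BGR, red-dominant
print(tongue_candidate(frame, (60, 40, 40, 40)))  # -> True
```

A real detector would combine this with the texture cue the abstract mentions and with temporal smoothing across frames to reject transient false positives.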
We present a new depth-from-defocus method based on the assumption that a per-pixel blur estimate (related to the circle of confusion), while ambiguous for a single image, behaves in a consistent way when applied over a focal stack of two or more images. This allows us to fit a simple analytical description of the circle of confusion to the different per-pixel …
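The snippet cuts off before the paper's analytical description; for reference, the standard thin-lens expression for the circle-of-confusion diameter $c$ of a scene point at distance $S_2$, when a lens of focal length $f$ and aperture diameter $A$ is focused at distance $S_1$, is

$$c \;=\; A \,\frac{\lvert S_2 - S_1\rvert}{S_2}\,\frac{f}{S_1 - f}.$$

Across a focal stack, $S_2$ is fixed for a given pixel while $S_1$ varies per image, which is what makes the per-pixel blur estimates mutually constraining rather than individually ambiguous. (Whether the paper uses exactly this thin-lens form is not stated in the snippet.)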
McManus et al. [2011] studied a user's ability to judge errors in self-produced motion; more specifically, throwing. We now take a first step toward identifying which cues subjects use to make these judgments. The endpoint of the ball is one such cue; the restricted field of view (FOV) of the head-mounted display (HMD) makes it difficult …