Qiufeng Lin

This paper evaluates the combination of two methods for adapting bipedal locomotion to explore virtual environments displayed on head-mounted displays (HMDs) within the confines of limited tracking spaces. We combine a method of changing the optic flow of locomotion, effectively scaling the translational gain, with a method of intervening and manipulating …
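The translational-gain idea mentioned above can be sketched in a few lines. This is an illustrative formulation only, not code from the paper: the user's frame-to-frame physical head translation is scaled by a gain before being applied to the virtual viewpoint, so a small tracked space maps onto a larger virtual one. The function name and the gain value of 1.5 are assumptions for the example.

```python
def apply_translational_gain(prev_phys, curr_phys, virt_pos, gain=1.5):
    """Scale the frame-to-frame physical (x, z) translation by `gain`
    and accumulate it into the virtual camera position."""
    dx = curr_phys[0] - prev_phys[0]
    dz = curr_phys[1] - prev_phys[1]
    return (virt_pos[0] + gain * dx, virt_pos[1] + gain * dz)

# Walking 1 m forward in the tracked space advances the virtual
# viewpoint 1.5 m, altering the optic flow the user experiences.
pos = apply_translational_gain((0.0, 0.0), (0.0, 1.0), (0.0, 0.0))
```

With a gain above 1, the perceived self-motion from optic flow exceeds the physical walking distance, which is what lets a limited tracking volume host a larger virtual environment.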
We explore whether a gender-matched, calibrated self-avatar affects the perception of the affordance of stepping off of a ledge, or visual cliff, in an immersive virtual environment. Visual cliffs are common demonstrations in immersive virtual environments because they create compelling experiences. Understanding the role that self-avatars contribute to …
We conducted four experiments on egocentric depth perception using blind walking with a restricted scanning method in both the real and a virtual environment. Our viewing condition in all experiments was monocular. We varied the field of view (real), scan direction (real), blind walking method (real and virtual), and self-representation (virtual) over …
The purpose of this study was to learn if self-avatars influence people's perception and action in virtual environments. People viewed two situations in a virtual environment through a head-mounted display and were asked to decide how they would act. In one situation their task was to imagine walking across a room which was divided by a horizontal bar. …
The trend in immersive virtual environments (VEs) is to include the users in a more active role by having them interact with the environment and objects within the environment. Studying action and perception in VEs thus becomes an increasingly interesting and important topic. We chose to study a user's ability to judge errors in self-produced …
Immersive virtual environments use a stereoscopic head-mounted display and data glove to create high-fidelity virtual experiences in which users can interact with three-dimensional models and perceive relationships at their true scale. This stands in stark contrast to traditional PACS-based infrastructure in which images are viewed as stacks of …
People judge what they can and cannot do all the time when acting in the physical world. Can I step over that fence or do I need to duck under it? Can I step off of that ledge or do I need to climb off of it? These qualities of the environment that people perceive that allow them to act are called affordances. This article compares …
… and Markus Leyrer for help with programming the experiments and data collection. … with planning, writing, analyzing, and reviewing has been greatly appreciated. I would also like to thank my two advisers, Dr. Betty Mohler and Dr. Bobby Bodenheimer, without whom this work would not have been possible. Betty took me on as an intern for three months in Tübingen, …
This paper presents a mixed reality system for combining real robots, humans, and virtual robots. The system tracks and controls physical robots in local physical space, and inserts them into a virtual environment (VE). The system allows a human to locomote in a VE larger than the physically tracked space of the laboratory through a form of redirected walking …
We conducted a follow-up experiment to the work of Lin et al. [2011]. The experimental protocol was the same as that of Experiment Four in Lin et al. [2011], except the viewing condition was binocular instead of monocular. In that work there was no distance underestimation, as has been widely reported elsewhere, and we were motivated in this experiment to see …