Qiufeng Lin

This paper evaluates the combination of two methods for adapting bipedal locomotion to explore virtual environments displayed on head-mounted displays (HMDs) within the confines of limited tracking spaces. We combine a method of changing the optic flow of locomotion, effectively scaling the translational gain, with a method of intervening and manipulating a…
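As a rough illustration of the translational-gain idea mentioned above (a sketch, not the paper's implementation), the user's tracked physical displacement can be scaled by a gain factor before being applied to the virtual viewpoint, so a small tracked space maps onto a larger virtual environment. All names and the gain value here are hypothetical:

```python
# Minimal sketch of translational gain for HMD locomotion.
# The frame-to-frame physical displacement on the walking plane
# is scaled by `gain` and accumulated into the virtual position.

def apply_translational_gain(prev_pos, curr_pos, virtual_pos, gain=1.5):
    """Scale the tracked displacement by `gain` and add it to the
    virtual camera position (positions are (x, z) tuples in meters)."""
    dx = curr_pos[0] - prev_pos[0]
    dz = curr_pos[1] - prev_pos[1]
    return (virtual_pos[0] + gain * dx, virtual_pos[1] + gain * dz)

# With gain 1.5, a tracked step of 0.5 m forward becomes 0.75 m
# of virtual travel:
virtual = apply_translational_gain((0.0, 0.0), (0.0, 0.5), (0.0, 0.0))
```

A gain of 1.0 reproduces natural walking; gains above 1.0 let the user cover more virtual ground than physical ground, which is what makes a limited tracking space feel larger.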
We conducted four experiments on egocentric depth perception using blind walking with a restricted scanning method in both the real and a virtual environment. Our viewing condition in all experiments was monocular. We varied the field of view (real), scan direction (real), blind walking method (real and virtual), and self-representation (virtual) over…
The purpose of this study was to learn if self-avatars influence people's perception and action in virtual environments. People viewed two situations in a virtual environment through a head-mounted display and were asked to decide how they would act. In one situation their task was to imagine walking across a room that was divided by a horizontal bar. The…
We explore whether a gender-matched, calibrated self-avatar affects the perception of the affordance of stepping off of a ledge, or visual cliff, in an immersive virtual environment. Visual cliffs are common demonstrations in immersive virtual environments because they create compelling experiences. Understanding the role that self-avatars contribute to…
People judge what they can and cannot do all the time when acting in the physical world. Can I step over that fence or do I need to duck under it? Can I step off of that ledge or do I need to climb off of it? These qualities of the environment that people perceive as allowing them to act are called affordances. This article compares…
The trend in immersive virtual environments (VEs) is to give users a more active role by having them interact with the environment and the objects within it. Studying action and perception in VEs thus becomes an increasingly interesting and important topic. We chose to study a user's ability to judge errors in self-produced…
This paper presents a mixed reality system for combining real robots, humans, and virtual robots. The system tracks and controls physical robots in local physical space, and inserts them into a virtual environment (VE). The system allows a human to locomote in a VE larger than the physically tracked space of the laboratory through a form of redirected…
We conducted a follow-up experiment to the work of Lin et al. [2011]. The experimental protocol was the same as that of Experiment Four in Lin et al. [2011], except that the viewing condition was binocular instead of monocular. In that work there was no distance underestimation, unlike what has been widely reported elsewhere, and we were motivated in this experiment to see…