Xianshi Xie

This paper evaluates the combination of two methods for adapting bipedal locomotion to explore virtual environments displayed on head-mounted displays (HMDs) within the confines of limited tracking spaces. We combine a method of changing the optic flow of locomotion, effectively scaling the translational gain, with a method of intervening and manipulating a …
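As a rough illustration of the translational-gain idea mentioned above, the sketch below scales the user's tracked head displacement before applying it to the virtual camera, so a small physical step covers more virtual ground. This is a minimal sketch, not the paper's implementation; all names (Vec3, apply_translational_gain, the per-frame update) are assumptions for illustration.

```python
# Hypothetical sketch of translational gain: the user's real, tracked
# displacement is multiplied by a gain factor before being applied to
# the virtual camera, amplifying the optic flow of walking.

from dataclasses import dataclass


@dataclass
class Vec3:
    x: float
    y: float
    z: float

    def __sub__(self, other: "Vec3") -> "Vec3":
        return Vec3(self.x - other.x, self.y - other.y, self.z - other.z)

    def __add__(self, other: "Vec3") -> "Vec3":
        return Vec3(self.x + other.x, self.y + other.y, self.z + other.z)

    def scaled(self, k: float) -> "Vec3":
        return Vec3(self.x * k, self.y * k, self.z * k)


def apply_translational_gain(prev_head: Vec3, curr_head: Vec3,
                             virtual_pos: Vec3, gain: float) -> Vec3:
    """Return the new virtual camera position after scaling the real
    head displacement by `gain` (gain > 1 amplifies optic flow;
    gain == 1 reproduces natural walking)."""
    real_delta = curr_head - prev_head
    return virtual_pos + real_delta.scaled(gain)
```

Called once per tracking frame, a gain of, say, 2.0 would let a user cross a 20 m virtual room while physically walking only 10 m of tracked space.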
We conducted four experiments on egocentric depth perception using blind walking with a restricted scanning method in both a real and a virtual environment. The viewing condition in all experiments was monocular. We varied the field of view (real), scan direction (real), blind walking method (real and virtual), and self-representation (virtual) over …
This experiment investigates spatial memory and attention when a human acts as the supervisor of one or two groups of distributed robot teams in a large virtual environment (VE). The problem is similar to learning a new environment and interpreting its spatial structure, e.g., [Mou and McNamara 2002], but less is known when attention is divided or locomotion …
This paper presents a mixed reality system for combining real robots, humans, and virtual robots. The system tracks and controls physical robots in local physical space, and inserts them into a virtual environment (VE). The system allows a human to locomote in a VE larger than the physically tracked space of the laboratory through a form of redirected …
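The snippet above is cut off before naming the specific redirection technique, so the sketch below shows only one common form of redirected walking, rotation gain, purely as an illustration: the virtual scene is turned slightly more (or less) than the user's body, imperceptibly steering the physical walking path back into the tracked space. All names and parameters are assumptions, not the paper's method.

```python
# Illustrative rotation-gain redirection step (yaw angles in degrees).
# The paper's actual redirection technique is not specified in the
# truncated abstract; this is a generic sketch of the idea.

def redirected_yaw(prev_real_yaw: float, curr_real_yaw: float,
                   virtual_yaw: float, rotation_gain: float) -> float:
    """Return the new virtual camera yaw after amplifying the user's
    real head rotation by `rotation_gain` (gain > 1 makes the scene
    turn faster than the body; gain == 1 is natural turning)."""
    real_delta = curr_real_yaw - prev_real_yaw
    return (virtual_yaw + real_delta * rotation_gain) % 360.0
```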
We conducted a follow-up experiment to the work of Lin et al. [2011]. The experimental protocol was the same as that of Experiment Four in Lin et al. [2011], except that the viewing condition was binocular instead of monocular. In that work there was no distance underestimation, as has been widely reported elsewhere, and we were motivated in this experiment to see …