Peter Rander

Just as optical flow is the two-dimensional motion of points in an image, scene flow is the three-dimensional motion of points in the world. The fundamental difficulty with optical flow is that only the normal flow can be computed directly from the image measurements, without some form of smoothing or regularization. In this paper, we begin by showing that …
… medium, Virtualized Reality, immerses viewers in a virtual reconstruction of real-world events. The Virtualized Reality world model consists of real images and depth information computed from these images. Stereoscopic reconstructions provide a sense of complete immersion, and users can select their own viewpoints at view time, independent of the actual …
The DARPA PerceptOR program implements a rigorous evaluative test program which fosters the development of field-relevant outdoor mobile robots. Autonomous ground vehicles are deployed on diverse test courses throughout the USA and quantitatively evaluated on such factors as autonomy level, waypoint acquisition, failure rate, speed, and communications …
Virtualized Reality creates a model of time-varying real-world events from image sequences. The model can be used for manipulating and combining these events, and for rendering new virtual images. In this paper, we present two recent enhancements to Virtualized Reality. We present Model Enhanced Stereo (MES) as a method to use widely separated images to …
Despite significant progress in automatic recovery of static scene structure from range images, little effort has been made toward extending these approaches to dynamic scenes. This disparity is in large part due to the lack of range sensors with the high sampling rates needed to accurately capture dynamic scenes. We have developed a system that overcomes …
This paper describes the development and testing of a robust vegetation detector for mobile robot navigation. A multispectral sensor was created from a near-infrared camera and a visible-light video camera. Vegetation was then detected by subtracting each pixel in the red channel of the visible-light image from the corresponding pixel in the near-infrared image and …
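The per-pixel difference described in this abstract resembles the numerator of the standard NDVI vegetation index: live vegetation reflects strongly in the near-infrared but weakly in the visible red band, so NIR minus red is large over plants. A minimal NumPy sketch of that operation is below; the function name and the threshold value are illustrative assumptions, not details from the paper.

```python
import numpy as np

def vegetation_mask(nir, red, threshold=0.1):
    """Flag vegetation pixels by subtracting the visible red channel
    from the registered near-infrared channel (NIR - red), then
    thresholding.  The threshold here is a hypothetical value chosen
    for illustration, not one reported in the original work."""
    diff = nir.astype(np.float64) - red.astype(np.float64)
    return diff > threshold

# Toy 2x2 example: left column is "leafy" (bright NIR, dark red),
# right column is "bare ground" (NIR and red roughly comparable).
nir = np.array([[0.8, 0.2],
                [0.7, 0.1]])
red = np.array([[0.2, 0.3],
                [0.1, 0.2]])
mask = vegetation_mask(nir, red)
```

Here the left column is classified as vegetation and the right column is not; a real detector would of course need the two cameras to be geometrically registered so that corresponding pixels view the same scene point.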
We present a vision-based mapping and localization system for operations in pipes such as those found in Liquefied Natural Gas (LNG) production. A forward-facing fisheye camera mounted on a prototype robot collects imagery as it is teleoperated through a pipe network. The images are processed offline to estimate camera pose and sparse scene structure where …