Larry H. Matthies

Using known camera motion to estimate depth from image sequences is an important problem in robot vision. Many applications of depth-from-motion, including navigation and manipulation, require algorithms that can estimate depth in an on-line, incremental fashion. This requires a representation that records the uncertainty in depth estimates and a mechanism …
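
A minimal sketch of the incremental idea this abstract describes, assuming a per-pixel inverse-depth estimate maintained with a scalar Kalman filter; the function and values below are illustrative, not the paper's implementation:

```python
# Keep a depth estimate together with its variance, and refine both
# incrementally as each new frame arrives (hedged sketch, not the
# paper's algorithm).

def kalman_depth_update(d_est, var_est, d_meas, var_meas):
    """One scalar Kalman measurement update for an inverse-depth estimate.

    d_est, var_est   -- current inverse-depth estimate and its variance
    d_meas, var_meas -- inverse depth triangulated from the newest frame,
                        with the variance of that measurement
    """
    gain = var_est / (var_est + var_meas)      # Kalman gain
    d_new = d_est + gain * (d_meas - d_est)    # blend old estimate and new measurement
    var_new = (1.0 - gain) * var_est           # uncertainty shrinks with each frame
    return d_new, var_new

# Example: an uncertain prior fused with a sharper new measurement.
d, v = 0.5, 0.04
d, v = kalman_depth_update(d, v, 0.46, 0.01)
print(d, v)  # estimate moves toward the measurement, variance drops
```
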
NASA’s two Mars Exploration Rovers (MER) have successfully demonstrated a robotic Visual Odometry capability on another world for the first time. This provides each rover with accurate knowledge of its position, which allows it to autonomously detect and compensate for any unforeseen slip encountered during a drive. It has enabled the rovers to drive safely …
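
As an illustration of how such slip detection can work in principle (a hedged sketch, not MER flight software), one can compare the commanded displacement with the displacement visual odometry actually measured; `slip_fraction` and the threshold are invented for this example:

```python
import numpy as np

def slip_fraction(commanded_delta, vo_delta):
    """Fraction of the commanded motion lost to slip (0 = no slip)."""
    commanded = np.linalg.norm(commanded_delta)
    # progress actually achieved along the commanded direction
    achieved = np.dot(vo_delta, commanded_delta) / commanded
    return 1.0 - achieved / commanded

cmd = np.array([0.50, 0.00, 0.00])   # commanded: 0.5 m straight ahead
vo  = np.array([0.27, 0.02, 0.00])   # displacement measured by visual odometry
s = slip_fraction(cmd, vo)
print(f"slip: {s:.0%}")              # -> slip: 46%
if s > 0.4:                          # illustrative threshold, not a flight value
    print("excessive slip: stop the drive and replan")
```
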
…on scalar models of measurement error in triangulation. Using three-dimensional (3D) Gaussian distributions to model triangulation error is shown to lead to much better performance. How to compute the error model from image correspondences, estimate robot motion between frames, and update the global positions of the robot and the landmarks over time are …
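
The covariance propagation behind such a 3D Gaussian error model can be sketched as follows, assuming a rectified stereo pinhole model with focal length f, baseline b, and isotropic pixel noise (all symbols and values are illustrative, not from the paper):

```python
import numpy as np

def triangulate_with_covariance(x, y, d, f=500.0, b=0.12, sigma_px=0.5):
    """Stereo point (x, y = left-image pixel, d = disparity) -> (P, Sigma).

    First-order propagation of image-plane noise through the triangulation
    equations gives a full 3x3 covariance rather than a scalar error.
    """
    Z = f * b / d
    P = np.array([x * Z / f, y * Z / f, Z])
    # Jacobian of (X, Y, Z) with respect to the measurements (x, y, d)
    J = np.array([
        [Z / f, 0.0,   -x * Z / (f * d)],
        [0.0,   Z / f, -y * Z / (f * d)],
        [0.0,   0.0,   -Z / d],
    ])
    Sigma_pix = (sigma_px ** 2) * np.eye(3)  # isotropic pixel noise
    Sigma = J @ Sigma_pix @ J.T              # 3D Gaussian covariance
    return P, Sigma

P, S = triangulate_with_covariance(40.0, -10.0, 8.0)
print(P)                     # the triangulated point
print(np.sqrt(np.diag(S)))   # error is much larger along the depth axis
```
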
In this paper, we present a new feature representation for first-person videos. In first-person video understanding (e.g., activity recognition), it is very important to capture both entire scene dynamics (i.e., egomotion) and salient local motion observed in videos. We describe a representation framework based on time series pooling, which is designed to …
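
A hedged sketch of time-series pooling over per-frame descriptors; the four pooling operators below illustrate the general idea and are not necessarily the exact operators of the paper:

```python
import numpy as np

def pool_time_series(F):
    """F: (T, D) array of per-frame features -> fixed-length video descriptor."""
    diffs = np.diff(F, axis=0)                 # frame-to-frame changes
    return np.concatenate([
        F.max(axis=0),                         # max pooling over time
        F.sum(axis=0),                         # sum pooling over time
        np.clip(diffs, 0, None).sum(axis=0),   # total positive change
        np.clip(-diffs, 0, None).sum(axis=0),  # total negative change
    ])

video = np.random.rand(120, 64)      # 120 frames of 64-D features
descriptor = pool_time_series(video) # fixed 256-D representation
print(descriptor.shape)              # (256,)
```
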
Autonomous navigation in cross-country environments presents many new challenges with respect to more traditional, urban environments. The lack of highly structured components in the scene complicates the design of even basic functionalities such as obstacle detection. In addition to the geometric description of the scene, terrain typing is also an …
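
For illustration, a minimal geometric obstacle test of the kind the abstract calls a basic functionality: flag elevation-grid cells whose local step height or slope exceeds a threshold (cell size and thresholds are assumptions made for this sketch):

```python
import numpy as np

def obstacle_map(heights, cell=0.2, max_step=0.25, max_slope_deg=30.0):
    """heights: (H, W) elevation grid in meters -> boolean obstacle mask."""
    dz_y = np.abs(np.diff(heights, axis=0))    # steps between row neighbors
    dz_x = np.abs(np.diff(heights, axis=1))    # steps between column neighbors
    step = np.zeros_like(heights)
    step[:-1, :] = np.maximum(step[:-1, :], dz_y)
    step[:, :-1] = np.maximum(step[:, :-1], dz_x)
    slope = np.degrees(np.arctan2(step, cell)) # local slope from rise over run
    return (step > max_step) | (slope > max_slope_deg)

grid = np.zeros((5, 5))
grid[2, 2] = 0.4                     # a 40 cm rock in otherwise flat terrain
print(obstacle_map(grid))            # cells around the rock are flagged
```
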
This paper discusses the problem of recognizing interaction-level human activities from a first-person viewpoint. The goal is to enable an observer (e.g., a robot or a wearable camera) to understand 'what activity others are performing to it' from continuous video inputs. These include friendly interactions such as 'a person hugging the observer' as well as …
In this paper, we present the vision-aided inertial navigation (VISINAV) algorithm that enables precision planetary landing. The vision front-end of the VISINAV system extracts 2-D-to-3-D correspondences between descent images and a surface map (mapped landmarks), as well as 2-D-to-2-D feature tracks through a sequence of descent images (opportunistic features) …
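
A sketch of the kind of measurement mapped landmarks provide, assuming a simple pinhole model: project known 3-D surface points through a pose guess and form the reprojection residuals a navigation filter could use as innovations (all names and values here are illustrative, not VISINAV internals):

```python
import numpy as np

def reprojection_residuals(R, t, landmarks_3d, observed_2d, f=1000.0):
    """R (3x3), t (3,): world-to-camera pose guess.

    landmarks_3d: (N, 3) mapped landmark positions in the surface frame.
    observed_2d:  (N, 2) where those landmarks were detected in the image.
    Returns (N, 2) pixel residuals (observed minus predicted).
    """
    cam = landmarks_3d @ R.T + t              # landmarks in the camera frame
    predicted = f * cam[:, :2] / cam[:, 2:3]  # pinhole projection
    return observed_2d - predicted

R = np.eye(3)
t = np.array([0.0, 0.0, 100.0])               # descending 100 m above the map
pts = np.array([[10.0, 5.0, 0.0], [-20.0, 8.0, 0.0]])
obs = np.array([[101.0, 49.0], [-199.0, 81.0]])
print(reprojection_residuals(R, t, pts, obs)) # small residuals => consistent pose
```
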
NASA scenarios for lunar and planetary missions include robotic vehicles that function in both teleoperated and semi-autonomous modes. Under teleoperation, on-board stereo cameras may provide 3-D scene information to human operators via stereographic displays; likewise, under semi-autonomy, machine stereo vision may provide 3-D information for obstacle …
Robust navigation for mobile robots over long distances requires an accurate method for tracking the robot position in the environment. Promising techniques for position estimation by determining the camera ego-motion from monocular or stereo sequences have been previously described. However, long-distance navigation requires both a high level of robustness …
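
One common building block of such stereo ego-motion estimation can be sketched as a least-squares rigid alignment of matched 3-D feature positions between frames (a generic Kabsch/SVD solution, not necessarily the method of the paper):

```python
import numpy as np

def rigid_motion(P_prev, P_curr):
    """P_prev, P_curr: (N, 3) matched 3-D points in two frames.

    Returns (R, t) with P_curr ~ P_prev @ R.T + t, the least-squares
    frame-to-frame motion, computed via SVD.
    """
    c_prev, c_curr = P_prev.mean(axis=0), P_curr.mean(axis=0)
    H = (P_prev - c_prev).T @ (P_curr - c_curr)   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # correction term to rule out reflections
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = c_curr - R @ c_prev
    return R, t

# Example: points rotated 5 degrees about Z and shifted slightly forward.
rng = np.random.default_rng(0)
P0 = rng.uniform(-5, 5, (20, 3))
a = np.radians(5.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
P1 = P0 @ R_true.T + np.array([0.1, 0.0, 0.02])
R, t = rigid_motion(P0, P1)
print(np.allclose(R, R_true), t)   # True, t ~ [0.1, 0.0, 0.02]
```
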