We present a wide-baseline image matching approach based on line segments. Line segments are clustered into local groups according to spatial proximity. Each group is treated as a feature called a Line Signature. Similar to local features, line signatures are robust to occlusion, image clutter, and viewpoint changes. The descriptor and similarity measure of …
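As a rough illustration of the grouping step described above, the sketch below clusters line segments into local groups by midpoint proximity; the segment representation, the neighbour count k, and the function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def group_segments(segments, k=5):
    """segments: (N, 4) array of endpoints [x1, y1, x2, y2].
    Returns one local group per segment: the segment plus its k
    spatially nearest neighbours (by midpoint distance)."""
    segs = np.asarray(segments, dtype=float)
    mids = (segs[:, :2] + segs[:, 2:]) / 2.0            # segment midpoints
    dist = np.linalg.norm(mids[:, None, :] - mids[None, :, :], axis=2)
    np.fill_diagonal(dist, np.inf)                       # exclude the segment itself
    neighbours = np.argsort(dist, axis=1)[:, :k]         # k nearest by proximity
    return [np.r_[i, neighbours[i]] for i in range(len(segs))]

# Example: 20 random segments, each grouped with its 3 nearest neighbours.
groups = group_segments(np.random.default_rng(0).uniform(0, 100, (20, 4)), k=3)
```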
The biggest single obstacle to building effective augmented reality (AR) systems is the lack of accurate wide-area sensors for trackers that report the locations and orientations of objects in an environment. Active (sensor-emitter) tracking technologies require powered-device installation, limiting their use to prepared areas that are relatively free of …
Natural scene features stabilize and extend the tracking range of augmented reality (AR) pose-tracking systems. We develop robust computer vision methods to detect and track natural features in video images. Point and region features are automatically and adaptively selected for properties that lead to robust tracking. A multistage tracking algorithm …
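A minimal sketch of this kind of pipeline, assuming OpenCV is available: Shi-Tomasi corner selection stands in for the adaptive point-feature selection, and pyramidal Lucas-Kanade flow for the frame-to-frame tracking stage; the parameter values are placeholders, not the paper's settings.

```python
import cv2
import numpy as np

def track_features(prev_gray, next_gray):
    """prev_gray, next_gray: consecutive grayscale video frames (uint8)."""
    # Select corner points whose local gradient structure makes them trackable.
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=10)
    if pts is None:
        return np.empty((0, 2)), np.empty((0, 2))
    # Track the selected points into the next frame with pyramidal LK flow.
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)
    good = status.ravel() == 1
    return pts[good].reshape(-1, 2), nxt[good].reshape(-1, 2)
```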
An Augmented Virtual Environment (AVE) fuses dynamic imagery with 3D models. The AVE provides a unique approach to visualizing and comprehending multiple streams of temporal data or images. Models are used as a 3D substrate for the visualization of temporal imagery, providing improved comprehension of scene activities. The core elements of AVE systems include …
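One way to picture the "models as a 3D substrate" idea is projective texture mapping: each model vertex is projected through the sensor camera into the video frame to obtain texture coordinates, so the frame can be draped onto the model. The sketch below assumes a calibrated 3x4 projection matrix P = K [R | t]; it is an illustration, not the paper's pipeline.

```python
import numpy as np

def projective_texture_coords(P, vertices):
    """P: 3x4 camera projection matrix; vertices: (N, 3) model points.
    Returns (N, 2) pixel coordinates used to drape the frame onto the model."""
    V = np.hstack([vertices, np.ones((len(vertices), 1))])   # homogeneous points
    uvw = V @ P.T                                             # project into the image
    return uvw[:, :2] / uvw[:, 2:3]                           # dehomogenize to pixels
```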
Tracking, or camera pose determination, is the main technical challenge in creating augmented realities. Constraining the degree to which the environment may be altered to support tracking heightens the challenge. This paper describes several years of work at the USC Computer Graphics and Immersive Technologies (CGIT) laboratory to develop self-contained, …
Figure 1: Tracking and annotating an object using graph cut segmentation. A building sign on the USC campus is first detected using simple recognition, after which no additional information is needed. As the camera moves, we segment and track the sign through significant scale and orientation changes, rendering an annotation above it. This example …
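A minimal sketch of the segment-and-track idea, assuming OpenCV: the detector's bounding box seeds a GrabCut graph-cut segmentation of the sign, and the resulting mask can be reused to initialize the next frame. The rectangle seeding and iteration count are assumptions, not the paper's formulation.

```python
import cv2
import numpy as np

def segment_with_graph_cut(frame_bgr, rect):
    """frame_bgr: color frame; rect: (x, y, w, h) box from the initial detection."""
    mask = np.zeros(frame_bgr.shape[:2], np.uint8)
    bgd = np.zeros((1, 65), np.float64)   # background GMM state
    fgd = np.zeros((1, 65), np.float64)   # foreground GMM state
    cv2.grabCut(frame_bgr, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    # Definite or probable foreground pixels form the object mask.
    fg = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
    return fg.astype(np.uint8) * 255
```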
We present a real-time hybrid tracking system that integrates gyroscopes and line-based vision tracking technology. Gyroscope measurements are used to predict orientation and image line positions. Gyroscope drift is corrected by vision tracking. System robustness is achieved by using a heuristic control system to evaluate measurement quality and select …
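An illustrative complementary-filter sketch of the prediction/correction loop: gyro rates are integrated to predict orientation, and a vision estimate corrects the accumulated drift when it passes a quality check. The blend gain and quality threshold are placeholder values, not the paper's heuristic control system, and orientation is kept as a simple yaw/pitch/roll vector for clarity.

```python
import numpy as np

def fuse_orientation(orientation, gyro_rate, dt,
                     vision_orientation=None, vision_quality=0.0,
                     gain=0.1, quality_threshold=0.5):
    """orientation, gyro_rate: 3-vectors (rad, rad/s); dt: time step in seconds."""
    predicted = orientation + gyro_rate * dt      # gyro prediction (accumulates drift)
    if vision_orientation is not None and vision_quality > quality_threshold:
        # A trusted vision measurement pulls the estimate back, correcting drift.
        predicted = (1.0 - gain) * predicted + gain * np.asarray(vision_orientation)
    return predicted
```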
In this paper we present a novel vision-based system for the automatic detection and extraction of complex road networks from various sensor sources such as aerial photographs, satellite images, and LiDAR. Uniquely, the proposed system is an integrated solution that merges the power of perceptual grouping theory (Gabor filtering, tensor voting) and optimized …
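A small sketch of the Gabor-filtering front end mentioned above, assuming OpenCV: an oriented filter bank whose per-pixel maximum response highlights elongated, road-like structure. The kernel parameters are illustrative, and the tensor-voting and optimization stages are not shown.

```python
import cv2
import numpy as np

def oriented_response(gray, n_orientations=8):
    """gray: single-channel image. Returns the max Gabor response over orientations."""
    responses = []
    for i in range(n_orientations):
        theta = i * np.pi / n_orientations
        kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                                    lambd=10.0, gamma=0.5, psi=0.0)
        responses.append(cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kernel))
    # Elongated structures respond strongly at their dominant orientation.
    return np.max(np.stack(responses), axis=0)
```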
We present a novel procedure to extract ground road networks from airborne LiDAR data. First, the point clouds are separated into ground and non-ground parts, and roads are extracted from the ground points. Then, buildings and trees are distinguished in an energy minimization framework that incorporates two new features. The separation …
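As a rough illustration of the first separation step, the sketch below labels as ground those LiDAR points lying within a small tolerance of the lowest elevation in their grid cell; the cell size and tolerance are assumed values, and the energy-minimization classification of buildings and trees is not shown.

```python
import numpy as np

def split_ground(points, cell=2.0, tol=0.3):
    """points: (N, 3) array of x, y, z coordinates (metres).
    Returns a boolean mask that is True for points labelled ground."""
    cells = np.floor(points[:, :2] / cell).astype(int)        # 2D grid cell index
    _, inverse = np.unique(cells, axis=0, return_inverse=True)
    ground = np.zeros(len(points), dtype=bool)
    for cell_id in range(inverse.max() + 1):
        in_cell = inverse == cell_id
        z_min = points[in_cell, 2].min()
        # Points near the lowest elevation in the cell are treated as ground.
        ground[in_cell] = points[in_cell, 2] <= z_min + tol
    return ground
```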