With recent advances in mobile computing, interest in visual localization and landmark identification on mobile devices is growing. We advance the state of the art in this area by fusing two popular representations of street-level image data, facade-aligned and viewpoint-aligned, and show that they contain complementary information that can be …
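The snippet does not specify how the two representations are fused, so the following is only a minimal late-fusion sketch in Python, assuming each representation yields per-candidate match scores; the function name, the normalization, and the blending weight `alpha` are all invented for illustration.

```python
# Hypothetical late fusion of match scores from the two representations.
# The normalization and the blending weight alpha are assumptions for
# illustration, not the paper's actual fusion method.
import numpy as np

def fuse_match_scores(facade_scores, viewpoint_scores, alpha=0.5):
    """Blend per-candidate match scores from the two representations."""
    f = np.asarray(facade_scores, dtype=float)
    v = np.asarray(viewpoint_scores, dtype=float)
    # Normalize each modality so the scores are comparable before blending.
    f = f / (f.max() + 1e-9)
    v = v / (v.max() + 1e-9)
    return alpha * f + (1.0 - alpha) * v

# Rank candidate landmarks by the fused score.
fused = fuse_match_scores([0.2, 0.9, 0.4], [0.8, 0.3, 0.5])
best_candidate = int(np.argmax(fused))
```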
We survey popular data sets used in the computer vision literature and point out their limitations for mobile visual search applications. To overcome many of these limitations, we propose the Stanford Mobile Visual Search data set. The data set contains camera-phone images of products, CDs, books, outdoor landmarks, business cards, text documents, museum …
We present an automatic approach to window and façade detection from LiDAR (Light Detection and Ranging) data collected from a moving vehicle along streets in urban environments. The proposed method combines bottom-up and top-down strategies to extract façade planes from noisy LiDAR point clouds. Window detection is achieved through a …
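For a concrete picture of the bottom-up step, here is a generic RANSAC plane fit in Python; RANSAC is a stand-in of my choosing, since the snippet does not name the plane-extraction algorithm, and all thresholds are invented.

```python
# Generic RANSAC plane fit as a stand-in for the bottom-up facade-plane
# extraction; the paper's actual method and its top-down stage are not
# described here, and the tolerance/iteration counts are invented.
import numpy as np

def ransac_plane(points, iters=200, tol=0.05, seed=0):
    """Return (normal, d) of the plane n.x + d = 0 with the most inliers."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = 0, None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue  # degenerate (near-collinear) sample, try again
        n /= norm
        d = -n.dot(sample[0])
        inliers = int(np.sum(np.abs(points @ n + d) < tol))
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (n, d)
    return best_model

# Synthetic, mostly planar cloud near z = 0 with noise.
pts = np.random.default_rng(1).normal(size=(500, 3))
pts[:, 2] *= 0.02
normal, d = ransac_plane(pts)
```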
This paper presents a novel method for processing large-scale, ground-level Light Detection and Ranging (LIDAR) data to automatically detect geo-referenced navigation attributes (traffic signs and lane markings) along a data-collection travel path. A mobile data-collection device is introduced. Both the intensity of the LIDAR light return and 3-D …
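As a rough illustration of the intensity cue, retro-reflective sign faces return unusually strong LIDAR intensity, so a simple candidate filter can threshold intensity and height; this sketch and its thresholds are assumptions, not the paper's detector.

```python
# Hedged sketch of the intensity cue only: keep high-intensity, elevated
# points as traffic-sign candidates. Thresholds and units are invented;
# heights are assumed relative to the local ground surface.
import numpy as np

def sign_candidates(xyz, intensity, min_intensity=200.0, min_height=1.5):
    """xyz: (N, 3) points; intensity: (N,) LIDAR return intensities."""
    mask = (intensity > min_intensity) & (xyz[:, 2] > min_height)
    return xyz[mask]

rng = np.random.default_rng(0)
xyz = rng.uniform([-10, -10, 0], [10, 10, 5], size=(1000, 3))
intensity = rng.uniform(0, 255, size=1000)
candidates = sign_candidates(xyz, intensity)
```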
This paper presents an application that uses geo-data from Lidar and GPS/IMU to detect overpasses. Three characteristics make it a good example of processing large point sets in real time: a stream paradigm that exploits system parallelism and memory-access coherence, a multi-level detection strategy that distributes the computational burden across different …
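A minimal sketch of the stream paradigm, under the assumption that points arrive as a sequence and are processed in bounded-size chunks with a cheap coarse test before any costlier stage; the chunk size, clearance threshold, and stage logic are all invented.

```python
# Stream-paradigm sketch: bounded-memory chunking with a cheap coarse
# rejection per chunk. The multi-level logic here is invented; the paper's
# actual detection stages are not described in this snippet.
import numpy as np

def chunks(stream, size=65536):
    """Group an iterable of 3-D points into fixed-size numpy blocks."""
    buf = []
    for pt in stream:
        buf.append(pt)
        if len(buf) == size:
            yield np.array(buf)
            buf = []
    if buf:
        yield np.array(buf)

def candidate_overpass_chunks(stream, min_clearance=4.5):
    for block in chunks(stream):
        # Coarse level: does the chunk contain points high above the road?
        if block[:, 2].max() < min_clearance:
            continue  # cheap rejection keeps the stream moving
        yield block  # a finer verification level would run here

stream = iter(np.random.default_rng(2).uniform(0, 6, size=(200000, 3)))
kept = sum(1 for _ in candidate_overpass_chunks(stream))
```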
We present a novel method for upsampling mobile LiDAR data using panoramic images collected in urban environments. Our method differs from existing methods in the following respects: first, we consider point visibility with respect to a given viewpoint and use only visible points for interpolation; second, we present a multi-resolution depth-map-based …
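The visibility idea can be made concrete with a z-buffer: when points are splatted into a depth map, only the nearest point per pixel survives, so occluded points never feed the interpolation. The sketch below assumes a simple pinhole projection rather than the paper's panoramic geometry, and the intrinsics are placeholders.

```python
# Z-buffer visibility sketch: splat camera-frame points into a depth map,
# keeping only the nearest point per pixel. A pinhole camera stands in for
# the paper's panoramic model; fx, fy, cx, cy, shape are placeholders.
import numpy as np

def visible_depth_map(pts_cam, fx, fy, cx, cy, shape):
    """pts_cam: (N, 3) points in camera coordinates; shape: (rows, cols)."""
    depth = np.full(shape, np.inf)
    x, y, z = pts_cam[:, 0], pts_cam[:, 1], pts_cam[:, 2]
    front = z > 1e-6  # only points in front of the camera can be visible
    u = np.round(fx * x[front] / z[front] + cx).astype(int)
    v = np.round(fy * y[front] / z[front] + cy).astype(int)
    zf = z[front]
    inside = (u >= 0) & (u < shape[1]) & (v >= 0) & (v < shape[0])
    for ui, vi, zi in zip(u[inside], v[inside], zf[inside]):
        if zi < depth[vi, ui]:
            depth[vi, ui] = zi  # nearest point wins; farther ones are occluded
    return depth

pts = np.random.default_rng(3).uniform([-1, -1, 1], [1, 1, 10], size=(5000, 3))
dm = visible_depth_map(pts, fx=300, fy=300, cx=160, cy=120, shape=(240, 320))
```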
We introduce a new class of mobile augmented reality navigation applications that allow people to interact with transit maps in public transit stations and vehicles. Our system consists of a database of coded transit maps, a vision engine for recognizing and tracking planar objects, and a graphics engine that overlays relevant real-time navigation information …
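The snippet does not disclose the vision engine's internals, so here is a generic stand-in for planar-object recognition using OpenCV's ORB features and a RANSAC homography; the function name and thresholds are assumptions.

```python
# Stand-in planar recognition pipeline with OpenCV (ORB + RANSAC
# homography); this is a generic sketch, not the paper's vision engine.
import cv2
import numpy as np

def locate_transit_map(query_img, db_img, min_matches=15):
    """Return the homography mapping db_img coordinates into query_img."""
    orb = cv2.ORB_create(nfeatures=1000)
    kq, dq = orb.detectAndCompute(query_img, None)
    kd, dd = orb.detectAndCompute(db_img, None)
    if dq is None or dd is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(dq, dd)
    if len(matches) < min_matches:
        return None
    src = np.float32([kd[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kq[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H  # overlay graphics by warping them through H
```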