You Are Here: Geolocation by Embedding Maps and Images

Obed Noé Samano Abonce, Mengjie Zhou, Andrew Calway
We present a novel approach to geolocalising panoramic images on a 2-D cartographic map based on learning a low-dimensional embedded space, which allows comparison between an image captured at a location and local neighbourhoods of the map. The representation is not sufficiently discriminative to allow localisation from a single image, but when concatenated along a route, localisation converges quickly, with over 90% accuracy achieved for routes of around 200 m in length when using…
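The route-matching idea can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the use of cosine similarity, and the dictionary of candidate routes are all assumptions. Per-image embeddings are concatenated along the travelled route and compared against concatenated embeddings of the map's local neighbourhoods along each candidate route.

```python
from math import sqrt

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def match_route(image_embeddings, candidate_map_routes):
    """Score each candidate map route against the images captured along
    the travelled route (illustrative sketch).

    image_embeddings: list of per-image embedding vectors (lists of floats),
        concatenated into one long route descriptor
    candidate_map_routes: dict mapping a route id to the list of embeddings
        of the map's local neighbourhoods along that route
    Returns (best-matching route id, similarity score).
    """
    query = [x for emb in image_embeddings for x in emb]
    best_id, best_score = None, float("-inf")
    for route_id, map_embeddings in candidate_map_routes.items():
        ref = [x for emb in map_embeddings for x in emb]
        score = cosine_similarity(query, ref)
        if score > best_score:
            best_id, best_score = route_id, score
    return best_id, best_score
```

The key property the abstract describes falls out of the concatenation: a single noisy embedding is ambiguous, but the joint descriptor over a route becomes discriminative quickly.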
Efficient Large-Scale Semantic Visual Localization in 2D Maps
This work proposes a novel framework for semantic visual localization in city-scale environments which alleviates the aforementioned problem by using freely available 2D maps such as OpenStreetMap, and evaluates the localization framework on two large-scale datasets covering Cambridge and San Francisco.
Leveraging an Efficient and Semantic Location Embedding to Seek New Ports of Bike Share Services
A new model, named Efficient and Semantic Location Embedding (ESLE), which carries both geospatial and semantic information about geo-locations and is not only much cheaper to compute but also easier to interpret via systematic semantic analysis.
Efficient Localisation Using Images and OpenStreetMaps
The ability to localise is key for robot navigation. We describe an efficient method for vision-based localisation, which combines sequential Monte Carlo tracking with matching ground-level images to


Semantic Image Based Geolocation Given a Map
An approach is presented for geo-locating a novel view and determining camera location and orientation using a map and a sparse set of geo-tagged reference views; it is evaluated for building identification and geo-localization on a new, challenging outdoor urban dataset exhibiting large variations in appearance and viewpoint.
IM2GPS: estimating geographic information from a single image
This paper proposes a simple algorithm for estimating a distribution over geographic locations from a single image using a purely data-driven scene matching approach and shows that geolocation estimates can provide the basis for numerous other image understanding tasks such as population density estimation, land cover estimation or urban/rural classification.
Automated Map Reading: Image Based Localisation in 2-D Maps Using Binary Semantic Descriptors
A novel approach to image-based localisation in urban environments which uses semantic matching between images and a 2-D cartographic map; this significantly increases scalability and has the potential for greater invariance to variable imaging conditions.
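The binary-descriptor idea admits a very compact sketch. The specific semantic features below (junction present, gaps and buildings to either side) and the bit layout are hypothetical choices for illustration, not the paper's actual descriptor:

```python
def semantic_descriptor(junction, gap_left, gap_right, building_left, building_right):
    """Pack binary semantic observations into a compact bit descriptor.
    The feature choice here is illustrative, not the published design."""
    d = 0
    for bit in (junction, gap_left, gap_right, building_left, building_right):
        d = (d << 1) | int(bool(bit))
    return d

def hamming(a, b):
    # Number of differing bits between two descriptors.
    return bin(a ^ b).count("1")

def best_map_location(query, map_descriptors):
    # Return the map location whose stored descriptor is closest
    # to the query descriptor in Hamming distance.
    return min(map_descriptors, key=lambda loc: hamming(query, map_descriptors[loc]))
```

Because each map location stores only a few bits and matching is a XOR plus popcount, the whole map fits in memory and lookups are cheap, which is where the scalability claim comes from.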
Image-Based Geo-Localization Using Satellite Imagery
A Markov localization framework is proposed that enforces temporal consistency between image frames to improve geo-localization when a video stream of ground-view images is available, continuously localizing the vehicle with small error on the authors' Singapore dataset.
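A generic discrete Markov localization update, of the kind such frameworks build on, looks like the sketch below. This is a textbook Bayes-filter step under assumed names and a grid discretization, not the paper's implementation; the likelihood would come from matching the ground-view frame against satellite imagery.

```python
import numpy as np

def markov_update(belief, transition, likelihood):
    """One step of discrete Markov localization over N map cells.

    belief:     (N,) prior probability over map cells
    transition: (N, N) motion model, transition[i, j] = P(next=i | prev=j)
    likelihood: (N,) P(current image | cell), e.g. from cross-view matching
    Returns the normalized posterior belief.
    """
    predicted = transition @ belief     # prediction (motion) step
    posterior = predicted * likelihood  # measurement update
    return posterior / posterior.sum()  # normalize to a distribution
```

Chaining this update across frames is exactly the temporal-consistency enforcement the abstract describes: a single ambiguous frame barely moves the belief, but consistent evidence over a video stream concentrates it.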
Find your way by observing the sun and other semantic cues
This paper utilizes freely available cartographic maps and derives a probabilistic model that exploits semantic cues in the form of sun direction, presence of an intersection, road type, speed limit and ego-car trajectory to produce very reliable localization results.
Wide-Area Image Geolocalization with Aerial Reference Imagery
We propose to use deep convolutional neural networks to address the problem of cross-view image geolocalization, in which the geolocation of a ground-level query image is estimated by matching to
Learning deep representations for ground-to-aerial geolocalization
This work localizes a ground-level query image by matching it to a reference database of aerial imagery and shows the effectiveness of Where-CNN in finding matches between street view and aerial view imagery and the ability of the learned features to generalize to novel locations.
OpenStreetSLAM: Global vehicle localization using OpenStreetMaps
An approach for global vehicle localization is proposed that combines visual odometry with map information from OpenStreetMap to provide robust and accurate estimates of the vehicle's position, demonstrating in parallel the potential that map data can bring to the global localization task.
Cross-View Image Matching for Geo-Localization in Urban Environments
A new framework for cross-view image geo-localization is presented that takes advantage of the tremendous success of deep convolutional neural networks (CNNs) in image classification and object detection; the framework is able to generalize to images at unseen locations.
Global Localization on OpenStreetMap Using 4-bit Semantic Descriptors
This paper proposes an approach that builds upon publicly available map information from OpenStreetMap and turns it into a compact map representation usable for Monte Carlo localization, which requires storing only a tiny 4-bit descriptor per location and is still able to globally localize and track a vehicle.
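A single Monte Carlo localization step with such compact per-location descriptors can be sketched as below. The 1-D motion model, the weight formula, and all names are simplifying assumptions for illustration; the paper's actual motion and observation models differ.

```python
import random

def mcl_step(particles, motion, map_descriptor, observed):
    """One Monte Carlo localization step (illustrative sketch).

    particles:      list of integer map locations (one per particle)
    motion:         offset applied to every particle (simplified 1-D motion)
    map_descriptor: function location -> small descriptor stored in the map
    observed:       descriptor computed from the current camera image
    Returns the resampled particle set.
    """
    moved = [p + motion for p in particles]
    # Weight particles by descriptor agreement:
    # fewer differing bits -> higher weight (assumed weighting scheme).
    weights = [1.0 / (1 + bin(map_descriptor(p) ^ observed).count("1"))
               for p in moved]
    # Resample with replacement in proportion to the weights.
    return random.choices(moved, weights=weights, k=len(moved))
```

Repeating the step lets the particle cloud collapse onto locations whose stored descriptors keep agreeing with the observations, which is how a descriptor of only a few bits per location still suffices for global localization over time.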