Corpus ID: 218581404

Reference Pose Generation for Visual Localization via Learned Features and View Synthesis

@article{Zhang2020ReferencePG,
  title={Reference Pose Generation for Visual Localization via Learned Features and View Synthesis},
  author={Zichao Zhang and Torsten Sattler and Davide Scaramuzza},
  journal={arXiv preprint arXiv:2005.05179},
  year={2020}
}
Visual localization is one of the key enabling technologies for autonomous driving and augmented reality. High-quality datasets with accurate 6-Degree-of-Freedom (DoF) reference poses are the foundation for benchmarking and improving existing methods. Traditionally, reference poses have been obtained via Structure-from-Motion (SfM). However, SfM itself relies on local features, which are prone to fail when images are taken under different conditions, e.g., day/night changes. At the same time…
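Benchmarks of the kind the abstract describes typically score a method by comparing its estimated 6-DoF camera pose against the reference pose, reporting a rotation error (angle of the relative rotation) and a position error (Euclidean distance). A minimal sketch of these two standard metrics, in pure Python with hypothetical function names (not code from the paper):

```python
import math

def mat_mul(A, B):
    # 3x3 matrix product, plain nested lists.
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

def rotation_error_deg(R_est, R_ref):
    """Angle (degrees) of the relative rotation R_est^T @ R_ref."""
    R_rel = mat_mul(transpose(R_est), R_ref)
    trace = R_rel[0][0] + R_rel[1][1] + R_rel[2][2]
    # Clamp to [-1, 1] to guard acos against floating-point drift.
    c = max(-1.0, min(1.0, (trace - 1.0) / 2.0))
    return math.degrees(math.acos(c))

def position_error(t_est, t_ref):
    """Euclidean distance between camera positions (same units as the poses)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(t_est, t_ref)))

# Example: a pose rotated 90 degrees about z, displaced 1 unit along z.
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
Rz90 = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]
print(rotation_error_deg(Rz90, I))        # 90.0
print(position_error([1, 2, 3], [1, 2, 4]))  # 1.0
```

Localization benchmarks then count the fraction of queries whose errors fall below thresholds such as (0.25 m, 2°); the accuracy of the reference poses themselves bounds how fine those thresholds can meaningfully be.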
7 Citations
Benchmarking Image Retrieval for Visual Localization
Using Image Sequences for Long-Term Visual Localization
Large-scale Localization Datasets in Crowded Indoor Spaces
Image Stylization for Robust Features
Robust Image Retrieval-based Visual Localization using Kapture

References

Showing 1-10 of 159 references
Understanding the Limitations of CNN-Based Absolute Camera Pose Regression
Local Supports Global: Deep Camera Relocalization With Sequence Enhancement
Benchmarking 6DOF Outdoor Visual Localization in Changing Conditions
Real-Time RGB-D Camera Pose Estimation in Novel Scenes Using a Relocalisation Cascade
To Learn or Not to Learn: Visual Localization from Essential Matrices
InLoc: Indoor Visual Localization with Dense Matching and View Synthesis
Are Large-Scale 3D Models Really Necessary for Accurate Visual Localization?
Deep Auxiliary Learning for Visual Localization and Odometry
Prior Guided Dropout for Robust Visual Localization in Dynamic Environments