Corpus ID: 227126953

CORAL: Colored structural representation for bi-modal place recognition

@article{Pan2020CORALCS,
  title={CORAL: Colored structural representation for bi-modal place recognition},
  author={Yiyuan Pan and Xuecheng Xu and Weijie Li and Yue Wang and Rong Xiong},
  journal={ArXiv},
  year={2020},
  volume={abs/2011.10934}
}
Place recognition is indispensable for a drift-free localization system. Due to variations of the environment, place recognition using a single modality has limitations. In this paper, we propose a bi-modal place recognition method which can extract a compound global descriptor from two modalities, vision and LiDAR. Specifically, we build an elevation image generated from the point cloud modality as a discriminative structural representation. Based on the 3D information, we derive the…
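A minimal sketch of the elevation-image idea the abstract describes: project the point cloud onto a top-down grid and keep the maximum height per cell. The cell size and extent here are illustrative placeholders, and the paper's actual construction may differ:

import numpy as np

def elevation_image(points, cell=0.5, extent=50.0):
    """Project an (N, 3) point cloud onto a top-down grid,
    keeping the maximum height per cell (a simple elevation map)."""
    size = int(2 * extent / cell)
    img = np.full((size, size), -np.inf, dtype=np.float32)
    # Keep points inside the square region around the sensor.
    mask = (np.abs(points[:, 0]) < extent) & (np.abs(points[:, 1]) < extent)
    pts = points[mask]
    ix = ((pts[:, 0] + extent) / cell).astype(int)
    iy = ((pts[:, 1] + extent) / cell).astype(int)
    np.maximum.at(img, (iy, ix), pts[:, 2])   # max z per cell
    img[np.isinf(img)] = 0.0                  # empty cells -> 0
    return img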
Citations

MinkLoc++: Lidar and Monocular Image Fusion for Place Recognition
TLDR: This work introduces a discriminative multimodal descriptor based on a pair of sensor readings, a point cloud from a LiDAR and an image from an RGB camera, and uses a late fusion approach in which each modality is processed separately and fused in the final part of the processing pipeline.
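A hedged sketch of the late-fusion pattern the summary above describes, with placeholder encoders standing in for the actual MinkLoc++ point-cloud and image branches:

import torch
import torch.nn as nn

class LateFusionDescriptor(nn.Module):
    """Each modality is embedded separately; the global descriptor
    is formed only at the end (here by concatenation)."""
    def __init__(self, cloud_encoder, image_encoder):
        super().__init__()
        self.cloud_encoder = cloud_encoder   # point cloud -> (B, Dc)
        self.image_encoder = image_encoder   # RGB image  -> (B, Di)

    def forward(self, cloud, image):
        dc = self.cloud_encoder(cloud)
        di = self.image_encoder(image)
        d = torch.cat([dc, di], dim=1)       # (B, Dc + Di)
        return nn.functional.normalize(d, dim=1)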
Radar-to-Lidar: Heterogeneous Place Recognition via Joint Learning
TLDR: A heterogeneous measurement based framework is proposed for long-term place recognition, which retrieves query radar scans from existing lidar (Light Detection and Ranging) maps.

References

Showing 1-10 of 38 references
Large-Scale Place Recognition Based on Camera-LiDAR Fused Descriptor
TLDR: A fusion network that robustly captures both image and point cloud descriptors to solve the place recognition problem; experiments show that the proposed fused descriptor is more robust and discriminative than single-sensor descriptors.
1-Day Learning, 1-Year Localization: Long-Term LiDAR Localization Using Scan Context Image
TLDR: A long-term localization method that effectively exploits the structural information of an environment via an image format, and is faster than existing place recognition methods because it avoids a pairwise comparison between a query and the scans in a database.
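The "image format" here is the Scan Context descriptor: the cloud is binned into radial rings and azimuthal sectors, with the maximum point height stored per bin. A simplified sketch, with bin counts chosen for illustration:

import numpy as np

def scan_context(points, num_rings=20, num_sectors=60, max_range=80.0):
    """Polar binning of an (N, 3) cloud: rows are range rings,
    columns are azimuth sectors, values are max height per bin."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.hypot(x, y)
    theta = np.arctan2(y, x) + np.pi            # [0, 2*pi)
    keep = r < max_range
    ring = (r[keep] / max_range * num_rings).astype(int)
    sector = (theta[keep] / (2 * np.pi) * num_sectors).astype(int)
    sector = np.clip(sector, 0, num_sectors - 1)
    # Empty bins stay 0; heights below 0 are ignored in this simplified form.
    sc = np.zeros((num_rings, num_sectors), dtype=np.float32)
    np.maximum.at(sc, (ring, sector), z[keep])
    return sc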
SHOT: Unique signatures of histograms for surface and texture description
TLDR: A thorough experimental evaluation shows that SHOT outperforms state-of-the-art local descriptors in experiments addressing descriptor matching for object recognition, 3D reconstruction, and shape retrieval.
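SHOT's core signature bins the cosine of the angle between each neighbor's normal and the keypoint normal, accumulated over a spatial grid around the keypoint. A deliberately simplified sketch of just the angular histogram, ignoring the spatial grid and the local reference frame:

import numpy as np

def cosine_histogram(key_normal, neighbor_normals, bins=11):
    """Histogram of cos(angle) between the keypoint normal and its
    neighbors' normals; SHOT accumulates such histograms per spatial cell."""
    cos = neighbor_normals @ key_normal          # (K,) values in [-1, 1]
    hist, _ = np.histogram(cos, bins=bins, range=(-1.0, 1.0))
    return hist / max(hist.sum(), 1)             # normalized descriptor part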
NetVLAD: CNN Architecture for Weakly Supervised Place Recognition
TLDR: A convolutional neural network architecture that is trainable end-to-end directly for the place recognition task is developed, together with an efficient training procedure that can be applied to very large-scale weakly labelled data.
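A compact sketch of the NetVLAD pooling idea: soft-assign local CNN features to learned cluster centres and aggregate the residuals. Dimensions are illustrative, and the original uses a convolutional assignment rather than the linear layer shown here:

import torch
import torch.nn as nn
import torch.nn.functional as F

class NetVLAD(nn.Module):
    """Soft-assignment VLAD pooling over local features (B, N, D)."""
    def __init__(self, num_clusters=64, dim=512):
        super().__init__()
        self.centroids = nn.Parameter(torch.randn(num_clusters, dim))
        self.assign = nn.Linear(dim, num_clusters)   # soft cluster assignment

    def forward(self, x):                    # x: (B, N, D)
        a = F.softmax(self.assign(x), dim=2) # (B, N, K)
        resid = x.unsqueeze(2) - self.centroids          # (B, N, K, D)
        vlad = (a.unsqueeze(3) * resid).sum(dim=1)       # (B, K, D)
        vlad = F.normalize(vlad, dim=2)      # intra-normalization
        vlad = vlad.reshape(x.shape[0], -1)
        return F.normalize(vlad, dim=1)      # (B, K*D) global descriptor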
PointNetVLAD: Deep Point Cloud Based Retrieval for Large-Scale Place Recognition
TLDR: This paper proposes a combination and modification of the existing PointNet and NetVLAD that allows end-to-end training and inference to extract a global descriptor from a given 3D point cloud, and proposes "lazy triplet and quadruplet" loss functions that yield more discriminative and generalizable global descriptors for the retrieval task.
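A sketch of the "lazy triplet" idea as the summary describes it: instead of averaging over all negatives, only the hardest (closest) negative in the tuple contributes to the hinge. Descriptor tensors are assumed precomputed:

import torch

def lazy_triplet_loss(query, positive, negatives, margin=0.5):
    """query, positive: (D,); negatives: (M, D).
    Penalizes only the hardest (closest) negative."""
    d_pos = torch.sum((query - positive) ** 2)
    d_neg = torch.sum((query.unsqueeze(0) - negatives) ** 2, dim=1)
    return torch.clamp(margin + d_pos - d_neg.min(), min=0.0)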
LocNet: Global Localization in 3D Point Clouds for Mobile Vehicles
TLDR: A semi-handcrafted representation learning method for LiDAR point clouds using siamese LocNets, which casts the place recognition problem as a similarity modeling problem; a global localization framework with range-only observations is also proposed.
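A minimal sketch of the siamese pattern: two inputs pass through the same encoder with shared weights, and place recognition reduces to modeling the similarity of the embeddings. The encoder is a placeholder, not LocNet's handcrafted rotation-invariant representation:

import torch
import torch.nn as nn

class Siamese(nn.Module):
    """Shared-weight encoder; similarity is the L2 distance
    between the two embeddings."""
    def __init__(self, encoder):
        super().__init__()
        self.encoder = encoder               # same weights for both branches

    def forward(self, a, b):
        ea, eb = self.encoder(a), self.encoder(b)
        return torch.norm(ea - eb, dim=1)    # small distance = same place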
PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation
TLDR: This paper designs a novel type of neural network that directly consumes point clouds, which respects the permutation invariance of points in the input and provides a unified architecture for applications ranging from object classification and part segmentation to scene semantic parsing.
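The permutation invariance comes from applying the same per-point MLP everywhere and pooling with a symmetric function (max). A bare-bones sketch, omitting PointNet's input and feature transforms:

import torch
import torch.nn as nn

class TinyPointNet(nn.Module):
    """Per-point shared MLP followed by symmetric max-pooling,
    so the output is invariant to the ordering of input points."""
    def __init__(self, dim=1024):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, dim), nn.ReLU(),
        )

    def forward(self, points):              # (B, N, 3)
        feats = self.mlp(points)            # same MLP for every point
        return feats.max(dim=1).values      # (B, dim) global feature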
LPD-Net: 3D Point Cloud Learning for Large-Scale Place Recognition and Environment Analysis
TLDR: A novel deep neural network, named LPD-Net (Large-scale Place Description Network), which extracts discriminative and generalizable global descriptors from raw 3D point clouds and reaches state-of-the-art performance.
Large-Scale Image Retrieval with Attentive Deep Local Features
TLDR: An attentive local feature descriptor suitable for large-scale image retrieval, referred to as DELF (DEep Local Feature), based on convolutional neural networks trained only with image-level annotations on a landmark image dataset.
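In DELF, a small attention head scores each dense local feature, and training on image-level labels with score-weighted pooling teaches the head which features matter. A rough sketch of that scoring, with an illustrative feature size:

import torch
import torch.nn as nn

class AttentiveFeatures(nn.Module):
    """Scores dense local features (B, N, D) with a small attention
    head; weighted-sum pooling lets image-level labels select
    which local features matter."""
    def __init__(self, dim=1024):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(dim, 512), nn.ReLU(),
            nn.Linear(512, 1), nn.Softplus(),  # non-negative scores
        )

    def forward(self, feats):               # (B, N, D)
        w = self.score(feats)                # (B, N, 1)
        pooled = (w * feats).sum(dim=1)      # attention-weighted pooling
        return pooled, w.squeeze(-1)         # descriptor + per-feature scores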
Are we ready for autonomous driving? The KITTI vision benchmark suite
TLDR: The autonomous driving platform is used to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM, and 3D object detection, revealing that methods ranking high on established datasets such as Middlebury perform below average when moved outside the laboratory to the real world.
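KITTI stores each Velodyne scan as a flat float32 binary file with four values per point (x, y, z, reflectance). A small loader; the file name is a placeholder:

import numpy as np

def load_kitti_scan(path):
    """Read a KITTI .bin Velodyne scan into an (N, 4) array of
    x, y, z, reflectance (float32)."""
    return np.fromfile(path, dtype=np.float32).reshape(-1, 4)

# e.g. points = load_kitti_scan("000000.bin")[:, :3]  # drop reflectance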