Corpus ID: 224704598

SelfVoxeLO: Self-supervised LiDAR Odometry with Voxel-based Deep Neural Networks

@article{Xu2020SelfVoxeLOSL,
  title={SelfVoxeLO: Self-supervised LiDAR Odometry with Voxel-based Deep Neural Networks},
  author={Yan Xu and Zhaoyang Huang and Kwan-Yee Lin and Xinge Zhu and Jianping Shi and Hujun Bao and Guofeng Zhang and Hongsheng Li},
  journal={ArXiv},
  year={2020},
  volume={abs/2010.09343}
}
Recent learning-based LiDAR odometry methods have demonstrated their competitiveness. However, most methods still face two substantial challenges: 1) the 2D projection representation of LiDAR data cannot effectively encode 3D structures from the point clouds; 2) the need for a large amount of labeled training data limits the application scope of these methods. In this paper, we propose a self-supervised LiDAR odometry method, dubbed SelfVoxeLO, to tackle these two difficulties. Specifically…
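The abstract contrasts 2D projection with a voxel-based encoding of the raw point cloud. As a rough, illustrative sketch only (not the authors' released pipeline; the crop range and voxel size below are arbitrary assumptions), the following NumPy snippet shows how a LiDAR sweep can be discretized into occupied voxel coordinates before being passed to a 3D CNN:

import numpy as np

def voxelize(points, voxel_size=(0.2, 0.2, 0.2),
             pc_range=(-50.0, -50.0, -3.0, 50.0, 50.0, 1.0)):
    """Map raw LiDAR points (N, 3) to integer voxel coordinates.

    Returns the unique occupied voxel coordinates and, for each kept
    point, the index of the voxel it falls into. The defaults are
    illustrative, not the values used in the paper.
    """
    pts = np.asarray(points, dtype=np.float32)
    lo = np.array(pc_range[:3], dtype=np.float32)
    hi = np.array(pc_range[3:], dtype=np.float32)
    size = np.array(voxel_size, dtype=np.float32)

    # Drop points outside the crop range.
    pts = pts[np.all((pts >= lo) & (pts < hi), axis=1)]

    # Integer (x, y, z) voxel indices relative to the crop origin.
    coords = np.floor((pts - lo) / size).astype(np.int64)
    voxels, point_to_voxel = np.unique(coords, axis=0, return_inverse=True)
    return voxels, point_to_voxel

# Example: a fake sweep of 10k points.
pts = np.random.uniform([-40, -40, -2], [40, 40, 0.5], size=(10000, 3))
occupied, inv = voxelize(pts)
print(occupied.shape)   # (number_of_occupied_voxels, 3)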
3 Citations

UnDeepLIO: Unsupervised Deep Lidar-Inertial Odometry
  • Yiming Tu, Jin Xie
  • Computer Science
  • 2021
Extensive research efforts have been dedicated to deep-learning-based odometry; nonetheless, few have addressed unsupervised deep LiDAR odometry. In this paper, we design a novel framework…
Voxel-based representation of 3D point clouds: Methods, applications, and its potential use in the construction industry
TLDR: A thorough review of the state-of-the-art methods and applications of voxel-based point cloud representations from a collection of papers in the recent decade, focusing on the creation and utilization of voxels, as well as the strengths and weaknesses of various voxel-based methods.
VS-Net: Voting with Segmentation for Visual Localization
TLDR: A novel prototype-based triplet loss with hard negative mining is proposed, which is able to train semantic segmentation networks with a large number of labels efficiently and can outperform state-of-the-art visual localization methods.
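For readers unfamiliar with the loss family mentioned above, a generic batch-hard triplet loss with hard negative mining looks roughly like the sketch below (a hedged illustration, not VS-Net's exact formulation; the margin value and batch layout are assumptions):

import numpy as np

def batch_hard_triplet_loss(embeddings, labels, margin=0.3):
    """Triplet loss with in-batch hard negative mining (sketch).

    embeddings: (N, D) descriptors, ideally L2-normalized.
    labels:     (N,) integer ids of the prototype/segment each row belongs to.
    """
    emb = np.asarray(embeddings, dtype=np.float32)
    labels = np.asarray(labels)
    # Pairwise squared Euclidean distances.
    d2 = ((emb[:, None, :] - emb[None, :, :]) ** 2).sum(-1)

    same = labels[:, None] == labels[None, :]
    eye = np.eye(len(labels), dtype=bool)

    # Hardest positive: farthest same-label sample (excluding self).
    pos = np.where(same & ~eye, d2, -np.inf).max(axis=1)
    # Hardest negative: closest different-label sample.
    neg = np.where(~same, d2, np.inf).min(axis=1)

    loss = np.maximum(pos - neg + margin, 0.0)
    valid = np.isfinite(pos) & np.isfinite(neg)   # anchors with both a positive and a negative
    return float(loss[valid].mean()) if valid.any() else 0.0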

References

Showing 1–10 of 39 references
Unsupervised Geometry-Aware Deep LiDAR Odometry
TLDR: This work focuses on unsupervised learning for LiDAR odometry (LO) without trainable labels and introduces an uncertainty-aware loss with geometric confidence, thereby improving the reliability of the proposed pipeline.
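An uncertainty-aware loss of the kind referenced above is commonly realized by weighting per-point residuals with a predicted confidence and regularizing that confidence so it cannot collapse. The sketch below shows this general recipe (a log-variance parameterization is assumed; it is not the paper's exact equation):

import numpy as np

def uncertainty_weighted_loss(residuals, log_var):
    """Confidence-weighted registration loss (sketch).

    residuals: (N,) per-point alignment errors, e.g. point-to-plane distances.
    log_var:   (N,) predicted log-variance; exp(-log_var) acts as confidence.
    """
    r = np.asarray(residuals, dtype=np.float32)
    lv = np.asarray(log_var, dtype=np.float32)
    # Down-weight unreliable points; the +log_var term penalizes
    # declaring every point unreliable.
    return float(np.mean(np.exp(-lv) * r ** 2 + lv))

# Example: the low-confidence (high log-variance) point barely contributes.
print(uncertainty_weighted_loss([0.02, 0.5, 0.03], [-2.0, 3.0, -2.0]))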
L3-Net: Towards Learning Based LiDAR Localization for Autonomous Driving
TLDR: This work uses various deep neural network structures to establish a learning-based LiDAR localization system that achieves centimeter-level localization accuracy, comparable to prior state-of-the-art systems built on hand-crafted pipelines.
LO-Net: Deep Real-Time Lidar Odometry
TLDR: A novel deep convolutional network pipeline, LO-Net, for real-time LiDAR odometry estimation that outperforms existing learning-based approaches and achieves accuracy similar to the state-of-the-art geometry-based approach, LOAM.
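Odometry networks of this kind predict frame-to-frame motion; benchmark evaluation then chains those relative transforms into an absolute trajectory. A small sketch of that bookkeeping step (generic SE(3) composition, not LO-Net code):

import numpy as np

def accumulate_poses(relative_poses):
    """Chain 4x4 relative transforms T_{k-1,k} into absolute poses T_{0,k}."""
    pose = np.eye(4)
    trajectory = [pose.copy()]
    for T in relative_poses:
        pose = pose @ np.asarray(T, dtype=np.float64)
        trajectory.append(pose.copy())
    return trajectory

# Example: three identical 1 m forward steps along x.
step = np.eye(4)
step[0, 3] = 1.0
print(accumulate_poses([step, step, step])[-1][:3, 3])   # -> [3. 0. 0.]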
Frustum PointNets for 3D Object Detection from RGB-D Data
TLDR: This work operates directly on raw point clouds by popping up RGB-D scans, leveraging both mature 2D object detectors and advanced 3D deep learning for object localization, and achieves efficiency as well as high recall, even for small objects.
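The "popping up" step amounts to keeping only the points whose image projections fall inside a 2D detection box, i.e. the frustum behind that box. A hedged sketch of this cropping (pinhole projection with an assumed intrinsic matrix; not the released Frustum PointNets code):

import numpy as np

def frustum_crop(points_cam, K, box2d):
    """Select points whose projections land inside a 2D box (sketch).

    points_cam: (N, 3) points already expressed in the camera frame (z forward).
    K:          (3, 3) pinhole intrinsics.
    box2d:      (xmin, ymin, xmax, ymax) in pixels.
    """
    pts = np.asarray(points_cam, dtype=np.float64)
    pts = pts[pts[:, 2] > 0.1]                 # ignore points behind the camera
    uvw = (K @ pts.T).T
    u, v = uvw[:, 0] / uvw[:, 2], uvw[:, 1] / uvw[:, 2]
    xmin, ymin, xmax, ymax = box2d
    inside = (u >= xmin) & (u < xmax) & (v >= ymin) & (v < ymax)
    return pts[inside]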
Pseudo-LiDAR++: Accurate Depth for 3D Object Detection in Autonomous Driving
TLDR: This paper provides substantial advances to the pseudo-LiDAR framework through improvements in stereo depth estimation, and proposes a depth-propagation algorithm, guided by the initial depth estimates, to diffuse a small number of exact (sparse LiDAR) measurements across the entire depth map.
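Pseudo-LiDAR pipelines convert an estimated depth map back into a 3D point cloud before running a LiDAR-style detector. A minimal back-projection sketch (assumes a plain pinhole model with known intrinsics; the stereo network and depth-propagation algorithm from the paper are not shown):

import numpy as np

def depth_to_points(depth, K):
    """Back-project a dense depth map (H, W) into camera-frame 3D points.

    depth: per-pixel metric depth along the camera z-axis.
    K:     (3, 3) intrinsics [[fx, 0, cx], [0, fy, cy], [0, 0, 1]].
    """
    depth = np.asarray(depth, dtype=np.float64)
    h, w = depth.shape
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                  # drop invalid zero-depth pixels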
SECOND: Sparsely Embedded Convolutional Detection
TLDR: An improved sparse convolution method for voxel-based 3D convolutional networks is investigated, which significantly increases the speed of both training and inference, and a new form of angle-loss regression is introduced to improve orientation estimation.
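Sparse 3D convolutions operate only on occupied voxels, stored as (coordinate, feature) pairs rather than a dense grid; libraries such as spconv implement the convolutions themselves. The sketch below only covers the preprocessing step of building that sparse representation (the voxel size and mean-pooling of point features are assumptions, not the paper's exact settings):

import numpy as np

def to_sparse_voxels(points, voxel_size=0.1):
    """Build a sparse (coords, features) voxel representation (sketch).

    points: (N, 4) LiDAR points as (x, y, z, intensity).
    Returns integer voxel coordinates (M, 3) and per-voxel mean features (M, 4).
    """
    pts = np.asarray(points, dtype=np.float32)
    coords = np.floor(pts[:, :3] / voxel_size).astype(np.int64)
    uniq, inv = np.unique(coords, axis=0, return_inverse=True)

    counts = np.bincount(inv, minlength=len(uniq)).astype(np.float32)
    feats = np.zeros((len(uniq), pts.shape[1]), dtype=np.float32)
    for c in range(pts.shape[1]):
        # Sum each feature channel per voxel, then average.
        feats[:, c] = np.bincount(inv, weights=pts[:, c], minlength=len(uniq))
    feats /= counts[:, None]
    return uniq, feats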
CNN for IMU assisted odometry estimation using velodyne LiDAR
TLDR: This work introduces a novel method for odometry estimation from 3D LiDAR scans using convolutional neural networks, and proposes alternative CNNs trained to predict rotational motion parameters, achieving results comparable with the state-of-the-art method LOAM.
FlowNet3D: Learning Scene Flow in 3D Point Clouds
  • Xingyu Liu, C. Qi, L. Guibas
  • Computer Science
  • 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2019
TLDR: This work proposes a novel deep neural network, FlowNet3D, that learns scene flow from point clouds in an end-to-end fashion and generalizes to real scans, outperforming various baselines and showing results competitive with the prior art.
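Scene-flow predictions of this kind are usually scored with the per-point 3D end-point error (EPE) between estimated and ground-truth flow vectors. A short sketch of that metric (the accuracy threshold below is an assumed value, not the paper's exact protocol):

import numpy as np

def end_point_error(flow_pred, flow_gt, acc_thresh=0.1):
    """Mean 3D end-point error plus a simple thresholded accuracy (sketch).

    flow_pred, flow_gt: (N, 3) per-point displacement vectors in meters.
    """
    err = np.linalg.norm(np.asarray(flow_pred) - np.asarray(flow_gt), axis=1)
    return float(err.mean()), float((err < acc_thresh).mean())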
PointPillars: Fast Encoders for Object Detection From Point Clouds
TLDR: Proposes PointPillars, a point cloud encoder paired with a lean downstream network; benchmarks suggest that PointPillars is an appropriate encoding for object detection in point clouds.
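PointPillars discretizes only the ground plane, grouping points into vertical columns ("pillars") so that a fast 2D CNN backbone can follow. A hedged sketch of the grouping step (the grid resolution and per-pillar point cap are simplified assumptions, not the paper's exact encoder):

import numpy as np

def group_into_pillars(points, pillar_size=0.16, max_points=32):
    """Assign LiDAR points to (x, y) pillars and pad each pillar (sketch).

    points: (N, 4) as (x, y, z, intensity).
    Returns pillar grid coordinates (P, 2) and a padded tensor
    (P, max_points, 4) ready for a per-pillar PointNet-style encoder.
    """
    pts = np.asarray(points, dtype=np.float32)
    coords = np.floor(pts[:, :2] / pillar_size).astype(np.int64)
    uniq, inv = np.unique(coords, axis=0, return_inverse=True)

    pillars = np.zeros((len(uniq), max_points, pts.shape[1]), dtype=np.float32)
    counts = np.zeros(len(uniq), dtype=np.int64)
    for i, p in zip(inv, pts):
        if counts[i] < max_points:             # drop points beyond the cap
            pillars[i, counts[i]] = p
            counts[i] += 1
    return uniq, pillars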
VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection
  • Yin Zhou, Oncel Tuzel
  • Computer Science
  • 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
  • 2018
TLDR: VoxelNet is proposed, a generic 3D detection network that unifies feature extraction and bounding-box prediction into a single-stage, end-to-end trainable deep network and learns an effective discriminative representation of objects with various geometries, leading to encouraging results in 3D detection of pedestrians and cyclists.
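VoxelNet's voxel feature encoding augments each point in a voxel with its offset from the voxel's centroid before a small PointNet-style network aggregates a per-voxel feature. A sketch of that augmentation step only (NumPy, with the learned layers omitted; details beyond the centroid offsets are assumptions):

import numpy as np

def vfe_augment(voxel_points):
    """Augment one voxel's points with centroid offsets (sketch).

    voxel_points: (K, 4) points (x, y, z, intensity) that fall in one voxel.
    Returns (K, 7): the original features plus (dx, dy, dz) to the centroid,
    the kind of input a voxel-feature-encoding layer consumes.
    """
    pts = np.asarray(voxel_points, dtype=np.float32)
    centroid = pts[:, :3].mean(axis=0, keepdims=True)
    return np.concatenate([pts, pts[:, :3] - centroid], axis=1)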