• Corpus ID: 220347528

ODE-CNN: Omnidirectional Depth Extension Networks

Xinjing Cheng, Peng Wang, Yanqi Zhou, Chenye Guan, Ruigang Yang
Omnidirectional 360° cameras are proliferating rapidly on autonomous robots, since they significantly enhance perception by widening the field of view (FoV). However, corresponding 360° depth sensors, which are equally critical for the perception system, remain difficult or expensive to obtain. In this paper, we propose a low-cost 3D sensing system that combines an omnidirectional camera with a calibrated projective depth camera, where the depth from the limited FoV can be automatically…
UniFuse: Unidirectional Fusion for 360° Panorama Depth Estimation
A new framework is introduced that fuses features from the two projections, unidirectionally feeding the cubemap features into the equirectangular features only at the decoding stage; this is much more efficient, and a more effective fusion module is also designed for the scheme.
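The one-way fusion idea can be sketched minimally (a hypothetical form for illustration, not the paper's exact module): cubemap features, assumed already resampled onto the equirectangular grid, are gated and added into the equirectangular decoder features, with no information flowing back the other way.

```python
import numpy as np

def fuse_cube_to_equi(equi_feat, cube_feat, weight, bias):
    """One-way fusion sketch: a sigmoid gate computed from the cubemap
    features decides how much of them to add into the equirectangular
    branch. Shapes: (C, H, W) feature maps; (C,) per-channel gate
    parameters `weight` and `bias` (hypothetical names)."""
    gate = 1.0 / (1.0 + np.exp(-(weight[:, None, None] * cube_feat
                                 + bias[:, None, None])))
    # only the equirectangular branch is updated; the cubemap branch is read-only
    return equi_feat + gate * cube_feat

equi = np.ones((2, 4, 4))
cube = np.zeros((2, 4, 4))
out = fuse_cube_to_equi(equi, cube, np.ones(2), np.zeros(2))
```

With all-zero cubemap features the gated addition contributes nothing, so the equirectangular features pass through unchanged.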
LEAD: LiDAR Extender for Autonomous Driving
This paper proposes a multi-stage propagation strategy based on depth distributions and uncertainty maps, which shows effective propagation ability, and uses a high-precision laser scanner to generate a ground-truth dataset for validating the quality of the LiDAR extension.


SphereNet: Learning Spherical Representations for Detection and Classification in Omnidirectional Images
This work presents SphereNet, a novel deep learning framework that encodes invariance against such distortions explicitly into convolutional neural networks, enabling the transfer of existing perspective convolutional neural network models to the omnidirectional case.
Deep convolutional neural fields for depth estimation from a single image
A deep structured learning scheme is presented which learns the unary and pairwise potentials of a continuous CRF in a unified deep CNN framework and can be used for depth estimation of general scenes with no geometric priors nor any extra information injected.
Learning Depth from Single Monocular Images Using Deep Convolutional Neural Fields
A deep convolutional neural field model for estimating depth from single monocular images is presented, aiming to jointly explore the capacity of deep CNNs and continuous CRFs, together with a deep structured learning scheme that learns the unary and pairwise potentials of the continuous CRF in a unified deep CNN framework.
OmniDepth: Dense Depth Estimation for Indoors Spherical Panoramas
It is shown that monocular depth estimation models trained on traditional images produce sub-optimal results on omnidirectional images, showcasing the need for training directly on high quality datasets with ground truth depth annotations, which however, are hard to acquire.
Learning Spherical Convolution for Fast Features from 360° Imagery
This work proposes to learn a spherical convolutional network that translates a planar CNN to process 360° imagery directly in its equirectangular projection, and yields the most accurate results while saving orders of magnitude in computation versus the existing exact reprojection solution.
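The core geometric fact these spherical-convolution papers exploit is that in an equirectangular projection, horizontal pixel spacing shrinks by cos(latitude), so a fixed planar kernel covers a varying solid angle. A minimal sketch of distortion-adjusted kernel sampling (my own simplified construction, not any specific paper's layer) stretches the kernel's horizontal offsets by 1/cos(latitude) per image row:

```python
import numpy as np

def equirect_kernel_offsets(height, width, ksize=3):
    """For each row (latitude) of an equirectangular image, stretch the
    horizontal offsets of a ksize x ksize kernel by 1/cos(latitude), so the
    kernel covers a roughly constant solid angle on the sphere.
    Returns an array of shape (height, ksize, ksize, 2) holding (dy, dx)."""
    half = ksize // 2
    dy, dx = np.meshgrid(np.arange(-half, half + 1),
                         np.arange(-half, half + 1), indexing="ij")
    # latitude of each row centre, in (-pi/2, pi/2)
    lat = (0.5 - (np.arange(height) + 0.5) / height) * np.pi
    stretch = 1.0 / np.maximum(np.cos(lat), 1e-3)   # avoid blow-up at the poles
    offsets = np.zeros((height, ksize, ksize, 2))
    offsets[..., 0] = dy                              # vertical spacing unchanged
    offsets[..., 1] = dx * stretch[:, None, None]     # widen toward the poles
    return offsets

offs = equirect_kernel_offsets(180, 360)
```

Near the equator the offsets are essentially the regular 3×3 grid; near the poles the horizontal reach grows sharply, which is exactly the distortion a naive planar CNN ignores.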
Graph-Based Classification of Omnidirectional Images
  • P. Frossard, R. Khasanova
  • Mathematics, Computer Science
    2017 IEEE International Conference on Computer Vision Workshops (ICCVW)
  • 2017
This paper proposes a principled way of graph construction such that convolutional filters respond similarly to the same pattern at different positions of the image regardless of lens distortions, and shows that the proposed method outperforms current techniques for the omnidirectional image classification problem.
Distortion-Aware Convolutional Filters for Dense Prediction in Panoramic Images
This work proposes a learning approach for panoramic depth map estimation from a single image, thanks to a specifically developed distortion-aware deformable convolution filter; the network can be trained on conventional perspective images and then used to regress depth for panoramic images, thus bypassing the effort needed to create an annotated panoramic training dataset.
Depth Estimation via Affinity Learned with Convolutional Spatial Propagation Network
This paper proposes a simple yet effective convolutional spatial propagation network (CSPN) to learn the affinity matrix for depth prediction, and adopts an efficient linear propagation model.
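The linear propagation model behind CSPN can be sketched in a few lines (a simplified toy version under assumed shapes, with toroidal edge handling via `np.roll` for brevity): each pixel's depth is repeatedly replaced by a weighted combination of its eight neighbours' depths, with the weights normalised from a learned affinity map and the residual weight kept on the pixel itself for stability.

```python
import numpy as np

def cspn_step(depth, affinity):
    """One linear propagation step in the style of CSPN.
    depth: (H, W) current depth map; affinity: (H, W, 8) learned,
    possibly signed, neighbour affinities (hypothetical layout:
    8 channels = the 8-connected neighbourhood)."""
    denom = np.maximum(np.abs(affinity).sum(-1, keepdims=True), 1e-8)
    kappa = affinity / denom              # normalised neighbour weights
    center = 1.0 - kappa.sum(-1)          # residual weight stays on the pixel
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
              (0, 1), (1, -1), (1, 0), (1, 1)]
    out = center * depth
    for k, (dy, dx) in enumerate(shifts):
        # np.roll wraps at the border; a real layer would pad instead
        out += kappa[..., k] * np.roll(depth, (dy, dx), axis=(0, 1))
    return out

depth = np.full((4, 4), 2.0)
aff = np.ones((4, 4, 8))
out = cspn_step(depth, aff)
```

A useful sanity check on this formulation: because the weights are normalised to sum to one (neighbours plus centre), a constant depth map is a fixed point of the propagation.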
LEGO: Learning Edge with Geometry all at Once by Watching Videos
This paper introduces a "3D as-smooth-as-possible (3D-ASAP)" prior inside the pipeline, which enables joint estimation of edges and 3D scene, yielding results with significant improvement in accuracy for fine detailed structures.
Self-Supervised Sparse-to-Dense: Self-Supervised Depth Completion from LiDAR and Monocular Camera
A deep regression model is developed to learn a direct mapping from sparse depth (and color image) input to dense depth prediction, and a self-supervised training framework is proposed that requires only sequences of color and sparse depth images, without the need for dense depth labels.