Corpus ID: 229923279

Unsupervised Monocular Depth Reconstruction of Non-Rigid Scenes

@article{Takmaz2020UnsupervisedMD,
  title={Unsupervised Monocular Depth Reconstruction of Non-Rigid Scenes},
  author={Ay{\c{c}}a Takmaz and Danda Pani Paudel and Thomas Probst and Ajad Chhatkuli and Martin R. Oswald and Luc Van Gool},
  journal={ArXiv},
  year={2020},
  volume={abs/2012.15680}
}
Monocular depth reconstruction of complex and dynamic scenes is a highly challenging problem. While learning-based methods have offered promising results for rigid scenes, even in the unsupervised case, there is little to no literature addressing the same for dynamic and deformable scenes. In this work, we present an unsupervised monocular framework for dense depth estimation of dynamic scenes, which jointly reconstructs rigid and non-rigid parts without explicitly modelling the camera…

References

Showing 1–10 of 87 references
Unsupervised Monocular Depth Learning in Dynamic Scenes
Shows that this apparently heavily underdetermined problem can be regularized by imposing the following prior knowledge about 3D translation fields: they are sparse, since most of the scene is static, and they tend to be constant for rigidly moving objects.
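For illustration, the sparsity prior on dense 3D translation fields summarized above could be encoded as a plain L1 penalty on per-pixel translation magnitudes. This is a hedged sketch under that assumption, not the paper's actual regularizer, which may use a different penalty:

```python
import numpy as np

def translation_sparsity_loss(trans_field):
    """Average per-pixel L2 norm of a dense 3D translation field (H, W, 3).

    An L1 penalty on translation magnitudes: a mostly static scene
    (near-zero translations) incurs near-zero cost, while a rigidly moving
    object can keep a constant, nonzero translation over its support.
    """
    magnitudes = np.linalg.norm(trans_field, axis=-1)  # shape (H, W)
    return float(np.mean(magnitudes))
```

A fully static field yields zero loss, so the penalty pushes translations to zero everywhere except where image evidence supports object motion.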
Video Pop-up: Monocular 3D Reconstruction of Dynamic Scenes
An unsupervised approach to the challenging problem of simultaneously segmenting the scene into its constituent objects and reconstructing a 3D model of the scene, with the motion segmentation component evaluated on the Berkeley Motion Segmentation Dataset.
Unsupervised Learning of Depth and Ego-Motion from Monocular Video Using 3D Geometric Constraints
The main contribution is to explicitly consider the inferred 3D geometry of the whole scene and enforce consistency of the estimated 3D point clouds and ego-motion across consecutive frames, outperforming the state of the art in both depth and ego-motion estimation.
Unsupervised Monocular Depth Estimation with Left-Right Consistency
Proposes a novel training objective that enables a convolutional neural network to learn single-image depth estimation despite the absence of ground-truth depth data, producing state-of-the-art results for monocular depth estimation on the KITTI driving dataset.
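The left-right consistency idea can be sketched as an L1 penalty between the left disparity map and the right disparity map warped into the left view. This is a simplified NumPy illustration, not the paper's implementation, which combines this term with photometric and smoothness losses:

```python
import numpy as np

def lr_consistency_loss(disp_left, disp_right):
    """L1 left-right disparity consistency for a rectified stereo pair.

    The right disparity map is warped into the left view by sampling it at
    x - d_l(x) (linear interpolation, border-clamped). For a geometrically
    consistent pair, the warped map matches the left disparities.
    """
    h, w = disp_left.shape
    cols = np.tile(np.arange(w, dtype=np.float64), (h, 1))
    rows = np.repeat(np.arange(h), w).reshape(h, w)
    sample = np.clip(cols - disp_left, 0, w - 1)  # sampling positions
    x0 = np.floor(sample).astype(int)
    x1 = np.clip(x0 + 1, 0, w - 1)
    frac = sample - x0
    warped = (1 - frac) * disp_right[rows, x0] + frac * disp_right[rows, x1]
    return float(np.mean(np.abs(disp_left - warped)))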
3D Scene Flow Estimation with a Piecewise Rigid Scene Model
Proposes to represent the dynamic scene as a collection of rigidly moving planes into which the input images are segmented, and shows that a view-consistent multi-frame scheme significantly improves accuracy, especially in the presence of occlusions, and increases robustness against adverse imaging conditions.
GeoNet: Unsupervised Learning of Dense Depth, Optical Flow and Camera Pose
Proposes an adaptive geometric consistency loss to increase robustness towards outliers and non-Lambertian regions, which resolves occlusions and texture ambiguities effectively; achieves state-of-the-art results in all three tasks, performing better than previous unsupervised methods and comparably with supervised ones.
Dense Depth Estimation of a Complex Dynamic Scene without Explicit 3D Motion Estimation
Shows that, given per-pixel optical flow correspondences between two consecutive frames and a sparse depth prior for the reference frame, the dense depth map for the successive frames can be recovered effectively without solving for 3D motion parameters.
Digging Into Self-Supervised Monocular Depth Estimation
Shows that a surprisingly simple model and associated design choices lead to superior predictions, together yielding quantitatively and qualitatively improved depth maps compared to competing self-supervised methods.
Semi-Supervised Deep Learning for Monocular Depth Map Prediction
Proposes a novel semi-supervised approach to depth map prediction from monocular images that uses sparse ground-truth depth for supervised learning, while also enforcing the network to produce photoconsistent dense depth maps in a stereo setup via a direct image alignment loss.
3D Packing for Self-Supervised Monocular Depth Estimation
Proposes a novel self-supervised monocular depth estimation method combining geometry with a new deep network, PackNet, learned only from unlabeled monocular videos, which outperforms other self-, semi-, and fully supervised methods on the KITTI benchmark.