DCL: Differential Contrastive Learning for Geometry-Aware Depth Synthesis

Yanchao Yang, Yuefan Shen, Youyi Zheng, C. Karen Liu, Leonidas J. Guibas
IEEE Robotics and Automation Letters
We describe a method for unpaired realistic depth synthesis that learns diverse variations from real-world depth scans and ensures geometric consistency between the synthetic and the synthesized depth. The synthesized realistic depth can then be used to train task-specific networks, facilitating label transfer from the synthetic domain. Unlike existing image synthesis pipelines, where geometry is mostly ignored, we treat the geometry carried by the depth scans based on its own existence. We…
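The geometric-consistency idea in the abstract can be illustrated with a generic InfoNCE-style contrastive loss over patch features: corresponding patches in the synthetic and synthesized depth are pulled together while non-corresponding patches are pushed apart. The sketch below is a minimal NumPy illustration under that reading, not the paper's actual DCL objective; the function name `info_nce`, the feature shapes, and the temperature value are assumptions made for this example.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.07):
    """Generic InfoNCE loss: pull each anchor toward its positive
    (e.g. the geometrically corresponding patch in the synthesized
    depth) and away from the negatives. Illustrative only; the
    paper's DCL objective differs in how pairs are formed."""
    def unit(x):
        return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-8)
    a, p, n = unit(anchor), unit(positive), unit(negatives)
    pos = np.exp(np.sum(a * p, axis=-1) / tau)   # (B,) similarity to positive
    neg = np.exp(a @ n.T / tau).sum(axis=-1)     # (B,) summed similarity to negatives
    return float(np.mean(-np.log(pos / (pos + neg))))

rng = np.random.default_rng(0)
feat = rng.normal(size=(4, 16))                  # 4 anchor patch features
loss_aligned = info_nce(feat, feat, rng.normal(size=(8, 16)))
loss_random = info_nce(feat, rng.normal(size=(4, 16)),
                       rng.normal(size=(8, 16)))
```

With the positive equal to the anchor (perfect correspondence) the loss is close to zero, while random positives yield a noticeably larger loss, which is the behavior a geometry-aware consistency term relies on.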
1 Citation

Domain Adaptation on Point Clouds via Geometry-Aware Implicits

This work proposes a simple yet effective method for unsupervised domain adaptation on point clouds by employing a self-supervised task of learning geometry-aware implicits, which plays two critical roles in one shot: first, the geometric information in the point clouds is preserved through the implicit representations for downstream tasks, and second, the domain-specific variations can be effectively learned away in the implicit space.
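The "implicit space" mentioned above can be grounded with a toy example: the simplest geometry-aware implicit of a point cloud is its unsigned distance field. The brute-force version below is an illustrative stand-in for the learned implicit representations described in the paper; the name `udf` and the sample data are invented for this sketch.

```python
import numpy as np

def udf(query, points):
    """Unsigned distance field: distance from each query location to
    the nearest point in the cloud. A learned network would regress
    this value; here it is computed exactly by brute force."""
    d = np.linalg.norm(query[:, None, :] - points[None, :, :], axis=-1)
    return d.min(axis=1)

cloud = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])   # toy point cloud
q = np.array([[0.0, 0.0, 0.0],                          # on the surface
              [0.5, 0.0, 0.0],                          # midway between points
              [2.0, 0.0, 0.0]])                         # outside the cloud
vals = udf(q, cloud)
```

Because the field depends only on the underlying geometry, domain-specific scan artifacts do not survive in it, which is the intuition behind learning domain variations away in the implicit space.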



References

T2Net: Synthetic-to-Realistic Translation for Solving Single-Image Depth Estimation Tasks

A framework that comprises an image translation network for enhancing realism of input images, followed by a depth prediction network that can be trained end-to-end, leading to good results, even surpassing early deep-learning methods that use real paired data.

Scribbler: Controlling Deep Image Synthesis with Sketch and Color

A deep adversarial image synthesis architecture conditioned on sketched boundaries and sparse color strokes is proposed to generate realistic cars, bedrooms, or faces; the resulting sketch-based image synthesis system allows users to scribble over the sketch to indicate the preferred color of objects.

Coupled Real-Synthetic Domain Adaptation for Real-World Deep Depth Enhancement

A coupled real-synthetic domain adaptation method is proposed, which enables domain transfer between high-quality depth simulators and real depth camera information for super-resolution depth recovery.

DDRNet: Depth Map Denoising and Refinement for Consumer Depth Cameras Using Cascaded CNNs

A cascaded Depth Denoising and Refinement Network (DDRNet) tackles noise in consumer depth cameras by leveraging multi-frame fused geometry and the accompanying high-quality color image through a joint training strategy, achieving superior performance over state-of-the-art techniques.

Real-Time Monocular Depth Estimation Using Synthetic Data with Domain Adaptation via Image Style Transfer

This work takes advantage of style transfer and adversarial training to predict pixel perfect depth from a single real-world color image based on training over a large corpus of synthetic environment data.

What Synthesis Is Missing: Depth Adaptation Integrated With Weak Supervision for Indoor Scene Parsing

This work addresses the goal of exploiting synthetic data where feasible and integrating weak supervision where necessary by utilizing depth as transfer domain because its synthetic-to-real discrepancy is much lower than for color.

Channel Attention Based Iterative Residual Learning for Depth Map Super-Resolution

This paper proposes a new framework for real-world DSR (depth map super-resolution) that consists of four modules, including an iterative residual learning module with deep supervision that learns effective high-frequency components of depth maps in a coarse-to-fine manner, and a depth refinement module that improves the depth map via TGV regularization and an input loss.

Self-Supervised Deep Depth Denoising

A fully convolutional deep autoencoder that learns to denoise depth maps without ground-truth data, demonstrating the effectiveness of the proposed self-supervised denoising approach on established 3D reconstruction applications.

A Supervised Approach to Predicting Noise in Depth Images

This work uses a convolutional neural network to predict which pixels of a simulated noise-free depth image will not have returns (no-depth-return pixels, or NDP), and shows that the popular ICP algorithm for object pose estimation fails more realistically on CNN-corrupted simulated depth images than on uncorrupted images or on images from unsupervised domain adaptation baselines.
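The corruption step such a pipeline applies to clean simulated depth can be sketched as follows, assuming a per-pixel no-return probability map has already been predicted. Here that map is hand-crafted; the paper uses a CNN, and the names `corrupt_depth` and `ndp_prob` and the threshold value are illustrative.

```python
import numpy as np

def corrupt_depth(depth, ndp_prob, threshold=0.5, fill=0.0):
    """Zero out pixels predicted to have no depth return (NDP).
    `ndp_prob` stands in for the per-pixel output of a learned
    predictor; the fixed thresholding rule is illustrative."""
    out = depth.copy()
    out[ndp_prob >= threshold] = fill   # simulate sensor dropout at NDP pixels
    return out

depth = np.full((4, 4), 2.0)    # clean simulated depth, in meters
prob = np.zeros((4, 4))
prob[0, :] = 0.9                # top row predicted as no-return
corrupted = corrupt_depth(depth, prob)
```

Pose estimators evaluated on `corrupted` rather than `depth` then see dropout patterns closer to those of a real depth camera, which is the point of the comparison in the paper.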