• Corpus ID: 32000002

Self-Supervised Intrinsic Image Decomposition

@inproceedings{Janner2017SelfSupervisedII,
  title={Self-Supervised Intrinsic Image Decomposition},
  author={Michael Janner and Jiajun Wu and Tejas D. Kulkarni and Ilker Yildirim and Joshua B. Tenenbaum},
  booktitle={NIPS},
  year={2017}
}
Intrinsic decomposition from a single image is a highly challenging task, due to its inherent ambiguity and the scarcity of training data. In contrast to traditional fully supervised learning approaches, in this paper we propose learning intrinsic image decomposition by explaining the input image. Our model, the Rendered Intrinsics Network (RIN), joins together an image decomposition pipeline, which predicts reflectance, shape, and lighting conditions given a single image, with a recombination… 
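The self-supervised setup described in the abstract can be made concrete with a small sketch: a decomposition network predicts intrinsic layers from a single image, a recombination step re-renders the input from those layers, and the reconstruction error is the only training signal. The PyTorch sketch below is illustrative only and simplifies RIN considerably: it predicts just reflectance and shading and recombines them by pixel-wise multiplication, whereas the paper's model predicts reflectance, shape, and lighting and recombines them with a learned shading renderer; all module and variable names here are assumptions, not the authors' code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Decomposer(nn.Module):
    """Tiny encoder-decoder mapping an RGB image to reflectance + shading."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 4, 4, stride=2, padding=1),  # 3 reflectance + 1 shading channels
        )

    def forward(self, image):
        out = self.decoder(self.encoder(image))
        reflectance = torch.sigmoid(out[:, :3])  # per-pixel albedo in [0, 1]
        shading = torch.sigmoid(out[:, 3:])      # grayscale shading in [0, 1]
        return reflectance, shading

def reconstruct(reflectance, shading):
    """Recombination step: image is approximated as reflectance * shading."""
    return reflectance * shading

model = Decomposer()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One self-supervised step on a batch of unlabeled images (random stand-in data).
images = torch.rand(8, 3, 64, 64)
reflectance, shading = model(images)
loss = F.mse_loss(reconstruct(reflectance, shading), images)
optimizer.zero_grad()
loss.backward()
optimizer.step()

Because the loss compares the recombined layers only against the input image itself, no ground-truth reflectance or shading is ever needed, which is the core of the self-supervised formulation.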

Citations

Unsupervised Learning for Intrinsic Image Decomposition From a Single Image
TLDR
This paper proposes a novel unsupervised intrinsic image decomposition framework, which relies on neither labeled training data nor hand-crafted priors, and directly learns the latent features of reflectance and shading from unsupervised and uncorrelated data.
Learning Intrinsic Image Decomposition from Watching the World
  • Zhengqi Li, Noah Snavely
  • Computer Science
    2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
  • 2018
TLDR
This paper explores a different approach to learning intrinsic images: observing image sequences over time depicting the same scene under changing illumination, and learning single-view decompositions that are consistent with these changes.
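As a rough illustration of the training signal this suggests, the hypothetical snippet below combines a per-frame reconstruction term with a term encouraging two frames of the same scene, captured under different illumination, to share one reflectance prediction. The decomposer interface (a `model` returning reflectance and shading, like the earlier sketch), the loss choices, and the weighting are all assumptions for the sketch, not the method of Li and Snavely.

import torch.nn.functional as F

def consistency_loss(model, frame_a, frame_b, weight=1.0):
    # Decompose two frames of the same scene under different lighting.
    refl_a, shad_a = model(frame_a)
    refl_b, shad_b = model(frame_b)
    # Each frame should be explained by its own reflectance * shading.
    recon = F.mse_loss(refl_a * shad_a, frame_a) + F.mse_loss(refl_b * shad_b, frame_b)
    # Reflectance is illumination-invariant, so predictions should agree.
    same_reflectance = F.l1_loss(refl_a, refl_b)
    return recon + weight * same_reflectance
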
Single Image Intrinsic Decomposition Without a Single Intrinsic Image
TLDR
This paper presents a two-stream convolutional neural network framework that learns the decomposition effectively in the absence of any ground-truth intrinsic images and can be easily extended to a (semi-)supervised setup.
Leveraging Multi-view Image Sets for Unsupervised Intrinsic Image Decomposition and Highlight Separation
TLDR
An unsupervised approach for factorizing object appearance into highlight, shading, and albedo layers, trained on multi-view real images, is presented; a proposed image representation based on local color distributions makes training insensitive to local misalignments across the multi-view images.
Deep Unsupervised Intrinsic Image Decomposition by Siamese Training
TLDR
This work proposes an end-to-end deep learning solution that can be trained without any ground-truth supervision, as such supervision is hard to obtain.
Separate in Latent Space: Unsupervised Single Image Layer Separation
TLDR
Experimental results demonstrate that it outperforms existing unsupervised methods on both synthetic and real-world tasks and can solve a more challenging multi-layer separation task.
Intrinsic Decomposition by Learning from Varying Lighting Conditions
TLDR
This paper tackles the problem of estimating the diffuse reflectance from a sequence of images captured from a fixed viewpoint under various illuminations and proposes a deep learning approach to avoid heuristics and strong assumptions on the reflectance prior.
PIE-Net: Photometric Invariant Edge Guided Network for Intrinsic Image Decomposition
TLDR
It is shown that the proposed end-to-end edge-driven hybrid CNN approach for intrinsic image decomposition obtains state-of-the-art performance and generalises well to real-world images.
An Optical Physics Inspired CNN Approach for Intrinsic Image Decomposition
TLDR
Through experimental results, it is shown that the proposed methodology outperforms existing deep-learning-based intrinsic image decomposition (IID) techniques and that the derived parameters significantly improve its efficacy.
...

References

SHOWING 1-10 OF 32 REFERENCES
Direct Intrinsics: Learning Albedo-Shading Decomposition by Convolutional Regression
TLDR
The strategy is to learn a convolutional neural network that directly predicts output albedo and shading channels from an input RGB image patch; this approach outperforms all prior work, including methods that rely on RGB+Depth input.
Deep Reflectance Maps
TLDR
A convolutional neural architecture to estimate reflectance maps of specular materials in natural lighting conditions is proposed in an end-to-end learning formulation that directly predicts a reflectance map from the image itself.
Intrinsic Images in the Wild
TLDR
This paper introduces Intrinsic Images in the Wild, a large-scale, public dataset for evaluating intrinsic image decompositions of indoor scenes, and develops a dense CRF-based intrinsic image algorithm for images in the wild that outperforms a range of state-of-the-art intrinsic image algorithms.
Learning Non-Lambertian Object Intrinsics Across ShapeNet Categories
TLDR
This work focuses on the non-Lambertian object-level intrinsic problem of recovering diffuse albedo, shading, and specular highlights from a single image of an object, and shows that feature learning at the encoder stage is more crucial for developing a universal representation across categories.
Deriving intrinsic images from image sequences
  • Yair Weiss
  • Mathematics
    Proceedings Eighth IEEE International Conference on Computer Vision. ICCV 2001
  • 2001
TLDR
Following recent work on the statistics of natural images, a prior is used that assumes illumination images will give rise to sparse filter outputs; this leads to a simple, novel algorithm for recovering reflectance images.
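That summary compresses a concrete algorithm: filter the log of every frame with derivative filters, take the per-pixel median of the filter outputs over time (illumination derivatives are assumed sparse, so the median is dominated by reflectance edges), then invert the filtering to recover a log-reflectance image. The NumPy sketch below is one simplified reading of that recipe, using circular (FFT) convolution for both the filtering and the pseudo-inverse reconstruction; the function name and boundary handling are assumptions, not Weiss's implementation.

import numpy as np

def weiss_reflectance(log_images):
    # log_images: (T, H, W) stack of log-intensity images of one scene
    # captured from a fixed viewpoint under varying illumination.
    T, H, W = log_images.shape

    # Horizontal and vertical difference filters, embedded in full-size
    # arrays so filtering reduces to a product in the Fourier domain.
    fx = np.zeros((H, W)); fx[0, 0] = -1; fx[0, 1] = 1
    fy = np.zeros((H, W)); fy[0, 0] = -1; fy[1, 0] = 1
    Fx, Fy = np.fft.fft2(fx), np.fft.fft2(fy)

    # Filter every frame, then take the median over time: illumination
    # derivatives are assumed sparse, so the median keeps the static
    # reflectance derivatives.
    spectra = np.fft.fft2(log_images, axes=(1, 2))
    rx = np.median(np.real(np.fft.ifft2(spectra * Fx, axes=(1, 2))), axis=0)
    ry = np.median(np.real(np.fft.ifft2(spectra * Fy, axes=(1, 2))), axis=0)

    # Pseudo-inverse of the filtering, solved in the Fourier domain.
    numerator = np.conj(Fx) * np.fft.fft2(rx) + np.conj(Fy) * np.fft.fft2(ry)
    denominator = np.abs(Fx) ** 2 + np.abs(Fy) ** 2
    denominator[0, 0] = 1.0  # the mean (DC term) is unconstrained; pin it
    return np.real(np.fft.ifft2(numerator / denominator))

# Toy usage: the log-reflectance is recovered only up to an additive constant,
# i.e. the reflectance image is recovered up to a global scale.
frames = np.log(np.random.rand(5, 64, 64) + 1e-3)
log_reflectance = weiss_reflectance(frames)
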
Deep Lambertian Networks
TLDR
A multilayer generative model where the latent variables include the albedo, surface normals, and the light source is introduced, and it is demonstrated that this model is able to generalize as well as improve over standard baselines in one-shot face recognition.
Learning lightness from human judgement on relative reflectance
TLDR
This work develops a new approach to inferring lightness, the perceived reflectance of surfaces, from a single image, which incorporates multiple shading/reflectance priors and simultaneous reasoning between pairs of pixels in a dense conditional random field formulation.
Ground truth dataset and baseline evaluations for intrinsic image algorithms
TLDR
This work presents a ground-truth dataset of intrinsic image decompositions for a variety of real-world objects, and uses this dataset to quantitatively compare several existing algorithms.
Category-specific object reconstruction from a single image
TLDR
An automated pipeline with pixels as inputs and 3D surfaces of various rigid categories as outputs in images of realistic scenes is introduced; it can be driven by noisy automatic object segmentations and is complemented with a bottom-up module for recovering high-frequency shape details.
Shape, Illumination, and Reflectance from Shading
TLDR
The technique can be viewed as a superset of several classic computer vision problems (shape-from-shading, intrinsic images, color constancy, illumination estimation, etc.) and outperforms all previous solutions to those constituent problems.
...