A Lightweight Approach for On-the-Fly Reflectance Estimation

  • Kihwan Kim, Jinwei Gu, Stephen Tyree, Pavlo Molchanov, Matthias Nießner, Jan Kautz
  • Published 19 May 2017
  • 2017 IEEE International Conference on Computer Vision (ICCV)
Estimating surface reflectance (BRDF) is one key component for complete 3D scene capture, with wide applications in virtual reality, augmented reality, and human-computer interaction. Prior work is either limited to controlled environments (e.g., gonioreflectometers, light stages, or multi-camera domes), or requires the joint optimization of shape, illumination, and reflectance, which is often computationally too expensive (e.g., hours of running time) for real-time applications. Moreover, most… 
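To make the quantity being estimated concrete: a BRDF maps an incoming light direction and an outgoing view direction to a reflectance ratio at a surface point. The sketch below is not the paper's model; it evaluates a common textbook stand-in (Lambertian diffuse plus Blinn-Phong specular), with illustrative parameter names, to show the kind of per-material parameters such methods recover.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def brdf(n, l, v, kd, ks, shininess):
    """Simple isotropic BRDF: Lambertian diffuse + Blinn-Phong specular.

    n, l, v   : unit surface normal, light direction, view direction
    kd, ks    : diffuse and specular albedo (scalar or per RGB channel)
    shininess : specular exponent (higher = sharper highlight)
    """
    h = normalize(l + v)                 # half-vector between light and view
    diffuse = kd / np.pi                 # energy-normalized Lambertian term
    specular = ks * max(np.dot(n, h), 0.0) ** shininess
    return diffuse + specular

# Evaluate for a light and camera placed symmetrically about the normal,
# so the half-vector aligns with the normal and the highlight peaks.
n = np.array([0.0, 0.0, 1.0])
l = normalize(np.array([0.0, 1.0, 1.0]))
v = normalize(np.array([0.0, -1.0, 1.0]))
print(brdf(n, l, v, kd=0.6, ks=0.3, shininess=32.0))
```

Methods like the one above paper's infer parameters such as `kd`, `ks`, and `shininess` (or their analogues in richer models) from images, rather than evaluating them for known values as this sketch does.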


BRDF-Reconstruction in Photogrammetry Studio Setups
This work presents a new formulation, along with a practical solution, that reduces the required constraints to photo-studio-like setups by jointly reconstructing the geometric configuration of the lights together with spatially varying surface reflectance properties and diffuse albedo.
On Joint Estimation of Pose, Geometry and svBRDF From a Handheld Scanner
It is shown that optimizing over the poses is crucial for accurately recovering fine details, and it is demonstrated that the approach naturally results in a semantically meaningful material segmentation.
Neural Inverse Rendering of an Indoor Scene From a Single Image
This work proposes the first learning based approach that jointly estimates albedo, normals, and lighting of an indoor scene from a single image, and uses physically-based rendering to create a large-scale synthetic dataset, named SUNCG-PBR, which is a significant improvement over prior datasets.
Real-Time Multi-Material Reflectance Reconstruction for Large-Scale Scenes Under Uncontrolled Illumination from RGB-D Image Sequences
This work uses a deep-learning-based method to estimate Ward BRDF parameters from observations gathered from individual segmented scene objects, and refines these reflectance parameters to allow for spatial variations across the object surfaces.
Two-Shot Spatially-Varying BRDF and Shape Estimation
This work proposes a novel deep learning architecture with a stage-wise estimation of shape and SVBRDF, and shows that the network trained on a synthetic dataset can generalize well to real-world images.
Inverse Path Tracing for Joint Material and Lighting Estimation
A novel optimization method using a differentiable Monte Carlo renderer that computes derivatives with respect to the estimated unknown illumination and material properties enables joint optimization for physically correct light transport and material models using a tailored stochastic gradient descent.
Light-Field Intrinsic Dataset
This work provides an intrinsic dataset for real-world and synthetic 4D and 3D light fields, which is also applicable to single-image, multi-view stereo, and video settings, and performs qualitative and quantitative comparisons of existing intrinsic decomposition algorithms.
High Dynamic Range SLAM with Map-Aware Exposure Time Control
This work replaces the simplistic pixel-intensity averaging scheme with HDR color fusion rules tailored to the incremental nature of SLAM, together with a noise model suitable for off-the-shelf RGB-D cameras, and reports a set of experiments demonstrating the improved texture quality and the advantages of a custom exposure-time controller that is tightly integrated in the mapping loop.
LIME: Live Intrinsic Material Estimation
This work presents the first end-to-end approach for real-time material estimation for general object shapes with uniform material that only requires a single color image as input and proposes a novel highly efficient perceptual rendering loss that mimics real-world image formation and obtains intermediate results even during run time.
Simultaneous Localization and Appearance Estimation with a Consumer RGB-D Camera
A novel technique for estimating spatially varying isotropic surface reflectance, solely from color and depth images captured with an RGB-D camera under unknown environment illumination, which demonstrates substantially improved quality of estimated appearance on a variety of everyday objects.
Shape and Reflectance Estimation in the Wild
This work directly tackles the problem of joint reflectance and geometry estimation under known but uncontrolled natural illumination, introducing two methods that fully exploit the surface orientation cues embedded in the appearance of the object.
ScanNet: Richly-Annotated 3D Reconstructions of Indoor Scenes
This work introduces ScanNet, an RGB-D video dataset containing 2.5M views in 1513 scenes annotated with 3D camera poses, surface reconstructions, and semantic segmentations, and shows that using this data helps achieve state-of-the-art performance on several 3D scene understanding tasks.
BundleFusion: real-time globally consistent 3D reconstruction using on-the-fly surface re-integration
This work systematically addresses issues with a novel, real-time, end-to-end reconstruction framework, which outperforms state-of-the-art online systems with quality on par to offline methods, but with unprecedented speed and scan completeness.
Radiometric Scene Decomposition: Scene Reflectance, Illumination, and Geometry from RGB-D Images
This work uses RGB-D images to bootstrap geometry recovery and simultaneously recover the complex reflectance and natural illumination while refining the noisy initial geometry and segmenting the scene into different material regions, and handles real-world scenes consisting of multiple objects of unknown materials.
Real-time 3D reconstruction at scale using voxel hashing
An online system for large and fine-scale volumetric reconstruction based on a memory- and speed-efficient data structure that compresses space, and allows for real-time access and updates of implicit surface data, without the need for a regular or hierarchical grid data structure.
KinectFusion: Real-time dense surface mapping and tracking
We present a system for accurate real-time mapping of complex and arbitrary indoor scenes in variable lighting conditions, using only a moving low-cost depth camera and commodity graphics hardware.
Deep Reflectance Maps
A convolutional neural architecture to estimate reflectance maps of specular materials in natural lighting conditions is proposed in an end-to-end learning formulation that directly predicts a reflectance map from the image itself.
Shape, Illumination, and Reflectance from Shading
The technique can be viewed as a superset of several classic computer vision problems (shape-from-shading, intrinsic images, color constancy, illumination estimation, etc.) and outperforms all previous solutions to those constituent problems.
Learning Data-Driven Reflectance Priors for Intrinsic Image Decomposition
A model is trained to predict relative reflectance ordering between image patches from large-scale human annotations, producing a data-driven reflectance prior, and it is shown how to naturally integrate this learned prior into existing energy minimization frameworks for intrinsic image decomposition.