Two-Shot Spatially-Varying BRDF and Shape Estimation

@article{Boss2020TwoShotSB,
  title={Two-Shot Spatially-Varying BRDF and Shape Estimation},
  author={Mark Boss and V. Jampani and Kihwan Kim and Hendrik P. A. Lensch and Jan Kautz},
  journal={2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2020},
  pages={3981-3990}
}
Capturing the shape and spatially-varying appearance (SVBRDF) of an object from images is a challenging task that has applications in both computer vision and graphics. Traditional optimization-based approaches often need a large number of images taken from multiple views in a controlled environment. Newer deep learning-based approaches require only a few input images, but the reconstruction quality is not on par with optimization techniques. We propose a novel deep learning architecture with a…
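
For orientation, the sketch below illustrates the kind of forward model such SVBRDF methods invert: per-pixel Cook-Torrance shading of diffuse, specular, roughness, and normal maps under a collocated flash. This is a minimal, hypothetical example; the BRDF model and map layout are assumptions, not the paper's exact formulation.

```python
# Minimal, hypothetical sketch of the forward model that SVBRDF estimation
# methods invert: per-pixel Cook-Torrance shading of diffuse, specular,
# roughness, and normal maps under a collocated flash (unit intensity, no
# distance falloff). The BRDF model and map layout are assumptions.
import numpy as np

def ggx_ndf(n_dot_h, roughness):
    """GGX normal distribution; alpha = roughness^2 by convention."""
    a2 = roughness ** 4
    d = n_dot_h ** 2 * (a2 - 1.0) + 1.0
    return a2 / np.maximum(np.pi * d ** 2, 1e-8)

def smith_g(n_dot_v, n_dot_l, roughness):
    """Smith shadowing-masking with the Schlick-GGX approximation."""
    k = (roughness + 1.0) ** 2 / 8.0
    g1 = lambda x: x / np.maximum(x * (1.0 - k) + k, 1e-8)
    return g1(n_dot_v) * g1(n_dot_l)

def render_collocated(diffuse, specular, roughness, normals):
    """diffuse/specular/normals: (H, W, 3); roughness: (H, W).
    Light and view share the +z direction, so n.l = n.v and v.h = 1."""
    v = np.array([0.0, 0.0, 1.0])
    n_dot_v = np.clip(normals @ v, 0.0, 1.0)
    d = ggx_ndf(n_dot_v, roughness)                 # n.h = n.v here
    g = smith_g(n_dot_v, n_dot_v, roughness)
    f = specular                                    # Schlick Fresnel at v.h = 1
    spec = (d * g)[..., None] * f / np.maximum(4.0 * n_dot_v ** 2, 1e-8)[..., None]
    return (diffuse / np.pi + spec) * n_dot_v[..., None]
```

A network that predicts these four maps can then be supervised with a rendering loss that compares such re-renderings against the captured photographs, one common strategy in this line of work.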

Citations

Joint SVBRDF Recovery and Synthesis From a Single Image using an Unsupervised Generative Adversarial Network
TLDR
An unsupervised generative adversarial network is presented that addresses SVBRDF capture from a single image and synthesis at the same time, and provides higher-quality rendering results with more detail than previous works.
SVBRDF Recovery From a Single Image With Highlights using a Pretrained Generative Adversarial Network
TLDR
This paper uses an unsupervised generative adversarial network to recover SVBRDF maps from a single input image; it adds the hypothesis that the material is stationary and introduces a new loss function based on Fourier coefficients to enforce this stationarity.
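
The Fourier-based stationarity loss mentioned above lends itself to a short illustration: for a stationary material, the amplitude spectrum of a recovered map should not depend on which crop it is computed from. The PyTorch sketch below is a hedged guess at such a loss; the paper's exact formulation likely differs.

```python
# Hedged sketch of a Fourier-coefficient stationarity loss: penalize
# differences between the amplitude spectra of two random crops of a
# recovered SVBRDF map. Phase is discarded because a stationary texture
# may shift its layout freely; only the spectrum should be shared.
import torch

def stationarity_loss(svbrdf_map, crop=64):
    """svbrdf_map: (B, C, H, W) tensor, e.g. a recovered albedo map."""
    _, _, h, w = svbrdf_map.shape
    spectra = []
    for _ in range(2):
        y = torch.randint(0, h - crop + 1, (1,)).item()
        x = torch.randint(0, w - crop + 1, (1,)).item()
        patch = svbrdf_map[:, :, y:y + crop, x:x + crop]
        spectra.append(torch.fft.rfft2(patch).abs())
    return torch.mean((spectra[0] - spectra[1]) ** 2)
```
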
ABO: Dataset and Benchmarks for Real-World 3D Object Understanding
We introduce Amazon-Berkeley Objects (ABO), a new large-scale dataset of product images and 3D models corresponding to real household objects. We use this realistic, object-centric 3D dataset to…
Highlight-aware two-stream network for single-image SVBRDF acquisition
  • Jie Guo, Shuichang Lai, +4 authors Ling-Qi Yan
  • 2021
This paper addresses the task of estimating spatially-varying reflectance (i.e., SVBRDF) from a single, casually captured image. Central to our method is a highlight-aware (HA) convolution operation…
Neural-PIL: Neural Pre-Integrated Lighting for Reflectance Decomposition
TLDR
A novel reflectance decomposition network is proposed that can estimate shape, BRDF, and per-image illumination given a set of object images captured under varying illumination, enabling more accurate novel view synthesis and relighting compared to prior art.
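
"Pre-integrated lighting" is worth unpacking. Classically, the illumination integral is precomputed once per roughness level so that per-pixel shading reduces to a texture lookup; Neural-PIL replaces this precomputed table with a learned network. The toy sketch below uses a box blur as a crude stand-in for the true BRDF-weighted convolution, and every name in it is an assumption.

```python
# Toy sketch of classical pre-integrated ("split-sum"-style) lighting:
# blur the environment map once per roughness level, then shade each
# pixel with a single lookup along its reflection direction. The box
# filter is a crude stand-in for the true BRDF-weighted convolution.
import numpy as np
from scipy.ndimage import uniform_filter

def preintegrate(env, roughness_levels):
    """env: (H, W, 3) equirectangular map -> one blurred copy per level."""
    return [uniform_filter(env, size=(int(1 + 64 * r), int(1 + 64 * r), 1),
                           mode='wrap')
            for r in roughness_levels]

def dir_to_pixel(d, h, w):
    """Unit direction -> equirectangular pixel (row, col)."""
    theta = np.arccos(np.clip(d[2], -1.0, 1.0))      # polar angle
    phi = np.arctan2(d[1], d[0])                     # azimuth
    return (int(theta / np.pi * (h - 1)),
            int((phi / (2.0 * np.pi) + 0.5) * (w - 1)))

def shade_specular(reflect_dir, level_idx, levels):
    """Per-pixel specular shading is now a single indexed lookup."""
    h, w, _ = levels[0].shape
    row, col = dir_to_pixel(reflect_dir, h, w)
    return levels[level_idx][row, col]
```
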
Shape and Material Capture at Home
TLDR
This paper proposes a simple data capture technique in which the user goes around the object, illuminating it with a flashlight and capturing only a few images, and introduces a recursive neural architecture, termed RecNet, which can predict geometry and reflectance at 2^k × 2^k resolution given an input image at 2^k × 2^k resolution.
Learning Implicit Surface Light Fields
TLDR
This work proposes a novel implicit representation for capturing the visual appearance of an object in terms of its surface light field and shows that the proposed representation can be embedded into a variational auto-encoder for generating novel appearances that conform to the specified illumination conditions.
Sparse Needlets for Lighting Estimation with Spherical Transport Loss
TLDR
NeedleLight is presented, a new lighting estimation model that represents illumination with needlets and allows lighting estimation jointly in the frequency and spatial domains; a new metric is also proposed that is concise yet effective, directly evaluating the estimated illumination maps rather than rendered images.
GMLight: Lighting Estimation via Geometric Distribution Approximation
TLDR
Geometric Mover’s Light (GMLight) is presented, a lighting estimation framework that employs a regression network and a generative projector, achieving accurate illumination estimation and superior fidelity in relighting for 3D object insertion.
Deep Neural Models for Illumination Estimation and Relighting: A Survey
TLDR
This contribution aims to bring together in a coherent manner current advances at this intersection, presented in three categories: scene illumination estimation, relighting with reflectance-aware scene-specific representations, and relighting as image-to-image transformation.

References

Showing 1-10 of 69 references
Learning to reconstruct shape and spatially-varying reflectance from a single image
TLDR
This work demonstrates that it can recover non-Lambertian, spatially-varying BRDFs and complex geometry belonging to any arbitrary shape class, from a single RGB image captured under a combination of unknown environment illumination and flash lighting.
Flexible SVBRDF Capture with a Multi‐Image Deep Network
TLDR
This work presents a deep-learning method capable of estimating material appearance from a variable number of uncalibrated and unordered pictures captured with a handheld camera and flash, and shows how the method improves its prediction with the number of input pictures, reaching high-quality reconstructions with as few as 1 to 10 images.
CGIntrinsics: Better Intrinsic Image Decomposition through Physically-Based Rendering
TLDR
CGIntrinsics, a new, large-scale dataset of physically-based rendered images of scenes with full ground-truth decompositions, is presented, demonstrating the surprising effectiveness of carefully rendered synthetic data for the intrinsic images task.
Single-image SVBRDF capture with a rendering-aware deep network
TLDR
This work tackles lightweight appearance capture by training a deep neural network to automatically extract and make sense of visual cues from a single image, and designs a network that combines an encoder-decoder convolutional track for local feature extraction with a fully-connected track for global feature extraction and propagation.
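
The two-track design described above reads clearly in code. The PyTorch sketch below is a minimal stand-in under assumed layer sizes: a convolutional encoder-decoder carries local features while a fully-connected track distills a global feature that is broadcast back into the decoder; the fusion by addition is a guess, not the paper's exact design.

```python
# Hedged PyTorch sketch of a two-track network: a conv encoder-decoder
# for local cues plus a fully-connected global track whose output is
# broadcast back over the feature map. Dimensions are illustrative.
import torch
import torch.nn as nn

class TwoTrackNet(nn.Module):
    def __init__(self, feat=32, global_dim=128, out_ch=9):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, feat, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 4, stride=2, padding=1), nn.ReLU())
        self.global_track = nn.Sequential(
            nn.Linear(feat, global_dim), nn.ReLU(),
            nn.Linear(global_dim, feat))
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(feat, feat, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(feat, out_ch, 4, stride=2, padding=1))

    def forward(self, x):
        local = self.enc(x)                            # (B, F, H/4, W/4)
        g = self.global_track(local.mean(dim=(2, 3)))  # pooled global feature
        fused = local + g[:, :, None, None]            # broadcast and fuse
        return self.dec(fused)                         # e.g. stacked SVBRDF maps
```
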
Materials for Masses: SVBRDF Acquisition with a Single Mobile Phone Image
TLDR
This work proposes a material acquisition approach to recover the spatially-varying BRDF and normal map of a near-planar surface from a single image captured by a handheld mobile phone camera, thereby avoiding shadows while simultaneously capturing high-frequency specular highlights.
LIME: Live Intrinsic Material Estimation
TLDR
This work presents the first end-to-end approach for real-time material estimation for general object shapes with uniform material that only requires a single color image as input, and proposes a novel, highly efficient perceptual rendering loss that mimics real-world image formation and obtains intermediate results even during run time.
Deep image-based relighting from optimal sparse samples
TLDR
This work presents an image-based relighting method that can synthesize scene appearance under novel, distant illumination from the visible hemisphere, from only five images captured under pre-defined directional lights, and demonstrates, on both synthetic and real scenes, that this method is able to reproduce complex, high-frequency lighting effects like specularities and cast shadows.
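
For context on what the learned model improves over, the classical linear baseline for relighting under distant illumination fits in a few lines: weight each captured basis image by how much target-environment energy falls near its light direction. Everything below, including the crude nearest-direction quadrature, is an illustrative assumption.

```python
# Sketch of the classical linear image-based relighting baseline that
# learned methods improve on: a novel distant illumination is rendered
# as a weighted sum of images captured under known directional lights.
import numpy as np

def relight_linear(images, light_dirs, env_radiance, env_dirs):
    """images: (K, H, W, 3); light_dirs: (K, 3) unit vectors;
    env_radiance: (M, 3) target radiance along env_dirs: (M, 3)."""
    weights = np.zeros((len(images), 3))
    for radiance, d in zip(env_radiance, env_dirs):
        k = int(np.argmax(light_dirs @ d))  # nearest captured light
        weights[k] += radiance / len(env_dirs)
    # Per-channel weighted sum over the K basis images.
    return np.einsum('khwc,kc->hwc', images, weights)
```
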
Two-shot SVBRDF capture for stationary materials
TLDR
This paper shows that the texturedness assumption allows reflectance capture using only two images of a planar sample, taken with and without a headlight flash, and describes the material as spatially-varying, diffuse and specular, anisotropic reflectance over a detailed normal map.
Learning Non-Lambertian Object Intrinsics Across ShapeNet Categories
TLDR
This work focuses on the non-Lambertian object-level intrinsic problem of recovering diffuse albedo, shading, and specular highlights from a single image of an object, and shows that feature learning at the encoder stage is more crucial for developing a universal representation across categories.
Learning Intrinsic Image Decomposition from Watching the World
  • Z. Li, Noah Snavely
  • Computer Science
  • 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
  • 2018
TLDR
This paper explores a different approach to learning intrinsic images: observing image sequences over time depicting the same scene under changing illumination, and learning single-view decompositions that are consistent with these changes.