PhySG: Inverse Rendering with Spherical Gaussians for Physics-based Material Editing and Relighting

@article{Zhang2021PhySGIR,
  title={PhySG: Inverse Rendering with Spherical Gaussians for Physics-based Material Editing and Relighting},
  author={Kai Zhang and Fujun Luan and Qianqian Wang and Kavita Bala and Noah Snavely},
  journal={2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2021},
  pages={5449-5458}
}
  • Kai Zhang, Fujun Luan, Qianqian Wang, Kavita Bala, Noah Snavely
  • Published 1 April 2021
  • Computer Science
  • 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
We present PhySG, an end-to-end inverse rendering pipeline that includes a fully differentiable renderer, and can reconstruct geometry, materials, and illumination from scratch from a set of images. Our framework represents specular BRDFs and environmental illumination using mixtures of spherical Gaussians, and represents geometry as a signed distance function parameterized as a Multi-Layer Perceptron. The use of spherical Gaussians allows us to efficiently solve for approximate light transport… 
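For concreteness, the representations named in the abstract can be sketched in common spherical Gaussian (SG) notation; the symbols below (lobe axis ξ, sharpness λ, amplitude μ, SDF network S_θ) follow standard SG conventions and are illustrative rather than the paper's exact parameterization:

  % One SG lobe evaluated in direction \nu (a unit vector):
  G(\nu;\, \xi, \lambda, \mu) = \mu\, e^{\lambda(\nu \cdot \xi - 1)}
  % Environment illumination as a mixture of K SG lobes:
  L(\nu) \approx \sum_{k=1}^{K} G(\nu;\, \xi_k, \lambda_k, \mu_k)
  % Geometry as the zero level set of an MLP-parameterized signed distance function:
  \mathcal{S} = \{\, x \in \mathbb{R}^3 : S_\theta(x) = 0 \,\}

Products of SGs are again SGs and their spherical integrals have closed forms, so the rendering integral can be approximated without per-ray Monte Carlo sampling; this is what makes the light-transport approximation efficient.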

Citations

IRON: Inverse Rendering by Optimizing Neural SDFs and Materials from Photometric Images

This work proposes a neural inverse rendering pipeline that operates on photometric images and outputs high-quality 3D content, in the form of triangle meshes and material textures readily deployable in existing graphics pipelines, achieving significantly better inverse rendering quality than prior works.

Shape, Light & Material Decomposition from Images using Monte Carlo Rendering and Denoising

An efficient method to jointly reconstruct geometry, materials, and lighting that substantially improves material-light separation compared to previous work, arguing that denoising can become an integral part of high-quality inverse rendering pipelines.

Neural Ray-Tracing: Learning Surfaces and Reflectance for Relighting and View Synthesis

The proposed approach for scene editing, relighting, and reflectance estimation is validated on synthetic and captured views from existing datasets; it outperforms existing neural rendering methods for relighting under known lighting conditions and produces realistic reconstructions of relit and edited scenes.

NeILF: Neural Incident Light Field for Physically-based Material Estimation

We present a differentiable rendering framework for material and lighting estimation from multi-view images and a reconstructed geometry. In the framework, we represent scene lighting as the Neural Incident Light Field (NeILF).
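A sketch of the idea in standard rendering-equation notation (the MLP parameterization and symbol names below are assumptions, not necessarily the paper's): incident light at a surface point x from direction ω_i is predicted directly by a learned 5D field, and outgoing radiance integrates it against the BRDF f:

  % Learned 5D incident light field (assumed MLP parameterization):
  L_i(x, \omega_i) = \mathrm{MLP}_\phi(x, \omega_i)
  % Outgoing radiance over the upper hemisphere \Omega with surface normal n:
  L_o(x, \omega_o) = \int_{\Omega} f(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (n \cdot \omega_i)\, d\omega_i

Because the field is queried directly at surface points, occlusion and interreflection are absorbed into the learned L_i rather than ray-traced explicitly.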

Modeling Indirect Illumination for Inverse Rendering

This paper proposes a novel approach to efficiently recovering spatially-varying indirect illumination, which can be conveniently derived from the neural radiance field learned from input images instead of being estimated jointly with direct illumination and materials.

Advances in Neural Rendering

This state-of-the-art report on advances in neural rendering focuses on methods that combine classical rendering principles with learned 3D scene representations, often now referred to as neural scene representations.

Multiview Textured Mesh Recovery by Differentiable Rendering

A differentiable Poisson solver is employed to represent the object's shape, producing topology-agnostic, watertight surfaces, and a physically based inverse rendering scheme is introduced to jointly estimate the environment lighting and the object's reflectance, enabling high-resolution images to be rendered in real time.

PS-NeRF: Neural Inverse Rendering for Multi-view Photometric Stereo

This paper presents a neural inverse rendering method for multi-view photometric stereo (MVPS) based on an implicit representation, which achieves far more accurate shape reconstruction than existing MVPS and neural rendering methods.

NeROIC: Neural Rendering of Objects from Online Image Collections

This work presents a novel method to acquire object representations from online image collections, capturing high-quality geometry and material properties of arbitrary objects from photographs with varying cameras, illumination, and backgrounds, and introduces a robust normal estimation technique which eliminates the effect of geometric noise.

Efficient Textured Mesh Recovery from Multiple Views with Differentiable Rendering

An efficient coarse-to-fine approach to recover a textured mesh from multi-view images that takes advantage of a differentiable Poisson solver to represent the shape, producing topology-agnostic and watertight surfaces.

References

Showing 1-10 of 54 references

Deep Reflectance Volumes: Relightable Reconstructions from Multi-View Photometric Images

Deep Reflectance Volumes presents a novel physically-based differentiable volume ray marching framework that renders scene volumes under arbitrary viewpoints and lighting, producing photorealistic images significantly better than those of state-of-the-art mesh-based methods.

Inverse Rendering for Complex Indoor Scenes: Shape, Spatially-Varying Lighting and SVBRDF From a Single Image

A deep inverse rendering framework for indoor scenes, which combines novel methods to map complex materials to existing indoor scene datasets and a new physically-based GPU renderer to create a large-scale, photorealistic indoor dataset.

Multiview Neural Surface Reconstruction with Implicit Lighting and Material

This work introduces a neural network architecture that simultaneously learns the unknown geometry, camera parameters, and a neural renderer that approximates the light reflected from the surface towards the camera.

Accurate Translucent Material Rendering under Spherical Gaussian Lights

In this paper we present a new algorithm for accurate rendering of translucent materials under Spherical Gaussian (SG) lights. Our algorithm builds upon the recently introduced quantized-diffusion BSSRDF model.

Intrinsic3D: High-Quality 3D Reconstruction by Joint Appearance and Geometry Optimization with Spatially-Varying Lighting

A joint surface reconstruction approach that is based on Shape-from-Shading (SfS) techniques and utilizes the estimation of spatially-varying spherical harmonics (SVSH) from subvolumes of the reconstructed scene.

Neural Reflectance Fields for Appearance Acquisition

It is demonstrated that neural reflectance fields can be estimated from images captured with a simple collocated camera-light setup, that they accurately model the appearance of real-world scenes with complex geometry and reflectance, and that they enable a complete pipeline from high-quality, practical appearance acquisition to 3D scene composition and rendering.

An efficient representation for irradiance environment maps

A simple and efficient procedural rendering algorithm amenable to hardware implementation, a prefiltering method up to three orders of magnitude faster than previous techniques, and new representations for lighting design and image-based rendering are considered.
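The speedup is easiest to see in the well-known second-order spherical-harmonic form of this representation (standard notation, not quoted from the paper): irradiance as a function of the surface normal n is well approximated by just nine lighting coefficients L_{lm}:

  E(n) \approx \sum_{l=0}^{2} \sum_{m=-l}^{l} \hat{A}_l\, L_{lm}\, Y_{lm}(n)
  % which collapses to a quadratic form in homogeneous normal coordinates \tilde{n} = (n_x, n_y, n_z, 1)^T:
  E(n) \approx \tilde{n}^T M\, \tilde{n}

Prefiltering thus reduces to computing nine coefficients, and per-pixel shading to a single 4x4 quadratic form, which is what makes the procedure amenable to hardware implementation.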

Shading-based refinement on volumetric signed distance functions

A novel method to obtain fine-scale detail in 3D reconstructions generated with low-budget RGB-D cameras or other commodity scanning devices; it formulates the inverse shading problem on the volumetric distance field and presents a novel objective function that jointly optimizes for fine-scale surface geometry and spatially-varying surface reflectance.

Relighting objects from image collections

Using an all-frequency relighting framework based on wavelets, an approach for recovering the reflectance of a static scene with known geometry from a collection of images taken under distant, unknown illumination is presented.

Differentiable Monte Carlo ray tracing through edge sampling

This work introduces a general-purpose differentiable ray tracer, which is the first comprehensive solution that is able to compute derivatives of scalar functions over a rendered image with respect to arbitrary scene parameters such as camera pose, scene geometry, materials, and lighting parameters.
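The difficulty edge sampling addresses can be stated in one line: a pixel value is an integral whose integrand has visibility discontinuities that move with the scene parameters θ, so differentiating under the integral sign alone is incorrect. A generic sketch of the identity involved (not the paper's exact derivation):

  \frac{\partial}{\partial \theta} \int_{\Omega} f(x; \theta)\, dx
    = \int_{\Omega} \frac{\partial f(x; \theta)}{\partial \theta}\, dx
    + \int_{\partial\Omega(\theta)} f(x; \theta)\, \big(v(x) \cdot n(x)\big)\, d\sigma(x)

Here ∂Ω(θ) collects the edges whose positions depend on θ, v = ∂x/∂θ is the edge velocity, and n is the edge normal; both terms are estimated by Monte Carlo sampling, with the second sampled directly on the edges.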
...