Corpus ID: 244527221

Extracting Triangular 3D Models, Materials, and Lighting From Images

Jacob Munkberg, Jon Hasselgren, Tianchang Shen, Jun Gao, Wenzheng Chen, Alex Evans, Thomas Müller, Sanja Fidler
We present an efficient method for joint optimization of topology, materials, and lighting from multi-view image observations. Unlike recent multi-view reconstruction approaches, which typically produce entangled 3D representations encoded in neural networks, we output triangle meshes with spatially-varying materials and environment lighting that can be deployed in any traditional graphics engine unmodified. We leverage recent work in differentiable rendering, coordinate-based networks to…
Unbiased Inverse Volume Rendering with Differential Trackers
Differential ratio tracking combines ratio tracking and reservoir sampling to estimate gradients by sampling distances proportional to the unweighted transmittance rather than the usual extinction-weighted transmittance, which yields low-variance gradients and runs in linear time.
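The classic ratio-tracking estimator that differential trackers build on can be sketched as follows. This is a minimal illustration, not the paper's differential method: the function names and the homogeneous test medium are assumptions for the example.

```python
import math
import random

def ratio_tracking_transmittance(sigma, d, sigma_maj, rng):
    """Estimate transmittance T = exp(-integral of sigma over [0, d])
    with ratio tracking.

    Tentative collisions are drawn from the constant majorant sigma_maj;
    at each one the running estimate is multiplied by the null-collision
    ratio (1 - sigma(t) / sigma_maj), which keeps the estimator unbiased.
    """
    t, T = 0.0, 1.0
    while True:
        # Exponential free-flight step under the majorant.
        t -= math.log(1.0 - rng.random()) / sigma_maj
        if t >= d:
            return T
        T *= 1.0 - sigma(t) / sigma_maj

rng = random.Random(0)
# Homogeneous medium: sigma = 0.5 over distance 1, so T = exp(-0.5).
est = sum(ratio_tracking_transmittance(lambda t: 0.5, 1.0, 1.0, rng)
          for _ in range(50_000)) / 50_000
```

Averaging many runs recovers exp(-0.5) ≈ 0.607; the variance depends on how tightly the majorant bounds the true extinction.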
Differentiable Signed Distance Function Rendering
These experiments demonstrate the reconstruction of (synthetic) objects using only a per-pixel RGB loss, without complex regularization or priors, outperforming prior work.
NeILF: Neural Incident Light Field for Physically-based Material Estimation
We present a differentiable rendering framework for material and lighting estimation from multi-view images and a reconstructed geometry. In the framework, we represent scene lighting as a neural incident light field.
Microfacet Models for Refraction through Rough Surfaces
This paper reviews microfacet theory and demonstrates how it can be extended to simulate transmission through rough surfaces such as etched glass, and describes efficient schemes for sampling the microfacet models and the corresponding probability density functions.
NeRD: Neural Reflectance Decomposition from Image Collections
A neural reflectance decomposition (NeRD) technique that uses physically-based rendering to decompose the scene into spatially varying BRDF material properties enabling fast real-time rendering with novel illuminations.
NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination
Qualitative and quantitative experiments show that NeRFactor outperforms classic and deep learning-based state of the art across various tasks.
NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction
Experiments show that NeuS outperforms the state of the art in high-quality surface reconstruction, especially for objects and scenes with complex structures and self-occlusion.
PhySG: Inverse Rendering with Spherical Gaussians for Physics-based Material Editing and Relighting
PhySG is an end-to-end inverse rendering pipeline with a fully differentiable renderer that reconstructs geometry, materials, and illumination from scratch from a set of images; experiments on both synthetic and real data demonstrate that it enables not only rendering of novel viewpoints but also physics-based appearance editing of materials and illumination.
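The spherical Gaussian lobes that PhySG-style pipelines use to represent lighting are simple to evaluate; a minimal sketch, with illustrative parameter names, is:

```python
import math

def spherical_gaussian(v, axis, sharpness, amplitude):
    """Evaluate one spherical Gaussian lobe G(v) = a * exp(s * (v.axis - 1)).

    Environment lighting can be approximated by a small mixture of such
    lobes because products and integrals of spherical Gaussians have
    closed-form expressions, which keeps the renderer differentiable
    and cheap to evaluate.
    """
    dot = sum(vi * ai for vi, ai in zip(v, axis))
    return amplitude * math.exp(sharpness * (dot - 1.0))

# The lobe peaks at its axis and falls off with angular distance.
peak = spherical_gaussian((0.0, 0.0, 1.0), (0.0, 0.0, 1.0), 10.0, 2.5)
side = spherical_gaussian((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), 10.0, 2.5)
```

Larger sharpness values give narrower lobes, trading smooth ambient light for concentrated highlights.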
Differentiable Volumetric Rendering: Learning Implicit 3D Representations Without 3D Supervision
This work proposes a differentiable rendering formulation for implicit shape and texture representations, showing that depth gradients can be derived analytically using the concept of implicit differentiation, and finds that this method can be used for multi-view 3D reconstruction, directly resulting in watertight meshes.
Modular primitives for high-performance differentiable rendering
A modular differentiable renderer design that yields performance superior to previous methods by leveraging existing, highly optimized hardware graphics pipelines, and allows custom, high-performance graphics pipelines to be built directly within automatic differentiation frameworks such as PyTorch or TensorFlow.
NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis
This work describes how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrates results that outperform prior work on neural rendering and view synthesis.
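The volume-rendering quadrature at the core of NeRF reduces to a short loop over samples along a ray; this sketch shows the per-sample compositing weights (function and variable names are illustrative).

```python
import math

def volume_render_weights(sigmas, deltas):
    """Compute per-sample compositing weights from NeRF's quadrature rule.

    alpha_i = 1 - exp(-sigma_i * delta_i) and weight_i = T_i * alpha_i,
    where T_i is the transmittance accumulated before sample i.  The
    weights plus the leftover transmittance always sum to one, so the
    rendered color is a convex combination of the sample colors and the
    background.
    """
    weights, T = [], 1.0
    for sigma, delta in zip(sigmas, deltas):
        alpha = 1.0 - math.exp(-sigma * delta)
        weights.append(T * alpha)
        T *= 1.0 - alpha
    return weights, T

# Three samples with increasing density; later samples are occluded
# by the transmittance accumulated in front of them.
weights, residual = volume_render_weights([0.1, 2.0, 5.0], [0.5, 0.5, 0.5])
```

Because every operation is differentiable, gradients of a pixel loss flow back to the densities, which is what makes optimizing the radiance field possible.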
Deep Marching Cubes: Learning Explicit Surface Representations
This paper demonstrates that the marching cubes algorithm is not differentiable and proposes an alternative differentiable formulation which is inserted as a final layer into a 3D convolutional neural network, and proposes a set of loss functions which allow for training the model with sparse point supervision.
Adam: A Method for Stochastic Optimization
This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
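The Adam update itself fits in a few lines; this sketch minimizes a toy quadratic with the standard defaults for the moment decay rates (the learning rate and step count here are arbitrary choices for the example).

```python
import math

def adam_minimize(grad, x0, lr=0.1, beta1=0.9, beta2=0.999,
                  eps=1e-8, steps=500):
    """Minimize a scalar function given its gradient, using Adam.

    Maintains exponential moving averages of the gradient (m) and its
    square (v), with bias correction for their zero initialization,
    and scales each step by m_hat / sqrt(v_hat).
    """
    x, m, v = x0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)          # bias-corrected first moment
        v_hat = v / (1 - beta2 ** t)          # bias-corrected second moment
        x -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
x_opt = adam_minimize(lambda x: 2.0 * (x - 3.0), 0.0)
```

The per-coordinate second-moment scaling is what makes Adam robust to poorly scaled gradients, which is why it is the default optimizer in most of the reconstruction pipelines cited above.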