UNISURF: Unifying Neural Implicit Surfaces and Radiance Fields for Multi-View Reconstruction

Michael Oechsle, Songyou Peng, Andreas Geiger
2021 IEEE/CVF International Conference on Computer Vision (ICCV)
Neural implicit 3D representations have emerged as a powerful paradigm for reconstructing surfaces from multi-view images and synthesizing novel views. Unfortunately, existing methods such as DVR or IDR require accurate per-pixel object masks as supervision. At the same time, neural radiance fields have revolutionized novel view synthesis. However, NeRF’s estimated volume density does not admit accurate surface reconstruction. Our key insight is that implicit surface models and radiance fields… 
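The unification described in the abstract can be sketched as ordinary alpha compositing in which the per-sample alpha along a ray is the predicted occupancy itself, so the same formula covers NeRF-style volume rendering (soft occupancies) and surface rendering (occupancies near 0/1). A minimal NumPy sketch under that assumption, not the authors' implementation:

```python
import numpy as np

def composite_color(occupancy, colors):
    """Alpha-composite the colors of N samples along one ray.

    Sketch of a UNISURF-style unified renderer: the per-sample alpha is
    the occupancy o(x_i) in [0, 1] directly, rather than being derived
    from a volume density.
    """
    occupancy = np.asarray(occupancy, dtype=float)   # shape (N,)
    colors = np.asarray(colors, dtype=float)         # shape (N, 3)
    # Transmittance: probability the ray reaches sample i unoccluded.
    transmittance = np.cumprod(np.concatenate(([1.0], 1.0 - occupancy[:-1])))
    weights = occupancy * transmittance              # shape (N,)
    return weights @ colors                          # rendered RGB

# With hard 0/1 occupancies, compositing reduces to surface rendering:
# the ray returns the color of the first occupied sample.
occ = [0.0, 0.0, 1.0, 1.0]
cols = [[0, 0, 0], [0, 0, 0], [1.0, 0.5, 0.25], [0, 0, 1]]
print(composite_color(occ, cols))
```

With soft occupancies the same function blends colors along the ray, which is what makes mask-free optimization tractable before the representation sharpens into a surface.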


NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction

Experiments show that NeuS outperforms the state of the art in high-quality surface reconstruction, especially for objects and scenes with complex structures and self-occlusion.
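The SDF-based volume rendering that NeuS introduces can be sketched as a conversion from consecutive SDF samples to discrete opacities; a simplified NumPy illustration (the sharpness `s` of the logistic CDF is learnable in the paper but fixed here, and this is not the authors' code):

```python
import numpy as np

def neus_alpha(sdf_samples, s=64.0):
    """Discrete opacity alpha_i from consecutive SDF values along a ray,
    following the NeuS formula
        alpha_i = max((Phi_s(f_i) - Phi_s(f_{i+1})) / Phi_s(f_i), 0),
    where Phi_s is the logistic sigmoid with scale s.
    Illustrative sketch only."""
    f = np.asarray(sdf_samples, dtype=float)
    phi = 1.0 / (1.0 + np.exp(-s * f))               # logistic CDF Phi_s(f)
    alpha = (phi[:-1] - phi[1:]) / np.clip(phi[:-1], 1e-12, None)
    return np.clip(alpha, 0.0, 1.0)

# Alpha stays near zero while the SDF remains positive (outside the
# surface) and jumps toward one at the zero crossing between samples.
alpha = neus_alpha([0.3, 0.2, 0.1, -0.1])
print(alpha)  # small, small, then close to 1 at the sign change
```

The opacity peak at the zero crossing is what ties the rendering weights to the SDF's zero-level set, which is the source of the accurate surfaces the summary describes.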

MonoSDF: Exploring Monocular Geometric Cues for Neural Implicit Surface Reconstruction

It is demonstrated that depth and normal cues, predicted by general-purpose monocular estimators, significantly improve reconstruction quality and optimization time, and that these geometric monocular priors improve performance for both small-scale single-object and large-scale multi-object scenes, independent of the choice of representation.

Critical Regularizations for Neural Surface Reconstruction in the Wild

RegSDF is presented, showing that proper point-cloud supervision and geometry regularization are sufficient to produce high-quality, robust reconstruction results, and that the method can reconstruct surfaces with fine details even for open scenes with complex topologies and unstructured camera trajectories.

NeRS: Neural Reflectance Surfaces for Sparse-view 3D Reconstruction in the Wild

It is demonstrated that surface-based neural reconstructions enable learning from in-the-wild data and outperform volumetric neural-rendering-based reconstructions; the authors hope that NeRS serves as a first step toward building scalable, high-quality libraries of real-world shape, materials, and illumination.

RegNeRF: Regularizing Neural Radiance Fields for View Synthesis from Sparse Inputs

This work observes that most artifacts in sparse-input scenarios are caused by errors in the estimated scene geometry and by divergent behavior at the start of training, and addresses this by regularizing the geometry and appearance of patches rendered from unobserved viewpoints and by annealing the ray-sampling space during training.

Geo-Neus: Geometry-Consistent Neural Implicit Surfaces Learning for Multi-view Reconstruction

This work theoretically analyzes the gap between the volume rendering integral and point-based signed distance function (SDF) modeling; it directly locates the zero-level set of the SDF network and explicitly performs multi-view geometry optimization by leveraging sparse geometry from structure from motion (SfM) and photometric consistency from multi-view stereo.

Multi-View Mesh Reconstruction with Neural Deferred Shading

This work proposes an analysis-by-synthesis method for fast multi-view 3D reconstruction of opaque objects with arbitrary materials and illumination, and shows that the learned shader provides an interpretable representation of appearance, enabling applications such as 3D material editing.

DoubleField: Bridging the Neural Surface and Radiance Fields for High-fidelity Human Reconstruction and Rendering

The efficacy of DoubleField is validated by quantitative evaluations on several datasets and by qualitative results in a real-world sparse multi-view system, demonstrating its superior capability for high-quality human model reconstruction and photo-realistic free-viewpoint human rendering.

Improving neural implicit surfaces geometry with patch warping

This paper proposes to add a direct photo-consistency term across the different views to the standard neural rendering optimization; the resulting method, NeuralWarp, outperforms state-of-the-art unsupervised implicit surface reconstructions by over 20% on both evaluated datasets.

PS-NeRF: Neural Inverse Rendering for Multi-view Photometric Stereo

This paper presents a neural inverse rendering method for MVPS based on implicit representation that achieves far more accurate shape reconstruction than existing MVPS and neural rendering methods.

Neural Lumigraph Rendering

This work adopts high-capacity neural scene representations with periodic activations for jointly optimizing an implicit surface and a radiance field of a scene supervised exclusively with posed 2D images, enabling real-time rendering rates, while achieving unprecedented image quality compared to other surface methods.

GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis

This paper proposes a generative model for radiance fields, which have recently proven successful for novel view synthesis of a single scene, and introduces a multi-scale patch-based discriminator that enables synthesis of high-resolution images while training the model from unposed 2D images alone.

NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis

This work describes how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrates results that outperform prior work on neural rendering and view synthesis.

Shape Reconstruction Using Volume Sweeping and Learned Photoconsistency

The ability of learning-based strategies to effectively benefit the reconstruction of arbitrary shapes with improved precision and robustness is investigated, showing that a CNN trained on a standard static dataset can help recover surface details in dynamic scenes that are not perceived by traditional 2D-feature-based methods.

Multiview Neural Surface Reconstruction by Disentangling Geometry and Appearance

This work introduces a neural network architecture that simultaneously learns the unknown geometry, camera parameters, and a neural renderer that approximates the light reflected from the surface towards the camera.

Differentiable Volumetric Rendering: Learning Implicit 3D Representations Without 3D Supervision

This work proposes a differentiable rendering formulation for implicit shape and texture representations, showing that depth gradients can be derived analytically using the concept of implicit differentiation, and finds that this method can be used for multi-view 3D reconstruction, directly resulting in watertight meshes.
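The implicit-differentiation step the summary mentions can be written out in one line. Assuming a surface point \(\hat{x} = r_o + \hat{d}\, w\) on a ray with direction \(w\), defined by the level-set condition \(f_\theta(\hat{x}) = \tau\) (notation loosely following the paper), differentiating the condition with respect to the network parameters \(\theta\) yields the analytic depth gradient:

```latex
% Level-set condition at the surface point \hat{x} = r_o + \hat{d}\, w:
%   f_\theta(\hat{x}) = \tau
% Chain rule in \theta (the surface depth \hat{d} depends on \theta):
\frac{\partial f_\theta}{\partial \theta}
  + \nabla_x f_\theta \cdot w \,\frac{\partial \hat{d}}{\partial \theta} = 0
\quad\Longrightarrow\quad
\frac{\partial \hat{d}}{\partial \theta}
  = -\left(\nabla_x f_\theta \cdot w\right)^{-1}
    \frac{\partial f_\theta}{\partial \theta}
```

This is why no stored 3D supervision or explicit mesh extraction is needed during training: the depth gradient is available in closed form at the detected surface point.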

NeRV: Neural Reflectance and Visibility Fields for Relighting and View Synthesis

We present a method that takes as input a set of images of a scene illuminated by unconstrained known lighting and produces as output a 3D representation that can be rendered from novel viewpoints.

DISN: Deep Implicit Surface Network for High-quality Single-view 3D Reconstruction

DISN, a Deep Implicit Surface Network that generates a high-quality, detail-rich 3D mesh from a 2D image by predicting the underlying signed distance field from combined global and local features, achieves state-of-the-art single-view reconstruction performance.

RayNet: Learning Volumetric 3D Reconstruction with Ray Potentials

This paper proposes RayNet, which combines a CNN that learns view-invariant feature representations with an MRF that explicitly encodes the physics of perspective projection and occlusion and trains RayNet end-to-end using empirical risk minimization.

NeRD: Neural Reflectance Decomposition from Image Collections

A neural reflectance decomposition (NeRD) technique uses physically based rendering to decompose the scene into spatially varying BRDF material properties, enabling fast real-time rendering under novel illumination.