UNISURF: Unifying Neural Implicit Surfaces and Radiance Fields for Multi-View Reconstruction

@inproceedings{Oechsle2021UNISURFUN,
  title={UNISURF: Unifying Neural Implicit Surfaces and Radiance Fields for Multi-View Reconstruction},
  author={Michael Oechsle and Songyou Peng and Andreas Geiger},
  booktitle={2021 IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2021},
  pages={5569--5579}
}
Neural implicit 3D representations have emerged as a powerful paradigm for reconstructing surfaces from multi-view images and synthesizing novel views. Unfortunately, existing methods such as DVR or IDR require accurate per-pixel object masks as supervision. At the same time, neural radiance fields have revolutionized novel view synthesis. However, NeRF’s estimated volume density does not admit accurate surface reconstruction. Our key insight is that implicit surface models and radiance fields… 
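Concretely, the unification uses a single occupancy network o(x) ∈ [0, 1] both as the surface model (the surface is recovered by root-finding for o(x) = 0.5 along each ray) and as the per-sample alpha value in NeRF-style volume rendering, giving the pixel color C = Σ_i o(x_i) · Π_{j<i} (1 − o(x_j)) · c(x_i, d). A minimal sketch of such a renderer follows; the network names and signatures are illustrative placeholders, not the released code:

```python
import torch

def render_ray_unisurf(occupancy_net, color_net, ray_o, ray_d, depths):
    """Volume-render one ray using occupancy values as alphas.

    Sketch of the unified formulation: each sample's alpha is the predicted
    occupancy o(x) itself, so the same network supports volume rendering
    (as here) and surface rendering via root-finding for o(x) = 0.5.
    `occupancy_net` and `color_net` are illustrative placeholders.
    """
    pts = ray_o + depths[:, None] * ray_d        # x_i = o + t_i * d, (N, 3)
    occ = occupancy_net(pts).squeeze(-1)         # o(x_i) in [0, 1], (N,)
    rgb = color_net(pts, ray_d.expand_as(pts))   # c(x_i, d), (N, 3)

    # Rendering weights: w_i = o(x_i) * prod_{j<i} (1 - o(x_j)).
    trans = torch.cumprod(torch.cat([occ.new_ones(1), 1.0 - occ]), dim=0)[:-1]
    weights = occ * trans

    return (weights[:, None] * rgb).sum(dim=0)   # rendered pixel color, (3,)
```

During training, UNISURF progressively shrinks the sampling interval around the detected surface point, so the renderer transitions from volumetric behavior toward pure surface rendering.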

Citations

NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction

Experiments show that NeuS outperforms the state of the art in high-quality surface reconstruction, especially for objects and scenes with complex structures and self-occlusion.

MonoSDF: Exploring Monocular Geometric Cues for Neural Implicit Surface Reconstruction

It is demonstrated that depth and normal cues predicted by general-purpose monocular estimators improve reconstruction quality and optimization time, and that these geometric monocular priors improve performance for both small-scale single-object and large-scale multi-object scenes, independent of the choice of representation.

RegNeRF: Regularizing Neural Radiance Fields for View Synthesis from Sparse Inputs

This work observes that most artifacts in sparse-input scenarios are caused by errors in the estimated scene geometry and by divergent behavior at the start of training, and addresses this by regularizing the geometry and appearance of patches rendered from unobserved viewpoints and by annealing the ray-sampling space during training.

Learning Neural Radiance Fields from Multi-View Geometry

This work proposes to leverage pixelwise depths and normals from a classical 3D reconstruction pipeline as geometric priors to guide NeRF optimization, and demonstrates the effectiveness of this approach in obtaining clean 3D meshes from images while maintaining competitive performance in novel view synthesis.

NeRS: Neural Reflectance Surfaces for Sparse-view 3D Reconstruction in the Wild

It is demonstrated that surface-based neural reconstructions enable learning from in-the-wild data and outperform volumetric neural-rendering-based reconstructions; the authors hope that NeRS serves as a first step toward building scalable, high-quality libraries of real-world shape, materials, and illumination.

Geo-Neus: Geometry-Consistent Neural Implicit Surfaces Learning for Multi-view Reconstruction

This work theoretically analyzes the gap between the volume-rendering integral and point-based signed distance function (SDF) modeling; it directly locates the zero-level set of the SDF network and explicitly performs multi-view geometry optimization by leveraging sparse geometry from structure from motion (SFM) and photometric consistency in multi-view stereo.
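Since a signed distance function should vanish exactly on the surface, the SFM points can supervise the network's zero-level set directly. A minimal sketch of such a loss, assuming an illustrative `sdf_net` and plain averaging (the paper pairs this with its photometric-consistency objective):

```python
import torch

def sfm_sdf_loss(sdf_net, sfm_points):
    """Penalize nonzero signed distances at sparse SFM points.

    sdf_net: illustrative network mapping (N, 3) points to (N, 1) signed
    distances. sfm_points: (N, 3) points triangulated by structure from
    motion, assumed to lie on the true surface (SDF = 0 there).
    """
    return sdf_net(sfm_points).abs().mean()
```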

Multi-View Mesh Reconstruction with Neural Deferred Shading

This work proposes an analysis-by-synthesis method for fast multi-view 3D reconstruction of opaque objects with arbitrary materials and illumination, and finds that the learned shader provides an interpretable representation of appearance, enabling applications such as 3D material editing.

Critical Regularizations for Neural Surface Reconstruction in the Wild

RegSDF is presented, which shows that proper point cloud supervision and geometry regularization are sufficient to produce high-quality and robust reconstruction results, recovering surfaces with fine details even for open scenes with complex topologies and unstructured camera trajectories.

PS-NeRF: Neural Inverse Rendering for Multi-view Photometric Stereo

This paper presents a neural inverse rendering method for multi-view photometric stereo (MVPS) based on an implicit representation, which achieves far more accurate shape reconstruction than existing MVPS and neural rendering methods.

A Hybrid Mesh-neural Representation for 3D Transparent Object Reconstruction

We propose a novel method to reconstruct the 3D shapes of transparent objects using hand-held captured images under natural light conditions. It combines the advantages of explicit mesh and
...

References

SHOWING 1-10 OF 67 REFERENCES

Neural Lumigraph Rendering

This work adopts high-capacity neural scene representations with periodic activations to jointly optimize an implicit surface and a radiance field supervised exclusively with posed 2D images, enabling real-time rendering rates while achieving image quality beyond that of other surface-based methods.

GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis

This paper proposes a generative model for radiance fields, which have recently proven successful for novel view synthesis of a single scene, and introduces a multi-scale patch-based discriminator that enables synthesis of high-resolution images while training the model from unposed 2D images alone.

NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis

This work describes how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrates results that outperform prior work on neural rendering and view synthesis.

Shape Reconstruction Using Volume Sweeping and Learned Photoconsistency

This work investigates the ability of learning-based strategies to benefit the reconstruction of arbitrary shapes with improved precision and robustness, showing that a CNN trained on a standard static dataset can help recover surface details in dynamic scenes that are not perceived by traditional 2D-feature-based methods.

Multiview Neural Surface Reconstruction by Disentangling Geometry and Appearance

This work introduces a neural network architecture that simultaneously learns the unknown geometry, camera parameters, and a neural renderer that approximates the light reflected from the surface towards the camera.

Differentiable Volumetric Rendering: Learning Implicit 3D Representations Without 3D Supervision

This work proposes a differentiable rendering formulation for implicit shape and texture representations, showing that depth gradients can be derived analytically using the concept of implicit differentiation, and finds that this method can be used for multi-view 3D reconstruction, directly resulting in watertight meshes.
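For reference, the implicit-differentiation step can be sketched in one line: with a ray r(d) = r_0 + d·w and the surface depth d̂ defined by the level-set condition f_θ(r(d̂)) = τ, differentiating both sides with respect to the parameters θ gives (notation adapted from this setup):

```latex
\frac{\partial \hat{d}}{\partial \theta}
  = -\left( \nabla_{x} f_\theta(\hat{x}) \cdot w \right)^{-1}
    \frac{\partial f_\theta(\hat{x})}{\partial \theta},
\qquad \hat{x} = r_0 + \hat{d}\, w,
```

so the depth gradient requires only quantities at the surface point itself, with no need to store intermediate samples along the ray.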

NeRV: Neural Reflectance and Visibility Fields for Relighting and View Synthesis

We present a method that takes as input a set of images of a scene illuminated by unconstrained known lighting, and produces as output a 3D representation that can be rendered from novel viewpoints.

DISN: Deep Implicit Surface Network for High-quality Single-view 3D Reconstruction

DISN, a Deep Implicit Surface Network that generates a high-quality, detail-rich 3D mesh from a 2D image by predicting the underlying signed distance field from combined global and local features, achieves state-of-the-art single-view reconstruction performance.

D-NeRF: Neural Radiance Fields for Dynamic Scenes

D-NeRF is introduced, a method that extends neural radiance fields to the dynamic domain, allowing reconstruction and novel-view rendering of objects under rigid and non-rigid motion from a single camera moving around the scene.

NeRD: Neural Reflectance Decomposition from Image Collections

A neural reflectance decomposition (NeRD) technique is presented that uses physically-based rendering to decompose the scene into spatially varying BRDF material properties, enabling fast real-time rendering under novel illumination.
...