
RegNeRF: Regularizing Neural Radiance Fields for View Synthesis from Sparse Inputs

@article{Niemeyer2021RegNeRFRN,
  title={RegNeRF: Regularizing Neural Radiance Fields for View Synthesis from Sparse Inputs},
  author={Michael Niemeyer and Jonathan T. Barron and Ben Mildenhall and Mehdi S. M. Sajjadi and Andreas Geiger and Noha Radwan},
  journal={ArXiv},
  year={2021},
  volume={abs/2112.00724}
}
Neural Radiance Fields (NeRF) have emerged as a powerful representation for the task of novel view synthesis due to their simplicity and state-of-the-art performance. Though NeRF can produce photorealistic renderings of unseen viewpoints when many input views are available, its performance drops significantly when this number is reduced. We observe that the majority of artifacts in sparse input scenarios are caused by errors in the estimated scene geometry, and by divergent behavior at the…
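In the full paper, such geometry errors are addressed by regularizing the geometry and appearance of small patches rendered from unobserved viewpoints. A minimal NumPy sketch of the depth-smoothness part of that idea follows; the patch size and the assumption that a depth patch has already been rendered from a random unobserved pose are illustrative, not the paper's exact interface:

import numpy as np

def depth_smoothness_loss(depth_patch):
    """RegNeRF-style geometry regularizer: penalize depth differences
    between adjacent pixels of a small patch rendered from a random,
    unobserved camera pose, encouraging piecewise-smooth geometry.

    depth_patch: (S, S) array of expected ray-termination depths.
    """
    d = np.asarray(depth_patch, dtype=float)
    dx = d[:, :-1] - d[:, 1:]   # horizontal neighbor differences
    dy = d[:-1, :] - d[1:, :]   # vertical neighbor differences
    return float(np.sum(dx**2) + np.sum(dy**2))

# Illustrative check: a smooth depth ramp incurs less penalty than a noisy one.
rng = np.random.default_rng(0)
smooth = np.tile(np.linspace(1.0, 2.0, 8), (8, 1))
print(depth_smoothness_loss(smooth), depth_smoothness_loss(smooth + 0.1 * rng.standard_normal((8, 8))))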

Citations

SinNeRF: Training Neural Radiance Fields on Complex Scenes from a Single Image
TLDR
SinNeRF constructs a semi-supervised learning process, in which it introduces and propagates geometry and semantic pseudo labels to guide the progressive training process, and shows that even without pre-training on multi-view datasets, SinNeRF can yield photo-realistic novel-view synthesis results.
Ray Priors through Reprojection: Improving Neural Radiance Fields for Novel View Extrapolation
TLDR
This paper studies the novel view extrapolation setting, in which (1) the training images describe an object well, and (2) there is a notable discrepancy between the training and test viewpoint distributions, and proposes a random ray casting policy that allows unseen views to be trained from seen views.
NeRF, meet differential geometry!
TLDR
This work shows how a direct mathematical formalization of previously proposed NeRF variants aimed at improving performance in challenging conditions can be used to natively encourage the regularity of surfaces (by means of Gaussian and mean curvatures), making it possible, for example, to learn surfaces from a very limited number of views.
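For a surface defined implicitly as the zero level set of a function f, the standard closed forms for these two curvatures in terms of the gradient and Hessian of f are Goldman's formulas (the cited work's exact regularizers may differ):

\[
K \;=\; \frac{\nabla f^{\top}\,\operatorname{adj}(H_f)\,\nabla f}{\lVert \nabla f \rVert^{4}},
\qquad
H \;=\; \frac{\nabla f^{\top} H_f\, \nabla f \;-\; \lVert \nabla f \rVert^{2}\,\operatorname{tr}(H_f)}{2\,\lVert \nabla f \rVert^{3}},
\]

where adj denotes the adjugate of the Hessian H_f; penalizing |K| or |H| on the learned level set encourages smoother surfaces.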
Decomposing NeRF for Editing via Feature Field Distillation
TLDR
This work tackles the problem of semantic scene decomposition of NeRFs to enable query-based local editing of the represented 3D scenes, and distills the knowledge of off-the-shelf, self-supervised 2D image feature extractors into a 3D feature field optimized in parallel to the radiance field.
View Synthesis with Sculpted Neural Points
TLDR
A novel technique called “Sculpted Neural Points (SNP)” is introduced, which improves the robustness to errors and holes in the reconstructed point cloud and closes the gap between point-based and implicit representation-based methods.
MonoSDF: Exploring Monocular Geometric Cues for Neural Implicit Surface Reconstruction
TLDR
It is demonstrated that depth and normal cues, predicted by general-purpose monocular estimators, significantly improve reconstruction quality and optimization time, and that these geometric monocular priors improve performance for both small-scale single-object and large-scale multi-object scenes, independent of the choice of representation.
ARF: Artistic Radiance Fields
TLDR
A novel deferred back-propagation method is proposed to enable optimization of memory-intensive radiance fields using style losses defined on full-resolution rendered images, along with a nearest-neighbor-based loss that is highly effective at capturing style details while maintaining multi-view consistency.
Learning Generalizable Light Field Networks from Few Images
TLDR
This work explores a new strategy for few-shot novel view synthesis based on a neural light field representation that achieves competitive performance on synthetic and real MVS data with respect to state-of-the-art neural-radiance-based methods, while offering 100× faster rendering.
D2NeRF: Self-Supervised Decoupling of Dynamic and Static Objects from a Monocular Video
TLDR
This work introduces Decoupled Dynamic Neural Radiance Field (D²NeRF), a self-supervised approach that takes a monocular video and learns a 3D scene representation which decouples moving objects, including their shadows, from the static background.
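A common way such decompositions render is to sum the static and dynamic densities along each ray and mix the two fields' colors in proportion to their densities; below is a minimal NumPy sketch under that assumption (D²NeRF's full model, e.g. its shadow field, is richer than this):

import numpy as np

def render_decoupled_ray(sig_s, col_s, sig_d, col_d, deltas):
    """Composite one ray from a static field (sig_s, col_s) and a
    dynamic field (sig_d, col_d): densities add, and each sample's
    color is the density-weighted mix of the two fields' colors.

    sig_*: (N,) densities; col_*: (N, 3) colors; deltas: (N,) spacings.
    """
    sigma = sig_s + sig_d                                            # combined density
    mix = (sig_s[:, None] * col_s + sig_d[:, None] * col_d) / (sigma[:, None] + 1e-10)
    alphas = 1.0 - np.exp(-sigma * deltas)                           # per-segment opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas))[:-1])   # transmittance
    return ((trans * alphas)[:, None] * mix).sum(axis=0)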
Controllable 3D Face Synthesis with Conditional Generative Occupancy Fields
TLDR
A new NeRF-based conditional 3D face synthesis framework is proposed, which enables 3D controllability over the generated face images by imposing explicit 3D conditions from 3D face priors and effectively enforces the shape of the generated face to conform to a given 3D Morphable Model (3DMM) mesh.
...

References

Showing 1–10 of 67 references
NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis
TLDR
This work describes how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrates results that outperform prior work on neural rendering and view synthesis.
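At the core of that optimization is a differentiable volume-rendering quadrature: each ray's color is an alpha-composite of the sampled densities and colors. A self-contained NumPy sketch of that step (sample counts and values below are illustrative):

import numpy as np

def composite_ray(sigmas, colors, deltas):
    """NeRF volume-rendering quadrature for one ray.

    sigmas: (N,) densities at the N samples along the ray.
    colors: (N, 3) RGB radiance at those samples.
    deltas: (N,) distances between adjacent samples.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)                          # per-segment opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas))[:-1])   # transmittance T_i
    weights = trans * alphas                                         # per-sample contribution
    return (weights[:, None] * colors).sum(axis=0)

rng = np.random.default_rng(0)
print(composite_ray(rng.uniform(0, 5, 4), rng.uniform(0, 1, (4, 3)), np.full(4, 0.25)))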
Stereo Radiance Fields (SRF): Learning View Synthesis for Sparse Views of Novel Scenes
TLDR
Stereo Radiance Fields (SRF) is introduced, a neural view synthesis approach that is trained end-to-end, generalizes to new scenes, and requires only sparse views at test time; experiments show that SRF learns structure instead of overfitting on a scene, achieving significantly sharper, more detailed results than scene-specific models.
Baking Neural Radiance Fields for Real-Time View Synthesis
TLDR
A method to train a NeRF, then precompute and store it as a novel representation called a Sparse Neural Radiance Grid (SNeRG) that enables real-time rendering on commodity hardware and retains NeRF’s ability to render fine geometric details and view-dependent appearance.
pixelNeRF: Neural Radiance Fields from One or Few Images
We propose pixelNeRF, a learning framework that predicts a continuous neural scene representation conditioned on one or few input images. The existing approach for constructing neural radiance fields…
Unconstrained Scene Generation with Locally Conditioned Radiance Fields
TLDR
Generative Scene Networks is introduced, which learns to decompose scenes into a collection of many local radiance fields that can be rendered from a freely moving camera, and which produces quantitatively higher-quality scene renderings across several different scene datasets.
MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo
TLDR
This work proposes a generic deep neural network that can reconstruct radiance fields from only three nearby input views via fast network inference; it leverages plane-swept cost volumes for geometry-aware scene reasoning and combines these with physically based volume rendering for neural radiance field reconstruction.
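In MVSNet-style pipelines such as this one, the plane-swept cost volume is typically the per-pixel variance of source-view features warped onto the reference view at a set of depth hypotheses; a minimal sketch, assuming the warping step has already been done:

import numpy as np

def variance_cost_volume(warped_feats):
    """Variance-based plane-sweep cost volume (MVSNet/MVSNeRF-style).

    warped_feats: (V, D, C, H, W) features from V source views, warped
    onto the reference view at D depth hypotheses.
    Returns a (D, C, H, W) volume; low variance marks depths where the
    views photometrically agree, i.e. likely surface locations.
    """
    return np.asarray(warped_feats).var(axis=0)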
IBRNet: Learning Multi-View Image-Based Rendering
TLDR
A method that synthesizes novel views of complex scenes by interpolating a sparse set of nearby views using a network architecture that includes a multilayer perceptron and a ray transformer that estimates radiance and volume density at continuous 5D locations.
Neural Rays for Occlusion-aware Image-based Rendering
TLDR
A new neural representation, called Neural Ray (NeuRay), is presented, which achieves state-of-the-art performance on the novel view synthesis task when generalizing to unseen scenes and outperforms per-scene optimization methods after finetuning.
NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections
TLDR
A learning-based method for synthesizing novel views of complex scenes using only unstructured collections of in-the-wild photographs, and applies it to internet photo collections of famous landmarks, to demonstrate temporally consistent novel view renderings that are significantly closer to photorealism than the prior state of the art.
Putting NeRF on a Diet: Semantically Consistent Few-Shot View Synthesis
TLDR
DietNeRF improves the perceptual quality of few-shot view synthesis when learned from scratch, can render novel views with as few as one observed image when pre-trained on a multi-view dataset, and produces plausible completions of completely unobserved regions.
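The semantic prior behind DietNeRF can be sketched as maximizing the cosine similarity between a frozen vision encoder's embeddings (CLIP in the paper) of an observed image and of a rendering from a random pose; a minimal sketch with the encoder left as an assumed black box:

import numpy as np

def semantic_consistency_loss(emb_observed, emb_rendered):
    """DietNeRF-style prior: an object looks like itself from any view,
    so embeddings of real and rendered views should align. Embeddings
    are assumed to come from a frozen encoder such as CLIP."""
    a = emb_observed / np.linalg.norm(emb_observed)
    b = emb_rendered / np.linalg.norm(emb_rendered)
    return -float(a @ b)   # minimized when the embeddings align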
...