Corpus ID: 244921004

Dense Depth Priors for Neural Radiance Fields from Sparse Input Views

@article{Roessle2021DenseDP,
  title={Dense Depth Priors for Neural Radiance Fields from Sparse Input Views},
  author={Barbara Roessle and Jonathan T. Barron and Ben Mildenhall and Pratul P. Srinivasan and Matthias Nie{\ss}ner},
  journal={ArXiv},
  year={2021},
  volume={abs/2112.03288}
}
Neural radiance fields (NeRF) encode a scene into a neural representation that enables photo-realistic rendering of novel views. However, a successful reconstruction from RGB images requires a large number of input views taken under static conditions — typically up to a few hundred images for room-size scenes. Our method aims to synthesize novel views of whole rooms from an order of magnitude fewer images. To this end, we leverage dense depth priors in order to constrain the NeRF optimization… 
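
The core idea lends itself to a compact illustration: alongside NeRF's photometric loss, the rendered ray termination depth is pulled toward a dense depth prior, down-weighted where the prior is uncertain. The following is a minimal PyTorch sketch under assumed tensor shapes, not the authors' implementation; the paper additionally obtains the prior and its uncertainty from a depth completion network and uses depth to guide sampling.

import torch

def nerf_loss(pred_rgb, gt_rgb, pred_depth, prior_depth, prior_std, lambda_depth=0.1):
    # Standard NeRF photometric term: MSE between rendered and ground-truth colors.
    color_loss = torch.mean((pred_rgb - gt_rgb) ** 2)
    # Depth term: rays whose prior is uncertain (large std) contribute less.
    depth_loss = torch.mean(((pred_depth - prior_depth) / prior_std) ** 2)
    return color_loss + lambda_depth * depth_loss

Here pred_depth is the expected termination depth from volume rendering, and lambda_depth is a hypothetical balancing weight.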

Citations

RC-MVSNet: Unsupervised Multi-View Stereo with Neural Rendering
TLDR
This work proposes RC-MVSNet, a neural-rendering-based approach that resolves correspondence ambiguities among views in unsupervised multi-view stereo by imposing a depth rendering consistency loss, which constrains geometry features close to the object surface and alleviates occlusions.
Sat-NeRF: Learning Multi-View Satellite Photogrammetry With Transient Objects and Shadow Modeling Using RPC Cameras
TLDR
The Satellite Neural Radiance Field (Sat-NeRF), a new end-to-end model for learning multi-view satellite photogrammetry in the wild, is introduced and the advantages of applying a bundle adjustment to the satellite camera models prior to training are stressed.
Advances in neural rendering
TLDR
This state-of-the-art report surveys recent advances in neural rendering, covering methods that combine classical computer graphics pipelines with learned components and neural scene representations to synthesize photo-realistic views.
NeRFReN: Neural Radiance Fields with Reflections
TLDR
This work proposes to split a scene into transmitted and reflected components, and model the two components with separate neural radiance fields, and proposes to exploit geometric priors and apply carefully-designed training strategies to achieve reasonable decomposition results.
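As a rough illustration of the decomposition, the two fields are rendered separately and blended with a per-ray reflection fraction. A minimal sketch with hypothetical names and shapes (NeRFReN additionally derives the reflection fraction from the transmitted field and relies on the geometric priors mentioned above):

import torch

def compose_reflection(rgb_trans, rgb_refl, beta):
    # rgb_trans: (N, 3) color rendered from the transmitted radiance field.
    # rgb_refl:  (N, 3) color rendered from the reflected radiance field.
    # beta:      (N, 1) per-ray reflection fraction in [0, 1].
    return rgb_trans + beta * rgb_refl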

References

SHOWING 1-10 OF 31 REFERENCES
Structure-from-Motion Revisited
TLDR
This work proposes a new SfM technique that improves upon the state of the art to make a further step towards building a truly general-purpose pipeline.
Depth-supervised NeRF: Fewer Views and Faster Training for Free
TLDR
This work formalizes the above assumption through DS-NeRF (Depth-supervised Neural Radiance Fields), a loss for learning radiance that takes advantage of readily-available depth supervision and can render better images given fewer training views while training 2-3x faster.
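The gist of depth supervision from SfM can be sketched compactly: penalize the rendered ray depth only at pixels where a triangulated keypoint exists. This simplified L2 version is an illustration under assumed shapes; DS-NeRF's actual loss operates on the ray termination distribution rather than a single expected depth.

import torch

def sparse_depth_loss(pred_depth, sfm_depth, keypoint_mask):
    # pred_depth:    (N,) expected termination depth of each ray.
    # sfm_depth:     (N,) depth of the triangulated SfM keypoint (unused where absent).
    # keypoint_mask: (N,) bool, True where an SfM depth estimate exists.
    if keypoint_mask.sum() == 0:
        return pred_depth.new_zeros(())
    err = pred_depth[keypoint_mask] - sfm_depth[keypoint_mask]
    return torch.mean(err ** 2)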
NerfingMVS: Guided Optimization of Neural Radiance Fields for Indoor Multi-view Stereo
TLDR
A new multi-view depth estimation method that utilizes both conventional SfM reconstruction and learning-based priors over the recently proposed neural radiance fields (NeRF), with surprising findings presented on the effectiveness of correspondence-based optimization and NeRF-based optimization over the adapted depth priors.
Deep Residual Learning for Image Recognition
TLDR
This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
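The mechanism is simple to state in code: each block learns a residual F(x) and outputs F(x) + x, so the identity mapping is the easy default. A minimal PyTorch sketch of a basic block (stride-1, equal channels; the paper also uses projection shortcuts and bottleneck variants):

import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # identity shortcut: gradients flow through unchanged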
NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis
TLDR
This work describes how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrates results that outperform prior work on neural rendering and view synthesis.
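The rendering step at the heart of NeRF is a numerical quadrature along each ray: densities become per-sample opacities, transmittance accumulates, and colors are averaged by the resulting weights. A minimal PyTorch sketch under assumed shapes (positional encoding, hierarchical sampling, and the MLP itself are omitted):

import torch

def volume_render(sigmas, rgbs, deltas):
    # sigmas: (N, S) densities at S samples per ray; rgbs: (N, S, 3); deltas: (N, S).
    alphas = 1.0 - torch.exp(-sigmas * deltas)            # per-sample opacity
    trans = torch.cumprod(1.0 - alphas + 1e-10, dim=-1)   # accumulated transparency
    trans = torch.cat([torch.ones_like(trans[:, :1]), trans[:, :-1]], dim=-1)
    weights = alphas * trans                              # contribution of each sample
    color = (weights.unsqueeze(-1) * rgbs).sum(dim=-2)    # (N, 3)
    return color, weights

The same weights yield an expected ray depth, (weights * sample_depths).sum(-1), which is the quantity that the depth-supervised variants above constrain.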
NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections
TLDR
A learning-based method for synthesizing novel views of complex scenes using only unstructured collections of in-the-wild photographs, and applies it to internet photo collections of famous landmarks, to demonstrate temporally consistent novel view renderings that are significantly closer to photorealism than the prior state of the art.
Free View Synthesis
TLDR
This work presents a method for novel view synthesis from input images that are freely distributed around a scene; it can synthesize images for free camera movement through the scene and works for general scenes with unconstrained geometric layouts.
Learning Depth with Convolutional Spatial Propagation Network
TLDR
This paper applies the convolutional spatial propagation network (CSPN) to two depth estimation problems, depth completion and stereo matching, and designs modules that adapt the original 2D CSPN to embed sparse depth samples during propagation, operate with 3D convolutions, and work synergistically with spatial pyramid pooling.
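One propagation step is an affinity-weighted local averaging with the sparse measurements re-imposed afterwards. A simplified PyTorch sketch for the 2D, 3x3 case, with hypothetical tensor layouts (the paper's modules extend this to 3D convolution and spatial pyramid pooling):

import torch
import torch.nn.functional as F

def cspn_step(depth, affinity, sparse_depth, valid_mask):
    # depth: (B, 1, H, W); affinity: (B, 8, H, W) learned weights for the 8 neighbors;
    # sparse_depth: (B, 1, H, W); valid_mask: (B, 1, H, W), 1 where a sample exists.
    abs_sum = affinity.abs().sum(dim=1, keepdim=True).clamp(min=1e-6)
    w = affinity / abs_sum                        # normalized neighbor weights
    center = 1.0 - w.sum(dim=1, keepdim=True)     # weight kept on the pixel itself

    B, _, H, W = depth.shape
    patches = F.unfold(F.pad(depth, (1, 1, 1, 1)), kernel_size=3).view(B, 9, H, W)
    neighbors = torch.cat([patches[:, :4], patches[:, 5:]], dim=1)  # drop the center

    out = center * depth + (w * neighbors).sum(dim=1, keepdim=True)
    # Embedding the sparse samples: trusted measurements are re-imposed every step.
    return valid_mask * sparse_depth + (1.0 - valid_mask) * out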
SG-NN: Sparse Generative Neural Networks for Self-Supervised Scene Completion of RGB-D Scans
TLDR
A novel approach that converts partial and noisy RGB-D scans into high-quality 3D scene reconstructions by inferring unobserved scene geometry; combined with a new sparse generative 3D convolutional neural network architecture, it is able to predict highly detailed surfaces in a coarse-to-fine hierarchical fashion.
DeepView: View Synthesis With Learned Gradient Descent
TLDR
This work presents a novel approach to view synthesis using multiplane images (MPIs) that incorporates occlusion reasoning, improving performance on challenging scene features such as object boundaries, lighting reflections, thin structures, and scenes with high depth complexity.
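Once the planes of a multiplane image are inferred, rendering is plain back-to-front "over" compositing; DeepView's contribution is inferring the planes with learned gradient descent, which this sketch does not attempt. A minimal PyTorch illustration of the compositing step, with hypothetical shapes:

import torch

def composite_mpi(plane_rgbs, plane_alphas):
    # plane_rgbs:   (D, 3, H, W) plane colors ordered from far to near.
    # plane_alphas: (D, 1, H, W) plane opacities in [0, 1].
    out = torch.zeros_like(plane_rgbs[0])
    for rgb, alpha in zip(plane_rgbs, plane_alphas):
        out = rgb * alpha + out * (1.0 - alpha)  # nearer planes occlude farther ones
    return out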