SinNeRF: Training Neural Radiance Fields on Complex Scenes from a Single Image

@article{Xu2022SinNeRFTN,
  title={SinNeRF: Training Neural Radiance Fields on Complex Scenes from a Single Image},
  author={Dejia Xu and Yifan Jiang and Peihao Wang and Zhiwen Fan and Humphrey Shi and Zhangyang Wang},
  journal={ArXiv},
  year={2022},
  volume={abs/2204.00928}
}
Despite the rapid development of Neural Radiance Field (NeRF), the necessity of dense coverage largely prohibits its wider applications. While several recent works have attempted to address this issue, they either operate with sparse views (yet still, a few of them) or on simple objects/scenes. In this work, we consider a more ambitious task: training a neural radiance field over realistically complex visual scenes by “looking only once”, i.e., using only a single view. To attain this goal, we…


References

Showing 1–10 of 50 references
pixelNeRF: Neural Radiance Fields from One or Few Images
We propose pixelNeRF, a learning framework that predicts a continuous neural scene representation conditioned on one or few input images. The existing approach for constructing neural radiance fields involves optimizing the representation to every scene independently, requiring many calibrated views and significant compute time.
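The conditioning mechanism is the key idea: each 3D query point is projected into the input view, and the CNN feature sampled at that pixel is fed to the NeRF MLP alongside the encoded point. Below is a minimal sketch of that projection-and-sampling step, assuming a pinhole camera model and a hypothetical feature map from an image encoder; function and variable names are illustrative, not pixelNeRF's actual API.

import torch
import torch.nn.functional as F

def sample_image_features(feat_map, x_world, K, w2c):
    # feat_map: (1, C, H, W) features from a 2D image encoder (e.g., a ResNet).
    # x_world:  (N, 3) query points in world coordinates.
    # K: (3, 3) intrinsics; w2c: (3, 4) world-to-camera extrinsics.
    x_cam = x_world @ w2c[:, :3].T + w2c[:, 3]            # rotate + translate: (N, 3)
    uv = x_cam @ K.T                                      # pinhole projection: (N, 3)
    uv = uv[:, :2] / uv[:, 2:3].clamp(min=1e-6)           # pixel coordinates
    H, W = feat_map.shape[-2:]
    grid = torch.stack([2 * uv[:, 0] / (W - 1) - 1,       # normalize to [-1, 1]
                        2 * uv[:, 1] / (H - 1) - 1], dim=-1).view(1, -1, 1, 2)
    feats = F.grid_sample(feat_map, grid, align_corners=True)  # (1, C, N, 1)
    return feats[0, :, :, 0].T                            # (N, C), one feature per point

In pixelNeRF, these per-point image features condition the density/color MLP, which is what lets a single feed-forward network generalize across scenes instead of being optimized per scene.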
RegNeRF: Regularizing Neural Radiance Fields for View Synthesis from Sparse Inputs
This work additionally uses a normalizing flow model to regularize the color of unobserved viewpoints in NeRF, and outperforms not only other methods that optimize over a single scene, but in many cases also conditional models that are extensively pre-trained on large multi-view datasets.
NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis
This work describes how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrates results that outperform prior work on neural rendering and view synthesis.
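For reference, the quantity NeRF optimizes is the volume-rendered color of each camera ray. The continuous integral and the quadrature estimate actually used during training (notation as in the NeRF paper) are:

C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\, \sigma(\mathbf{r}(t))\, \mathbf{c}(\mathbf{r}(t), \mathbf{d})\, dt,
\qquad T(t) = \exp\!\left(-\int_{t_n}^{t} \sigma(\mathbf{r}(s))\, ds\right)

\hat{C}(\mathbf{r}) = \sum_{i=1}^{N} T_i \left(1 - e^{-\sigma_i \delta_i}\right) \mathbf{c}_i,
\qquad T_i = \exp\!\left(-\sum_{j=1}^{i-1} \sigma_j \delta_j\right),
\quad \delta_i = t_{i+1} - t_i

The per-sample weights w_i = T_i (1 - e^{-\sigma_i \delta_i}) from this quadrature are what several of the follow-up works below (e.g., DS-NeRF) attach additional supervision to.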
Putting NeRF on a Diet: Semantically Consistent Few-Shot View Synthesis
DietNeRF improves the perceptual quality of few-shot view synthesis when learned from scratch, can render novel views with as few as one observed image when pre-trained on a multi-view dataset, and produces plausible completions of completely unobserved regions.
MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo
This work proposes a generic deep neural network that can reconstruct radiance fields from only three nearby input views via fast network inference, and leverages plane-swept cost volumes for geometry-aware scene reasoning, and combines this with physically based volume rendering for neural radiance field reconstruction.
Depth-supervised NeRF: Fewer Views and Faster Training for Free
This work formalizes the above assumption through DS-NeRF (Depth-supervised Neural Radiance Fields), a loss for learning radiance that takes advantage of readily-available depth supervision and can render better images given fewer training views while training 2-3x faster.
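DS-NeRF's published loss pulls each ray's termination distribution toward the depth of its SfM keypoint (a KL-style term over the ray weights). The sketch below is a simplified variant that instead penalizes the squared error of the expected ray depth; names and the exact form are illustrative assumptions, not DS-NeRF's code.

import torch

def depth_loss(weights, t_vals, sparse_depth, mask):
    # weights:      (R, S) volume-rendering weights w_i along each of R rays.
    # t_vals:       (R, S) sample depths t_i along each ray.
    # sparse_depth: (R,) depth at SfM keypoints; mask: (R,) True where available.
    rendered_depth = (weights * t_vals).sum(dim=-1)   # expected termination depth
    return ((rendered_depth - sparse_depth)[mask] ** 2).mean()

Because the rendering weights of a ray already sum to (approximately) one, the weighted sum of sample depths is the ray's expected termination depth, so this regularizer reuses quantities the renderer computes anyway.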
GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis
This paper proposes a generative model for radiance fields which have recently proven successful for novel view synthesis of a single scene, and introduces a multi-scale patch-based discriminator to demonstrate synthesis of high-resolution images while training the model from unposed 2D images alone.
KiloNeRF: Speeding up Neural Radiance Fields with Thousands of Tiny MLPs
It is demonstrated that real-time rendering is possible by utilizing thousands of tiny MLPs instead of one single large MLP, and using teacher-student distillation for training, this speed-up can be achieved without sacrificing visual quality.
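The core trick is purely spatial: the scene's bounding box is split into a uniform grid, each cell gets its own tiny MLP, and evaluating a sample reduces to an index lookup. A minimal sketch of that routing step follows, assuming an axis-aligned box; the grid layout and names are assumptions, not KiloNeRF's implementation.

import torch

def mlp_index(points, box_min, box_max, res):
    # points: (N, 3) samples inside the scene bounding box [box_min, box_max].
    # res: grid cells per axis, so res**3 tiny MLPs in total.
    u = (points - box_min) / (box_max - box_min)       # normalize to [0, 1]
    cell = (u * res).long().clamp(0, res - 1)          # integer cell coords: (N, 3)
    return cell[:, 0] * res * res + cell[:, 1] * res + cell[:, 2]  # flat MLP index

At render time, samples are grouped by this index so each tiny MLP processes its batch in one pass; that batched dispatch, rather than any change to the radiance field itself, is where the speed-up over a single large MLP comes from.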
GRF: Learning a General Radiance Field for 3D Representation and Rendering
A simple yet powerful neural network that implicitly represents and renders 3D objects and scenes only from 2D observations, which can generate high-quality and realistic novel views for novel objects, unseen categories and challenging real-world scenes.
Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields
By efficiently rendering anti-aliased conical frustums instead of rays, mip-NeRF reduces objectionable aliasing artifacts and significantly improves NeRF’s ability to represent fine details, while also being 7% faster than NeRF and half the size.
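The anti-aliasing mechanism is the integrated positional encoding: each conical frustum is approximated by a Gaussian with mean \mu and covariance \Sigma, and the encoding is the expected sinusoid under that Gaussian, which attenuates frequencies whose period is small relative to the frustum's footprint:

\gamma(\boldsymbol{\mu}, \boldsymbol{\Sigma}) =
\left\{
\sin\!\left(2^{\ell} \boldsymbol{\mu}\right) \circ \exp\!\left(-2^{2\ell - 1}\, \operatorname{diag}(\boldsymbol{\Sigma})\right),\;
\cos\!\left(2^{\ell} \boldsymbol{\mu}\right) \circ \exp\!\left(-2^{2\ell - 1}\, \operatorname{diag}(\boldsymbol{\Sigma})\right)
\right\}_{\ell=0}^{L-1}

The damping factor follows from E[\sin(z)] = \sin(\mu)\, e^{-\sigma^2/2} for z \sim \mathcal{N}(\mu, \sigma^2): large-footprint frustums have large variance at high frequencies, so those terms are smoothly suppressed instead of aliasing.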
…