Corpus ID: 246706356

Block-NeRF: Scalable Large Scene Neural View Synthesis

@article{Tancik2022BlockNeRFSL,
  title={Block-NeRF: Scalable Large Scene Neural View Synthesis},
  author={Matthew Tancik and Vincent Casser and Xinchen Yan and Sabeek Pradhan and Ben Mildenhall and Pratul P. Srinivasan and Jonathan T. Barron and Henrik Kretzschmar},
  journal={ArXiv},
  year={2022},
  volume={abs/2202.05263}
}
We present Block-NeRF, a variant of Neural Radiance Fields that can represent large-scale environments. Specifically, we demonstrate that when scaling NeRF to render city-scale scenes spanning multiple blocks, it is vital to decompose the scene into individually trained NeRFs. This decomposition decouples rendering time from scene size, enables rendering to scale to arbitrarily large environments, and allows per-block updates of the environment. We adopt several architectural changes to make… 
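The decomposition idea above can be sketched at inference time: pick the blocks whose training region covers the camera, render each one, and blend the results. The following is a minimal illustrative sketch, not the paper's implementation; the block radius, the inverse-distance weighting exponent, and the function names are all assumptions chosen for clarity.

```python
import numpy as np

def select_blocks(cam_pos, block_centers, radius):
    """Return indices of blocks whose (hypothetical) coverage
    region contains the camera position."""
    dists = np.linalg.norm(block_centers - cam_pos, axis=1)
    return np.flatnonzero(dists < radius)

def composite_renders(renders, cam_pos, block_centers, idx, power=4):
    """Blend per-block renderings with inverse-distance weights,
    so nearer blocks dominate the final image."""
    dists = np.linalg.norm(block_centers[idx] - cam_pos, axis=1)
    weights = 1.0 / np.maximum(dists, 1e-6) ** power
    weights /= weights.sum()  # normalize to a convex combination
    stacked = np.stack([renders[i] for i in idx])  # (k, H, W, 3)
    return np.tensordot(weights, stacked, axes=1)  # (H, W, 3)
```

Because only the blocks near the camera are evaluated, render cost stays constant as the city grows, and retraining one block leaves the rest of the scene untouched.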
Mega-NeRF: Scalable Construction of Large-Scale NeRFs for Virtual Fly-Throughs
TLDR
A simple geometric clustering algorithm is introduced that achieves a 40× speedup over conventional NeRF rendering while remaining within 0.5 dB in PSNR quality, exceeding the fidelity of existing fast renderers.
Decomposing NeRF for Editing via Feature Field Distillation
TLDR
This work tackles the problem of semantic scene decomposition of NeRFs to enable query-based local editing of the represented 3D scenes, and distill the knowledge of off-the-shelf, self-supervised 2D image feature extractors into a 3D feature field optimized in parallel to the radiance field.
Gaussian Activated Neural Radiance Fields for High Fidelity Reconstruction & Pose Estimation
TLDR
Gaussian Activated neural Radiance Fields (GARF) is presented as a new positional embedding-free neural radiance field architecture – employing Gaussian activations – that outperforms the current state-of-the-art in terms of high fidelity reconstruction and pose estimation.
Rotation-Equivariant Conditional Spherical Neural Fields for Learning a Natural Illumination Prior
TLDR
This work proposes a conditional neural representation based on a variational auto-decoder with a SIREN network and, extending Vector Neurons, builds equivariance directly into the network and develops a rotation-equivariant, high dynamic range (HDR) neural illumination model that is compact and able to express complex, high-frequency features of natural environment maps.
Simple and Effective Synthesis of Indoor 3D Scenes
TLDR
This work proposes a simple alternative: an image-to-image GAN that maps directly from reprojections of incomplete point clouds to full high-resolution RGB-D images, and shows that this model is useful for generative data augmentation.
ADOP: Approximate Differentiable One-Pixel Point Rendering
TLDR
ADOP, a novel point-based, differentiable neural rendering pipeline, which contains a fully differentiable physically-based photometric camera model, can smoothly handle input images with varying exposure and white balance, and generates high-dynamic range output.
Enhancement of Novel View Synthesis Using Omnidirectional Image Completion
TLDR
Experiments indicate that the proposed method to train NeRF while dynamically selecting a sparse set of completed images can synthesize plausible novel views while preserving the features of the scene for both artificial and real-world data.
DeVRF: Fast Deformable Voxel Radiance Fields for Dynamic Scenes
TLDR
Experiments demonstrate that DeVRF achieves a two-orders-of-magnitude speedup (100× faster) with on-par high-fidelity results compared to the previous state-of-the-art approaches.
Remote Sensing Novel View Synthesis with Implicit Multiplane Representations
TLDR
This paper proposes a novel remote sensing view synthesis method by leveraging recent advances in implicit neural representations, known as Implicit Multiplane Images (ImMPI), and proposes a new dataset for remote sensing novel view synthesis with multi-view real-world Google Earth images.
...

References

Showing 1–10 of 82 references
NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections
TLDR
A learning-based method for synthesizing novel views of complex scenes using only unstructured collections of in-the-wild photographs, and applies it to internet photo collections of famous landmarks, to demonstrate temporally consistent novel view renderings that are significantly closer to photorealism than the prior state of the art.
NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis
TLDR
This work describes how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrates results that outperform prior work on neural rendering and view synthesis.
Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields
TLDR
This work presents an extension of mip-NeRF (a NeRF variant that addresses sampling and aliasing) that uses a non-linear scene parameterization, online distillation, and a novel distortion-based regularizer to overcome the challenges presented by unbounded scenes.
Baking Neural Radiance Fields for Real-Time View Synthesis
TLDR
A method to train a NeRF, then precompute and store it as a novel representation called a Sparse Neural Radiance Grid (SNeRG) that enables real-time rendering on commodity hardware and retains NeRF’s ability to render fine geometric details and view-dependent appearance.
KiloNeRF: Speeding up Neural Radiance Fields with Thousands of Tiny MLPs
TLDR
It is demonstrated that real-time rendering is possible by utilizing thousands of tiny MLPs instead of one single large MLP, and using teacher-student distillation for training, this speed-up can be achieved without sacrificing visual quality.
Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields
TLDR
By efficiently rendering anti-aliased conical frustums instead of rays, mip-NeRF reduces objectionable aliasing artifacts and significantly improves NeRF’s ability to represent fine details, while also being 7% faster than NeRF and half the size.
Learning Object-Compositional Neural Radiance Field for Editable Scene Rendering
TLDR
A novel neural scene rendering system, which learns an object-compositional neural radiance field and produces realistic rendering with editing capability for a clustered and real-world scene.
NeRF++: Analyzing and Improving Neural Radiance Fields
TLDR
A parametrization issue involved in applying NeRF to 360 captures of objects within large-scale, unbounded 3D scenes is addressed, and the method improves view synthesis fidelity in this challenging scenario.
DeRF: Decomposed Radiance Fields
TLDR
This paper proposes to spatially decompose a scene and dedicate smaller networks for each decomposed part, and shows that a Voronoi spatial decomposition is preferable for this purpose, as it is provably compatible with the Painter’s Algorithm for efficient and GPU-friendly rendering.
BARF: Bundle-Adjusting Neural Radiance Fields
TLDR
Bundle-Adjusting Neural Radiance Fields (BARF) is proposed for training NeRF from imperfect (or even unknown) camera poses — the joint problem of learning neural 3D representations and registering camera frames and it is shown that coarse-to-fine registration is also applicable to NeRF.
...