Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields

@article{Barron2021MipNeRFAM,
  title={Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields},
  author={Jonathan T. Barron and Ben Mildenhall and Matthew Tancik and Peter Hedman and Ricardo Martin-Brualla and Pratul P. Srinivasan},
  journal={2021 IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2021},
  pages={5835-5844}
}
The rendering procedure used by neural radiance fields (NeRF) samples a scene with a single ray per pixel and may therefore produce renderings that are excessively blurred or aliased when training or testing images observe scene content at different resolutions. The straightforward solution of supersampling by rendering with multiple rays per pixel is impractical for NeRF, because rendering each ray requires querying a multilayer perceptron hundreds of times. Our solution, which we call "mip… 
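As a rough illustration of the idea, the sketch below shows a mip-NeRF-style integrated positional encoding (IPE) of a conical frustum that has already been approximated by a Gaussian with a per-coordinate mean and diagonal variance. The function name, arguments, and frequency range are illustrative assumptions, not the authors' reference implementation.

import jax.numpy as jnp

def integrated_pos_enc(mean, var, min_deg=0, max_deg=16):
    # Sketch of a mip-NeRF-style integrated positional encoding (IPE).
    # mean, var: mean and diagonal variance of the Gaussian approximating a
    # conical frustum, both of shape [..., 3].
    scales = 2.0 ** jnp.arange(min_deg, max_deg)                      # [L]
    scaled_mean = jnp.reshape(mean[..., None, :] * scales[:, None],
                              mean.shape[:-1] + (-1,))                # [..., 3L]
    scaled_var = jnp.reshape(var[..., None, :] * scales[:, None] ** 2,
                             var.shape[:-1] + (-1,))                  # [..., 3L]
    # E[sin x] = sin(mu) * exp(-sigma^2 / 2) for x ~ N(mu, sigma^2); likewise for cos.
    damping = jnp.exp(-0.5 * scaled_var)
    return jnp.concatenate([jnp.sin(scaled_mean) * damping,
                            jnp.cos(scaled_mean) * damping], axis=-1)

Because the exponential damping grows with both frequency and frustum size, features at frequencies the frustum cannot resolve are smoothly attenuated, which is what lets a single encoded sample per pixel stand in for the many rays that supersampling would require.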
Mip-NeRF RGB-D: Depth Assisted Fast Neural Radiance Fields
TLDR
The recently proposed Mip-NeRF approach, which uses conical frustums instead of rays for volume rendering, makes it possible to address major limitations of NeRF-based approaches, including more accurate geometry, fewer artifacts, faster training, and shorter prediction times.
TAVA: Template-free Animatable Volumetric Actors
Coordinate-based volumetric representations have the potential to generate photo-realistic virtual avatars from images. However, virtual avatars also need to be controllable even to a novel pose that
DANBO: Disentangled Articulated Neural Body Representations via Graph Neural Networks
TLDR
This work introduces a three-stage method that induces two inductive biases to better disentangle pose-dependent deformation, and strikes a better trade-off between model capacity, expressiveness, and robustness than competing methods.
DDNeRF: Depth Distribution Neural Radiance Fields
TLDR
This work presents depth distribution neural radiance field (DDNeRF), a new method that significantly increases sampling efficiency along rays during training while achieving superior results for a given sampling budget by learning a more accurate representation of the density distribution along rays.
NeRFocus: Neural Radiance Field for 3D Synthetic Defocus
TLDR
A novel thin-lens-imaging-based NeRF framework that can directly render various 3D defocus effects, dubbed NeRFocus, is proposed, and an efficient probabilistic training (p-training) strategy is designed to greatly simplify the training process.
ERF: Explicit Radiance Field Reconstruction From Scratch
TLDR
A novel explicit dense 3D reconstruction approach that processes a set of images of a scene, together with sensor poses and calibrations, and estimates a photo-real digital model, reconstructing models of high quality that are comparable to those of state-of-the-art implicit methods.
Enhancing Multi-Scale Implicit Learning in Image Super-Resolution with Integrated Positional Encoding
TLDR
This work proposes integrated positional encoding (IPE), which extends traditional positional encoding by aggregating frequency information over the pixel area, and applies IPE to the state-of-the-art arbitrary-scale image super-resolution method, the local implicit image function (LIIF), yielding IPE-LIIF.
CityNeRF: Building NeRF at City Scale
TLDR
This work makes the first attempt to bring NeRF to city scale, with views ranging from satellite-level imagery that captures the overview of a city to ground-level imagery showing the complex details of individual architecture.
Ref-NeRF: Structured View-Dependent Appearance for Neural Radiance Fields
TLDR
Ref-NeRF is introduced, which replaces NeRF’s parameterization of view-dependent outgoing radiance with a representation of reflected radiance and structures this function using a collection of spatially-varying scene properties; together with a regularizer on normal vectors, this model significantly improves the realism and accuracy of specular reflections.
RegNeRF: Regularizing Neural Radiance Fields for View Synthesis from Sparse Inputs
TLDR
This work observes that the majority of artifacts in sparse input scenarios are caused by errors in the estimated scene geometry, and by divergent behavior at the start of training, and addresses this by regularizing the geometry and appearance of patches rendered from unobserved viewpoints, and annealing the ray sampling space during training.

References

Scene Representation Networks: Continuous 3D-Structure-Aware Neural Scene Representations
TLDR
The proposed Scene Representation Networks (SRNs), a continuous, 3D-structure-aware scene representation that encodes both geometry and appearance, are demonstrated by evaluating them for novel view synthesis, few-shot reconstruction, joint shape and appearance interpolation, and unsupervised discovery of a non-rigid face model.
JaxNeRF: an efficient JAX implementation of NeRF, 2020. http://github.com/google-research/google-research/tree/master/jaxnerf
NeuMIP: Multi-Resolution Neural Materials
TLDR
The neural representation is trained using hundreds of reflectance queries per texel, across multiple resolutions, and is independent of the underlying input, which could be based on displaced geometry, fiber geometry, measured data, or others.
Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes
TLDR
An efficient neural representation is introduced that enables real-time rendering of high-fidelity neural SDFs, while achieving state-of-the-art geometry reconstruction quality, and is 2–3 orders of magnitude more efficient in terms of rendering speed.
NeRD: Neural Reflectance Decomposition from Image Collections
TLDR
A neural reflectance decomposition (NeRD) technique that uses physically-based rendering to decompose the scene into spatially varying BRDF material properties, enabling fast real-time rendering with novel illuminations.
NeRV: Neural Reflectance and Visibility Fields for Relighting and View Synthesis
We present a method that takes as input a set of images of a scene illuminated by unconstrained known lighting, and produces as output a 3D representation that can be rendered from novel viewpoints
Dynamic Neural Radiance Fields for Monocular 4D Facial Avatar Reconstruction
TLDR
This work combines a scene representation network with a low-dimensional morphable model which provides explicit control over pose and expressions and shows that this learned volumetric representation allows for photorealistic image generation that surpasses the quality of state-of-the-art video-based reenactment methods.
Learned Initializations for Optimizing Coordinate-Based Neural Representations
TLDR
This work proposes applying standard meta-learning algorithms to learn the initial weight parameters for fully-connected coordinate-based neural representations based on the underlying class of signals being represented, enabling faster convergence during optimization and better generalization when only partial observations of a given signal are available.
pi-GAN: Periodic Implicit Generative Adversarial Networks for 3D-Aware Image Synthesis
TLDR
This work proposes a novel generative model, named Periodic Implicit Generative Adversarial Networks (π-GAN or pi-GAN), for high-quality 3D-aware image synthesis that leverages neural representations with periodic activation functions and volumetric rendering to represent scenes as view-consistent 3D representations with fine detail.
Neural Scene Flow Fields for Space-Time View Synthesis of Dynamic Scenes
TLDR
This work presents a method for novel view and time synthesis of dynamic scenes that requires only a monocular video with known camera poses as input, and introduces a new representation that models the dynamic scene as a time-variant continuous function of appearance, geometry, and 3D scene motion.