Corpus ID: 244488448

Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields

Authors: Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan, Peter Hedman
Though neural radiance fields (NeRF) have demonstrated impressive view synthesis results on objects and small bounded regions of space, they struggle on “unbounded” scenes, where the camera may point in any direction and content may exist at any distance. In this setting, existing NeRF-like models often produce blurry or low-resolution renderings (due to the unbalanced detail and scale of nearby and distant objects), are slow to train, and may exhibit artifacts due to the inherent ambiguity of… 


Baking Neural Radiance Fields for Real-Time View Synthesis
This work introduces a method to train a NeRF, then precompute and store it in a novel representation called a Sparse Neural Radiance Grid (SNeRG) that enables real-time rendering on commodity hardware.
PlenOctrees for Real-time Rendering of Neural Radiance Fields
It is shown that NeRFs can be trained to predict a spherical harmonic representation of radiance, removing the viewing direction as an input to the neural network; the resulting PlenOctrees can be directly optimized to further minimize the reconstruction loss, yielding equal or better quality than competing methods.
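As a rough illustration of the idea (not the PlenOctrees implementation itself), view-dependent radiance stored as real spherical harmonic coefficients can be evaluated for a given unit view direction without running a network. The sketch below uses the standard degree-1 real SH basis; the function name and array layout are assumptions:

```python
import numpy as np

# Standard real spherical harmonic constants (degrees 0 and 1)
SH_C0 = 0.28209479177387814   # Y_0^0
SH_C1 = 0.4886025119029199    # scale for Y_1^{-1}, Y_1^0, Y_1^1

def sh_eval_deg1(coeffs, d):
    """Evaluate RGB radiance from degree-1 SH coefficients.

    coeffs: (4, 3) array, one row of SH coefficients per basis function,
            one column per color channel
    d:      (3,) unit view direction (x, y, z)
    """
    x, y, z = d
    # Real SH basis values for this direction, in the usual ordering
    basis = np.array([SH_C0, -SH_C1 * y, SH_C1 * z, -SH_C1 * x])
    return basis @ coeffs  # (3,) RGB radiance seen from direction d
```

With only the degree-0 coefficient set, the output is the same for every direction, which is the view-independent special case.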
KiloNeRF: Speeding up Neural Radiance Fields with Thousands of Tiny MLPs
It is demonstrated that real-time rendering is possible by using thousands of tiny MLPs instead of a single large MLP; with teacher-student distillation for training, this speed-up is achieved without sacrificing visual quality.
UNISURF: Unifying Neural Implicit Surfaces and Radiance Fields for Multi-View Reconstruction
This work shows that implicit surface models and radiance fields can be formulated in a unified way, enabling both surface and volume rendering using the same model, and outperforms NeRF in terms of reconstruction quality while performing on par with IDR without requiring masks.
MatryODShka: Real-time 6DoF Video View Synthesis using Multi-Sphere Images
This work introduces a method to convert stereo 360° (omnidirectional stereo) imagery into a layered, multi-sphere image representation for six degree-of-freedom (6DoF) rendering, which can be rendered with correct 6DoF disparity and motion parallax in VR.
TermiNeRF: Ray Termination Prediction for Efficient Neural Rendering
This paper presents a method that can render, train, and fine-tune a volumetrically rendered neural field model an order of magnitude faster than standard approaches; the method works with general volumes and can be trained end-to-end.
Deep Multi Depth Panoramas for View Synthesis
A novel scene representation, the Multi Depth Panorama (MDP), consists of multiple RGBD panoramas that capture both scene geometry and appearance; MDPs are more compact than previous 3D scene representations and enable high-quality, efficient novel view rendering.
Point-Based Neural Rendering with Per-View Optimization
A general approach is introduced that is initialized with MVS but allows further optimization of scene properties in the space of input views, including depth and reprojected features, resulting in improved novel-view synthesis.
NeRF in detail: Learning to sample for view synthesis
Neural radiance field methods have demonstrated impressive novel view synthesis performance by querying a neural network at points sampled along each ray to obtain density and colour, then integrating this information using the rendering equation.
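The quadrature described above, compositing sampled densities and colours along a ray with the rendering equation, can be sketched as follows. This is a minimal NumPy version of the standard NeRF formulation; function and variable names are illustrative:

```python
import numpy as np

def volume_render(sigmas, colors, t_vals):
    """Numerical quadrature of the volume rendering equation used by NeRF.

    sigmas: (N,) densities at samples along the ray
    colors: (N, 3) RGB values at those samples
    t_vals: (N,) distances of the samples along the ray
    """
    deltas = np.diff(t_vals)                 # spacing between adjacent samples
    deltas = np.append(deltas, 1e10)         # treat the last interval as unbounded
    alphas = 1.0 - np.exp(-sigmas * deltas)  # opacity contributed by each segment
    # Transmittance: probability the ray reaches sample i unoccluded
    trans = np.cumprod(np.append(1.0, 1.0 - alphas[:-1]))
    weights = alphas * trans                 # per-sample contribution to the pixel
    rgb = (weights[:, None] * colors).sum(axis=0)  # expected colour along the ray
    return rgb, weights
```

The per-sample weights are exactly what sampling-focused methods like the one above aim to predict or concentrate, since most weights along a typical ray are near zero.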
Deep blending for free-viewpoint image-based rendering
This work presents a new deep learning approach to blending for IBR in which held-out real image data is used to learn blending weights that combine input photo contributions; the network architecture and training loss are designed to provide high-quality novel view synthesis while reducing temporal flickering artifacts.