Instant neural graphics primitives with a multiresolution hash encoding

@article{Mller2022InstantNG,
  title={Instant neural graphics primitives with a multiresolution hash encoding},
  author={Thomas M{\"u}ller and Alex Evans and Christoph Schied and Alexander Keller},
  journal={ACM Transactions on Graphics (TOG)},
  year={2022},
  volume={41},
  pages={1--15}
}
Neural graphics primitives, parameterized by fully connected neural networks, can be costly to train and evaluate. We reduce this cost with a versatile new input encoding that permits the use of a smaller network without sacrificing quality, thus significantly reducing the number of floating point and memory access operations: a small neural network is augmented by a multiresolution hash table of trainable feature vectors whose values are optimized through stochastic gradient descent. The… 
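As a rough illustration of the encoding described above (not the paper's fused CUDA implementation), the sketch below builds a 2-D multiresolution hash encoding in PyTorch: each resolution level owns a trainable hash table of feature vectors, grid-corner indices are hashed into that table, the corner features are bilinearly interpolated, and the per-level results are concatenated and fed to a small MLP. All hyperparameter names and values (n_levels, table_size, feats_per_level, base_res, growth) are illustrative assumptions rather than the paper's exact configuration.

```python
# Minimal 2-D sketch of a multiresolution hash encoding (illustrative only).
import torch
import torch.nn as nn

PRIMES = (1, 2654435761)  # per-dimension hashing primes, as in spatial hashing

class HashEncoding2D(nn.Module):
    def __init__(self, n_levels=8, table_size=2**14, feats_per_level=2,
                 base_res=16, growth=1.5):
        super().__init__()
        self.table_size = table_size
        self.resolutions = [int(base_res * growth**l) for l in range(n_levels)]
        # One trainable hash table of feature vectors per resolution level.
        self.tables = nn.Parameter(
            torch.empty(n_levels, table_size, feats_per_level).uniform_(-1e-4, 1e-4))

    def _hash(self, ix, iy):
        # XOR of coordinates multiplied by large primes, modulo the table size.
        return ((ix * PRIMES[0]) ^ (iy * PRIMES[1])) % self.table_size

    def forward(self, x):                       # x: (B, 2) coordinates in [0, 1]
        feats = []
        for l, res in enumerate(self.resolutions):
            pos = x * res
            p0 = pos.floor().long()             # lower-left grid corner
            w = pos - p0.float()                # interpolation weights in [0, 1)
            level_feat = 0
            for dx in (0, 1):                   # bilinear interpolation over 4 corners
                for dy in (0, 1):
                    idx = self._hash(p0[:, 0] + dx, p0[:, 1] + dy)
                    wgt = ((w[:, 0] if dx else 1 - w[:, 0]) *
                           (w[:, 1] if dy else 1 - w[:, 1])).unsqueeze(-1)
                    level_feat = level_feat + wgt * self.tables[l, idx]
            feats.append(level_feat)
        return torch.cat(feats, dim=-1)         # (B, n_levels * feats_per_level)

# The concatenated features feed a small MLP; the hash tables and the MLP are
# optimized jointly with stochastic gradient descent.
encoder = HashEncoding2D()
mlp = nn.Sequential(nn.Linear(8 * 2, 64), nn.ReLU(), nn.Linear(64, 3))
y = mlp(encoder(torch.rand(4, 2)))              # (4, 3)
```

Collisions are not resolved explicitly in this sketch: features that share a hash slot simply receive averaged gradients from all points mapping to that slot.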
MonoSDF: Exploring Monocular Geometric Cues for Neural Implicit Surface Reconstruction
TLDR
It is demonstrated that depth and normal cues, predicted by general-purpose monocular estimators, significantly improve reconstruction quality and optimization time, and that these geometric monocular priors help both small-scale single-object and large-scale multi-object scenes, independent of the choice of representation.
Variable Bitrate Neural Fields
TLDR
A dictionary method for compressing feature grids is presented, reducing their memory consumption by up to 100× and permitting a multiresolution representation that can be useful for out-of-core streaming.
RTMV: A Ray-Traced Multi-View Synthetic Dataset for Novel View Synthesis
TLDR
Sparse Voxel Light Field (SVLF), an efficient voxel-based light field approach for novel view synthesis, achieves performance comparable to NeRF on synthetic data while being an order of magnitude faster to train and two orders of magnitude faster to render.
VolTeMorph: Realtime, Controllable and Generalisable Animation of Volumetric Representations
Fig. 1. We propose a method to deform static multi-view volumetric models, such as NeRF, in real-time using blendshape or physics-driven animation. This allows us to create dynamic scenes from static…
AdaNeRF: Adaptive Sampling for Real-time Rendering of Neural Radiance Fields
TLDR
A novel dual-network architecture is proposed that takes an orthogonal direction by learning how best to reduce the number of required sample points; it outperforms concurrent compact neural representations in quality and frame rate and performs on par with highly efficient hybrid representations.
Improved Direct Voxel Grid Optimization for Radiance Fields Reconstruction
TLDR
The DVGO framework (called DVGOv2), which is based on PyTorch and uses the simplest dense grid representation, is improved and extended to support forward-facing and unbounded inward-facing capturing.
Implicit Object Mapping With Noisy Data
TLDR
This paper uses the outputs of an object-based SLAM system to bound objects in the scene with coarse primitives and, in concert with instance masks, identify obstructions in the training images, showing that object-based NeRFs are robust to pose variations but sensitive to the quality of the instance masks.
PeRFception: Perception using Radiance Fields
TLDR
This work creates the first large-scale implicit representation datasets for perception tasks, called the PeRFception dataset, which consists of two parts that incorporate both object-centric and scene-centric scans for classification and segmentation.
PIXEL: Physics-Informed Cell Representations for Fast and Accurate PDE Solvers
TLDR
This paper proposes a new kind of data-driven PDE solver, physics-informed cell representations (PIXEL), which elegantly combines classical numerical methods and learning-based approaches and achieves fast convergence and high accuracy.
Instant Neural Representation for Interactive Volume Rendering
TLDR
This paper demonstrates that by simultaneously leveraging modern GPU tensor cores, a native CUDA neural network framework, and online training, this method can achieve high-performance and high-fidelity interactive ray tracing using volumetric neural representations.
...

References

Showing 1-10 of 61 references
ACORN: Adaptive Coordinate Networks for Neural Representation
  • ACM Trans. Graph. (SIGGRAPH), 2021.
Real-time neural radiance caching for path tracing
TLDR
This work presents a real-time neural radiance caching method for path-traced global illumination, and employs self-training to provide low-noise training targets and simulate infinite-bounce transport by merely iterating few-bounce training updates.
Neural Sparse Voxel Fields
TLDR
This work introduces Neural Sparse Voxel Fields (NSVF), a new neural scene representation for fast and high-quality free-viewpoint rendering that is over 10 times faster than the state-of-the-art (namely, NeRF) at inference time while achieving higher quality results.
Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains
TLDR
An approach is suggested for selecting problem-specific Fourier features that greatly improve the performance of MLPs on low-dimensional regression tasks relevant to the computer vision and graphics communities.
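For context, the Fourier-feature input encoding this reference describes can be sketched in a few lines; the bandwidth sigma and feature count below are illustrative choices, not values from the paper.

```python
# Minimal sketch of a random Fourier feature encoding: map low-dimensional
# coordinates to sinusoids of random frequencies before feeding an MLP.
import math
import torch

def fourier_features(x, B):
    """x: (N, d) coordinates; B: (m, d) matrix of random frequencies."""
    proj = 2.0 * math.pi * (x @ B.t())                             # (N, m)
    return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)   # (N, 2m)

d, m, sigma = 2, 128, 10.0
B = torch.randn(m, d) * sigma   # Gaussian frequencies; sigma sets the bandwidth
z = fourier_features(torch.rand(4, d), B)   # (4, 256) encoding for a standard MLP
```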
Implicit Neural Representations with Periodic Activation Functions
TLDR
This work proposes to leverage periodic activation functions for implicit neural representations and demonstrates that these networks, dubbed sinusoidal representation networks or Sirens, are ideally suited for representing complex natural signals and their derivatives.
Optimized Spatial Hashing for Collision Detection of Deformable Objects
TLDR
The presented algorithm employs a hash function for compressing a potentially infinite regular spatial grid and is integrated into a physically-based environment that can be used in game engines and surgical simulators.
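The spatial hash referenced here (and reused in spirit by the multiresolution hash encoding above) XORs each integer grid coordinate multiplied by a large prime and reduces the result modulo the table size; a minimal sketch follows, with the cell size and table size as illustrative assumptions.

```python
# Spatial hash in the style of Teschner et al.: XOR of grid coordinates times
# large primes, modulo the hash-table size.
def spatial_hash(ix, iy, iz, table_size):
    return ((ix * 73856093) ^ (iy * 19349663) ^ (iz * 83492791)) % table_size

# Map a 3-D point to a hash-table bucket via its grid cell.
cell_size, table_size = 0.1, 2**20
p = (0.42, -1.3, 7.9)
cell = tuple(int(c // cell_size) for c in p)
bucket = spatial_hash(*cell, table_size)
```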
Plenoxels: Radiance Fields without Neural Networks
TLDR
This work introduces Plenoxels (plenoptic voxels), a system for photorealistic view synthesis that can be optimized from calibrated images via gradient methods and regularization without any neural components.
Direct Voxel Grid Optimization: Super-fast Convergence for Radiance Fields Reconstruction
TLDR
This work presents a super-fast convergence approach to reconstructing the per-scene radiance field from a set of images that capture the scene with known poses, and introduces post-activation interpolation on voxel density, which is capable of producing sharp surfaces at lower grid resolution.
PlenOctrees for Real-time Rendering of Neural Radiance Fields
TLDR
It is shown that it is possible to train NeRFs to predict a spherical harmonic representation of radiance, removing the viewing direction as an input to the neural network, and PlenOctrees can be directly optimized to further minimize the reconstruction loss, which leads to equal or better quality compared to competing methods.
Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields
TLDR
By efficiently rendering anti-aliased conical frustums instead of rays, mip-NeRF reduces objectionable aliasing artifacts and significantly improves NeRF’s ability to represent fine details, while also being 7% faster than NeRF and half the size.
...