Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes

@article{Takikawa2021NeuralGL,
  title={Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes},
  author={Towaki Takikawa and Joey Litalien and K. Yin and Karsten Kreis and Charles T. Loop and Derek Nowrouzezahrai and Alec Jacobson and Morgan McGuire and Sanja Fidler},
  journal={2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2021},
  pages={11353-11362}
}
Neural signed distance functions (SDFs) are emerging as an effective representation for 3D shapes. State-of-the-art methods typically encode the SDF with a large, fixed-size neural network to approximate complex shapes with implicit surfaces. Rendering with these large networks is, however, computationally expensive since it requires many forward passes through the network for every pixel, making these representations impractical for real-time graphics. We introduce an efficient neural… 
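As a rough sketch of why per-pixel rendering of a neural SDF is expensive: sphere tracing queries the SDF once per marching step, and with a neural SDF each query is a full network forward pass. The example below uses an analytic sphere SDF as a stand-in for the network; all names and parameters here are illustrative, not taken from the paper.

```python
import numpy as np

def sphere_sdf(p, center=np.zeros(3), radius=1.0):
    """Analytic SDF of a sphere: negative inside, positive outside.
    A neural SDF would replace this with a network forward pass."""
    return np.linalg.norm(p - center) - radius

def sphere_trace(origin, direction, sdf, max_steps=64, eps=1e-4, max_dist=100.0):
    """March a ray by the SDF value at each step: the SDF guarantees the
    surface is at least that far away, so the step never overshoots."""
    direction = direction / np.linalg.norm(direction)
    t = 0.0
    for _ in range(max_steps):
        d = sdf(origin + t * direction)
        if d < eps:        # close enough to the surface: hit
            return t
        t += d             # safe step size given by the distance field
        if t > max_dist:   # ray left the scene bounds
            break
    return None            # miss

# Ray from z = -3 toward the origin hits the unit sphere at t = 2.
t_hit = sphere_trace(np.array([0.0, 0.0, -3.0]),
                     np.array([0.0, 0.0, 1.0]), sphere_sdf)
```

Every iteration of the loop above is one SDF evaluation; with a large MLP in place of `sphere_sdf`, tens of forward passes per pixel is what makes naive real-time rendering impractical.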

HDSDF: Hybrid Directional and Signed Distance Functions for Fast Inverse Rendering

This paper proposes a novel hybrid 3D object representation based on a signed distance function (SDF) that is augmented with a directional distance function (DDF) so that it can predict distances to the object surface from any point on a sphere enclosing the object.


Volume Rendering of Neural Implicit Surfaces

This work defines the volume density function as Laplace's cumulative distribution function (CDF) applied to a signed distance function (SDF) representation, which provides a useful inductive bias to the geometry learned in the neural volume rendering process and facilitates a bound on the opacity approximation error, leading to an accurate sampling of the viewing ray.

Representing 3D Shapes with Probabilistic Directed Distance Fields

This work aims to address both shortcomings with a novel shape representation that allows fast differentiable rendering within an implicit architecture, and applies its method to fitting single shapes, unpaired 3D-aware generative image modelling, and single-image 3D reconstruction tasks.

Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis

We introduce DMTET, a deep 3D conditional generative model that can synthesize high-resolution 3D shapes using simple user guides such as coarse voxels. It marries the merits of implicit and explicit…

VoxGRAF: Fast 3D-Aware Image Synthesis with Sparse Voxel Grids

The results demonstrate that monolithic MLPs can indeed be replaced by 3D convolutions when combining sparse voxel grids with progressive growing, free space pruning and appropriate regularization.

Multi-View Reconstruction using Signed Ray Distance Functions (SRDF)

A new computational approach is investigated that builds on a novel shape representation that is volumetric, as in recent differentiable rendering approaches, but parameterized with depth maps to better materialize the shape surface.

Vox-Surf: Voxel-based Implicit Surface Representation

Vox-Surf is a voxel-based implicit surface representation that can learn delicate surface details and accurate color with less memory and faster rendering speed than other methods, and can be more practical in scene editing and AR applications.

Interactive Editing of Voxel-Based Signed Distance Fields

This paper presents an approach for interactive editing of signed distance functions, derived from RGB-D data in the form of regular voxel grids, that enables the manual refinement and enhancement of reconstructed 3D geometry.

Fast Neural Representations for Direct Volume Rendering

A novel design of scene representation networks is proposed that uses GPU tensor cores to integrate the reconstruction seamlessly into on-chip raytracing kernels, and the quality and performance of this network are compared against alternative network- and non-network-based compression schemes.

MIP-plicits: Level of Detail Factorization of Neural Implicits Sphere Tracing

We introduce MIP-plicits, a novel approach for rendering 3D and 4D Neural Implicits that divides the problem into macro and meso components. We rely on the iterative nature of the sphere tracing…
...

References

Showing 1-10 of 58 references

SDFDiff: Differentiable Rendering of Signed Distance Fields for 3D Shape Optimization

It is demonstrated that the proposed SDFDiff, a novel approach for image-based shape optimization using differentiable rendering of 3D shapes represented by signed distance functions, can be integrated with deep learning models, which opens up options for learning approaches on 3D objects without 3D supervision.

DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation

This work introduces DeepSDF, a learned continuous Signed Distance Function (SDF) representation of a class of shapes that enables high quality shape representation, interpolation and completion from partial and noisy 3D input data.

Learning Deformable Tetrahedral Meshes for 3D Reconstruction

The Deformable Tetrahedral Meshes (DefTet) approach is introduced, which is the first to showcase high-quality 3D tetrahedral meshes results using only a single image as input, and can produce high-fidelity reconstructions with a significantly smaller grid size than alternative volumetric approaches.

DIST: Rendering Deep Implicit Signed Distance Function With Differentiable Sphere Tracing

This work proposes a differentiable sphere tracing algorithm that can effectively reconstruct accurate 3D shapes from various inputs, such as sparse depth and multi-view images, through inverse optimization and shows excellent generalization capability and robustness against various noises.

Neural Sparse Voxel Fields

This work introduces Neural Sparse Voxel Fields (NSVF), a new neural scene representation for fast and high-quality free-viewpoint rendering that is over 10 times faster than the state-of-the-art (namely, NeRF) at inference time while achieving higher quality results.

Local Deep Implicit Functions for 3D Shape

This work introduces Local Deep Implicit Functions (LDIF), a 3D shape representation that decomposes space into a structured set of learned implicit functions and provides higher surface reconstruction accuracy than the state-of-the-art (OccNet) while requiring fewer than 1% of the network parameters.

Deep Local Shapes: Learning Local SDF Priors for Detailed 3D Reconstruction

This work introduces Deep Local Shapes (DeepLS), a deep shape representation that enables encoding and reconstruction of high-quality 3D shapes without prohibitive memory requirements, and demonstrates the effectiveness and generalization power of this representation.

Differentiable Volumetric Rendering: Learning Implicit 3D Representations Without 3D Supervision

This work proposes a differentiable rendering formulation for implicit shape and texture representations, showing that depth gradients can be derived analytically using the concept of implicit differentiation, and finds that this method can be used for multi-view 3D reconstruction, directly resulting in watertight meshes.
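The implicit-differentiation step can be sketched as follows (notation assumed here, not quoted from the paper): with a ray p(d) = r_0 + d w, an occupancy network f_theta, and a surface depth d-hat defined by the level-set condition, differentiating that condition with respect to the network parameters yields an analytic depth gradient without backpropagating through the ray marching.

```latex
% Surface depth \hat{d} defined implicitly by the level-set condition:
%   f_\theta(r_0 + \hat{d}\, w) = \tau .
% Differentiating both sides with respect to \theta:
\frac{\partial f_\theta(\hat{p})}{\partial \theta}
  + \left(\nabla_{p} f_\theta(\hat{p}) \cdot w\right)
    \frac{\partial \hat{d}}{\partial \theta} = 0
\quad\Longrightarrow\quad
\frac{\partial \hat{d}}{\partial \theta}
  = -\left(\nabla_{p} f_\theta(\hat{p}) \cdot w\right)^{-1}
    \frac{\partial f_\theta(\hat{p})}{\partial \theta}
```

The inverse directional derivative along the viewing direction w is what makes the depth gradient cheap: only one extra gradient of the network at the surface point is needed.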

Real-time compression and streaming of 4D performances

We introduce a real-time compression architecture for 4D performance capture that is two orders of magnitude faster than current state-of-the-art techniques, yet achieves comparable visual quality and…

Progressive meshes

The progressive mesh (PM) representation is introduced, a new scheme for storing and transmitting arbitrary triangle meshes that addresses several practical problems in graphics: smooth geomorphing of level-of-detail approximations, progressive transmission, mesh compression, and selective refinement.
...