Mixture of volumetric primitives for efficient neural rendering

@article{Lombardi2021MixtureOV,
  title={Mixture of volumetric primitives for efficient neural rendering},
  author={Stephen Lombardi and Tomas Simon and Gabriel Schwartz and Michael Zollhoefer and Yaser Sheikh and Jason M. Saragih},
  journal={ACM Transactions on Graphics (TOG)},
  year={2021},
  volume={40},
  pages={1--13}
}
Real-time rendering and animation of humans is a core function in games, movies, and telepresence applications. Existing methods have a number of drawbacks we aim to address with our work. Triangle meshes have difficulty modeling thin structures like hair, volumetric representations like Neural Volumes are too low-resolution given a reasonable memory budget, and high-resolution implicit representations like Neural Radiance Fields are too slow for use in real-time applications. We present… 
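The abstract is truncated above, but its argument is that a mixture of many small volumetric primitives combines the completeness of volumes with the efficiency of sparse geometry. A minimal sketch of that rendering pattern follows, built only from what the abstract states: every name here (Primitive, composite_along_ray, the RGBA-grid payload, the step counts) is hypothetical and merely illustrates how ray marching over a set of posed primitives can skip empty space; it is not the paper's actual method.

```python
import numpy as np

# Illustrative sketch (hypothetical API): a mixture of small RGBA voxel
# primitives is ray-marched with front-to-back alpha compositing.

class Primitive:
    def __init__(self, center, half_extent, rgba_grid):
        self.center = np.asarray(center)   # (3,) world-space box center
        self.half_extent = half_extent     # scalar half-size of the box
        self.rgba_grid = rgba_grid         # (N, N, N, 4) payload: RGB + density

    def sample(self, p):
        """Nearest-neighbor payload lookup at world point p (returns RGBA)."""
        local = (p - self.center) / (2 * self.half_extent) + 0.5  # -> [0,1)^3
        if np.any(local < 0) or np.any(local >= 1):
            return np.zeros(4)             # point is outside this primitive
        i, j, k = (local * self.rgba_grid.shape[0]).astype(int)
        return self.rgba_grid[i, j, k]

def composite_along_ray(origin, direction, primitives, t_max=4.0, steps=128):
    """March a ray and accumulate color front to back; empty space is cheap
    because points outside every primitive contribute zero density."""
    color, transmittance = np.zeros(3), 1.0
    dt = t_max / steps
    for s in range(steps):
        p = origin + (s + 0.5) * dt * direction
        rgba = sum((prim.sample(p) for prim in primitives), np.zeros(4))
        alpha = 1.0 - np.exp(-rgba[3] * dt)
        color += transmittance * alpha * rgba[:3]
        transmittance *= 1.0 - alpha
        if transmittance < 1e-3:           # early ray termination
            break
    return color
```

Since points outside every primitive contribute zero density, compute concentrates where the scene has content, which is the efficiency argument made against dense grids (Neural Volumes) and large implicit MLPs (NeRF).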

Advances in Neural Rendering

TLDR
This state-of-the-art report on advances in neural rendering focuses on methods that combine classical rendering principles with learned 3D scene representations, often now referred to as neural scene representations.

Artemis: Articulated Neural Pets with Appearance and Motion synthesis

TLDR
The core of ARTEMIS is a neural-generated (NGI) animal engine, which adopts an efficient octree-based representation for animal animation and fur rendering, and introduces an effective optimization scheme to reconstruct the skeletal motion of real animals captured by a multi-view RGB and Vicon camera array.

Supplementary Material for: Fast and Explicit Neural View Synthesis

TLDR
This work tackles the problem of training a high-fidelity as well as scene/category-agnostic representation by increasing scene representation capacity for complex regions of space while minimizing the computational resources spent on empty regions.

HVH: Learning a Hybrid Neural Volumetric Representation for Dynamic Hair Performance Capture

TLDR
This paper uses a novel volumetric hair representation composed of thousands of primitives to obtain a reliable control signal, and presents a novel way of tracking hair at the strand level to achieve state-of-the-art results.

UV Volumes for Real-time Rendering of Editable Free-view Human Performance

TLDR
This model can render 960 × 540 images at 30 FPS on average with photo-realism comparable to state-of-the-art methods, and the use of NTS enables interesting applications, e.g., retexturing.

Fourier PlenOctrees for Dynamic Radiance Field Rendering in Real-time

TLDR
This paper presents a novel Fourier PlenOctree (FPO) technique to tackle efficient neural modeling and real-time rendering of dynamic scenes captured under the free-view video (FVV) setting, and shows that the resulting FPO enables compact memory overhead to handle dynamic objects and supports efficient fine-tuning.

Neural Deformable Voxel Grid for Fast Optimization of Dynamic View Synthesis

TLDR
Experimental results show that the proposed fast deformable radiance field method achieves performance comparable to D-NeRF using only 20 minutes for training, which is more than 70× faster than D-NeRF, clearly demonstrating the efficiency of the proposed method.

Free-Viewpoint RGB-D Human Performance Capture and Rendering

TLDR
This work introduces a novel view synthesis framework that generates realistic renders from unseen views of any human captured from a single, sparse RGB-D sensor, similar to a low-cost depth camera, and without actor-specific models.

DONeRF: Towards Real-Time Rendering of Compact Neural Radiance Fields using Depth Oracle Networks

TLDR
DONeRF, a compact dual network design with a depth oracle network as its first step and a locally sampled shading network for ray accumulation, is presented, which reduces the inference costs by up to 48× compared to NeRF when conditioning on available ground truth depth information.
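The two-stage design described here is concrete enough to sketch. What follows is a hypothetical, simplified rendering of the idea only (class names, layer sizes, the fixed sampling window, and the render helper are all assumptions, not DONeRF's actual architecture): a small oracle predicts one depth per ray, and the shading network is evaluated on a few samples placed around that depth rather than densely along the ray.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of a depth-oracle pipeline: a first network proposes
# where the surface is, and a shading network is evaluated on only a few
# samples placed around that depth instead of dense samples per ray.

class DepthOracle(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),   # positive depth guess
        )

    def forward(self, rays):                       # rays: (B, 6) origin+dir
        return self.net(rays)                      # (B, 1) predicted depth

class ShadingNet(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),                  # RGB + density per sample
        )

    def forward(self, points):                     # points: (B, K, 3)
        return self.net(points)

def render(rays, oracle, shader, k=8, window=0.1):
    origins, dirs = rays[:, :3], rays[:, 3:]
    depth = oracle(rays)                                   # (B, 1)
    # Place k samples in a narrow window around the predicted depth.
    offsets = torch.linspace(-window, window, k)           # (K,)
    t = depth + offsets                                    # (B, K)
    points = origins[:, None, :] + t[..., None] * dirs[:, None, :]
    rgba = shader(points)                                  # (B, K, 4)
    # Standard front-to-back compositing over the few local samples.
    dt = 2 * window / k
    alpha = 1 - torch.exp(-torch.relu(rgba[..., 3]) * dt)  # (B, K)
    trans = torch.cumprod(torch.cat(
        [torch.ones_like(alpha[:, :1]), 1 - alpha[:, :-1]], dim=1), dim=1)
    weights = trans * alpha                                # (B, K)
    return (weights[..., None] * torch.sigmoid(rgba[..., :3])).sum(dim=1)
```

On rays where the oracle is accurate, this replaces hundreds of shading-network evaluations with a handful, which is where the reported inference-cost reduction comes from.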

AdaNeRF: Adaptive Sampling for Real-time Rendering of Neural Radiance Fields

TLDR
A novel dual-network architecture that takes an orthogonal direction by learning how to best reduce the number of required sample points is proposed that outperforms concurrent compact neural representations in terms of quality and frame rate and performs on par with highly efficient hybrid representations.
...

References


State of the Art on Neural Rendering

TLDR
This state-of-the-art report summarizes the recent trends and applications of neural rendering and focuses on approaches that combine classic computer graphics techniques with deep generative models to obtain controllable and photorealistic outputs.

NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis

TLDR
This work describes how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrates results that outperform prior work on neural rendering and view synthesis.

Learning to Predict 3D Objects with an Interpolation-based Differentiable Renderer

TLDR
A differentiable rendering framework which allows gradients to be analytically computed for all pixels in an image, and which views foreground rasterization as a weighted interpolation of local properties and background rasterization as a distance-based aggregation of global geometry.

Deep appearance models for face rendering

TLDR
A data-driven rendering pipeline that learns a joint representation of facial geometry and appearance from a multiview capture setup and a novel unsupervised technique for mapping images to facial states results in a system that is naturally suited to real-time interactive settings such as Virtual Reality (VR).

Neural 3D Mesh Renderer

TLDR
This work proposes an approximate gradient for rasterization that enables the integration of rendering into neural networks and performs gradient-based 3D mesh editing operations, such as 2D-to-3D style transfer and 3D DeepDream, with 2D supervision for the first time.

Soft Rasterizer: A Differentiable Renderer for Image-Based 3D Reasoning

TLDR
This work proposes a truly differentiable rendering framework that is able to directly render colorized mesh using differentiable functions and back-propagate efficient supervision signals to mesh vertices and their attributes from various forms of image representations, including silhouette, shading and color images.
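The TLDR names the key trick: replacing hard rasterization with differentiable functions. As a hedged illustration only (not Soft Rasterizer's actual formulation, whose probability maps and aggregation over triangles are more involved), the core of the idea can be reduced to a sigmoid over signed pixel-to-triangle distance:

```python
import torch

# Hypothetical one-function sketch of "soft" rasterization: the hard
# inside/outside coverage test becomes a sigmoid of signed distance,
# so per-pixel coverage is differentiable w.r.t. mesh vertices.

def soft_coverage(signed_dist: torch.Tensor, sigma: float = 1e-4) -> torch.Tensor:
    # signed_dist: per-pixel signed distance to a triangle's boundary,
    # positive inside, negative outside (as computed by the rasterizer).
    return torch.sigmoid(signed_dist / sigma)  # -> coverage in (0, 1)
```

Because the sigmoid is smooth, supervision signals on the rendered image can flow back through pixel coverage to the geometry, which is exactly what the hard test prevents.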

Modular primitives for high-performance differentiable rendering

TLDR
A modular differentiable renderer design that yields performance superior to previous methods by leveraging existing, highly optimized hardware graphics pipelines, and allows custom, high-performance graphics pipelines to be built directly within automatic differentiation frameworks such as PyTorch or TensorFlow.

Neural Point-Based Graphics

We present a new point-based approach for modeling the appearance of real scenes. The approach uses a raw point cloud as the geometric representation of a scene, and augments each point with a learnable neural descriptor.

Neural Radiance Flow for 4D View Synthesis and Video Processing

TLDR
This work uses a neural implicit representation that learns to capture the 3D occupancy, radiance, and dynamics of the scene, and demonstrates that the learned representation can serve as an implicit scene prior, enabling video processing tasks such as image super-resolution and de-noising without any additional supervision.

TRANSPR: Transparency Ray-Accumulating Neural 3D Scene Point Renderer

TLDR
A neural point-based graphics method that can model semi-transparent scene parts is presented: it uses point clouds as proxy geometry, augments each point with a neural descriptor, and introduces a learnable transparency value per point.
...