State of the Art on Neural Rendering

@article{Tewari2020StateOT,
  title={State of the Art on Neural Rendering},
  author={Ayush Tewari and Ohad Fried and Justus Thies and Vincent Sitzmann and Stephen Lombardi and Kalyan Sunkavalli and Ricardo Martin-Brualla and Tomas Simon and Jason M. Saragih and Matthias Nie{\ss}ner and Rohit Pandey and S. Fanello and Gordon Wetzstein and Jun-Yan Zhu and Christian Theobalt and Maneesh Agrawala and Eli Shechtman and Dan B. Goldman and Michael Zollhofer},
  journal={Computer Graphics Forum},
  year={2020},
  volume={39}
}
Efficient rendering of photo-realistic virtual worlds is a long-standing effort of computer graphics. Modern graphics techniques have succeeded in synthesizing photo-realistic images from hand-crafted scene representations. However, the automatic generation of shape, materials, lighting, and other aspects of scenes remains a challenging problem that, if solved, would make photo-realistic computer graphics more widely accessible. Concurrently, progress in computer vision and machine learning…
Advances in Neural Rendering
TLDR
This state‐of‐the‐art report on advances in neural rendering focuses on methods that combine classical rendering principles with learned 3D scene representations, often now referred to as neural scene representations.
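As a rough illustration of how such methods couple a learned scene representation with a classical rendering principle, the sketch below implements the standard emission-absorption quadrature along a ray in NumPy; `radiance_field` is a hypothetical stand-in for any learned density/colour function, not the method of a specific paper.

```python
import numpy as np

def radiance_field(points):
    """Hypothetical learned field: returns (density, rgb) for each 3D point.
    Stands in for a trained MLP or voxel grid; the values here are arbitrary."""
    density = np.exp(-np.linalg.norm(points, axis=-1))   # (N,)
    rgb = 0.5 * (np.tanh(points) + 1.0)                  # (N, 3)
    return density, rgb

def render_ray(origin, direction, near=0.0, far=4.0, n_samples=64):
    """Classical emission-absorption quadrature along a single ray."""
    t = np.linspace(near, far, n_samples)
    pts = origin + t[:, None] * direction
    sigma, rgb = radiance_field(pts)
    delta = np.diff(t, append=far)                        # segment lengths
    alpha = 1.0 - np.exp(-sigma * delta)                  # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha                               # compositing weights
    return (weights[:, None] * rgb).sum(axis=0)           # final pixel colour

print(render_ray(np.array([0.0, 0.0, -2.0]), np.array([0.0, 0.0, 1.0])))
```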
Mixture of volumetric primitives for efficient neural rendering
TLDR
Mixture of Volumetric Primitives (MVP), a representation for rendering dynamic 3D content that combines the completeness of volumetric representations with the efficiency of primitive-based rendering, is presented.
Light Field Networks: Neural Scene Representations with Single-Evaluation Rendering
TLDR
A novel neural scene representation, Light Field Networks (LFNs), is proposed; it represents both geometry and appearance of the underlying 3D scene as a 360-degree, four-dimensional light field parameterized via a neural network, which results in dramatic reductions in time and memory complexity and enables real-time rendering.
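A minimal sketch of the single-evaluation idea, assuming Plücker ray coordinates and an untrained toy MLP in place of a learned light field network; every name here is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
# Untrained toy MLP standing in for a trained light field network.
W1, b1 = rng.normal(size=(6, 128)), np.zeros(128)
W2, b2 = rng.normal(size=(128, 3)), np.zeros(3)

def plucker(origin, direction):
    """Parameterize a ray as 6D Plücker coordinates (direction, moment)."""
    d = direction / np.linalg.norm(direction)
    return np.concatenate([d, np.cross(origin, d)])

def lfn_color(origin, direction):
    """One network evaluation per ray; no sampling along the ray is needed."""
    x = plucker(origin, direction)
    h = np.maximum(W1.T @ x + b1, 0.0)               # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(W2.T @ h + b2)))    # sigmoid RGB

print(lfn_color(np.array([0.0, 0.0, -2.0]), np.array([0.1, 0.0, 1.0])))
```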
Neural Lumigraph Rendering
TLDR
This work adopts high-capacity neural scene representations with periodic activations for jointly optimizing an implicit surface and a radiance field of a scene supervised exclusively with posed 2D images, enabling real-time rendering rates, while achieving unprecedented image quality compared to other surface methods.
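The periodic activations referred to here are sine layers of the SIREN family; a minimal sketch of one such layer follows, using the commonly cited omega_0 = 30 and an indicative initialization rather than the paper's exact recipe.

```python
import numpy as np

class SineLayer:
    """Fully connected layer with a periodic sine activation (SIREN-style)."""
    def __init__(self, in_dim, out_dim, omega_0=30.0, seed=0):
        rng = np.random.default_rng(seed)
        bound = np.sqrt(6.0 / in_dim) / omega_0   # indicative SIREN-like init
        self.W = rng.uniform(-bound, bound, size=(in_dim, out_dim))
        self.b = np.zeros(out_dim)
        self.omega_0 = omega_0

    def __call__(self, x):
        return np.sin(self.omega_0 * (x @ self.W + self.b))

layer = SineLayer(3, 64)
print(layer(np.array([[0.1, -0.2, 0.3]])).shape)   # (1, 64)
```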
Neural Precomputed Radiance Transfer
TLDR
Four different neural network architectures are introduced, and it is shown that those based on knowledge of light transport models and PRT‐inspired principles improve the quality of global illumination predictions at equal training time and network size, without the need for high‐end ray‐tracing hardware.
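For context, classical precomputed radiance transfer reconstructs outgoing radiance as a dot product between precomputed transfer coefficients and lighting coefficients in a shared basis; the sketch below shows only that structure, with random placeholder data rather than any of the four architectures.

```python
import numpy as np

def prt_shade(transfer, lighting):
    """Classical PRT reconstruction: outgoing radiance at each point is the dot
    product of its precomputed transfer coefficients with the current lighting
    coefficients, both expressed in the same (e.g. spherical harmonic) basis."""
    return transfer @ lighting

n_points, n_coeffs = 5, 9                           # 9 coefficients = order-3 SH
rng = np.random.default_rng(0)
transfer = rng.normal(size=(n_points, n_coeffs))    # placeholder precomputation
lighting = rng.normal(size=n_coeffs)                # placeholder environment light
print(prt_shade(transfer, lighting))                # one radiance value per point
```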
Neural Sparse Voxel Fields
TLDR
This work introduces Neural Sparse Voxel Fields (NSVF), a new neural scene representation for fast and high-quality free-viewpoint rendering that is over 10 times faster than the state-of-the-art (namely, NeRF) at inference time while achieving higher quality results.
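A rough sketch of the empty-space skipping that underlies this kind of speed-up, assuming a hypothetical occupancy set and learned field; the compositing at the end is deliberately crude and only illustrative.

```python
import numpy as np

def render_sparse(origin, direction, occupied, field,
                  voxel_size=0.25, near=0.0, far=4.0, n_samples=128):
    """Sample a ray, but query the learned field only at samples whose voxel is
    occupied, skipping empty space entirely (the source of the speed-up)."""
    t = np.linspace(near, far, n_samples)
    pts = origin + t[:, None] * direction
    keep = np.array([tuple(np.floor(p / voxel_size).astype(int)) in occupied
                     for p in pts])
    colors = np.zeros((n_samples, 3))
    if keep.any():
        colors[keep] = field(pts[keep])       # expensive network calls only here
    return colors.mean(axis=0)                # crude composite, for illustration

# Hypothetical occupancy set and field, purely for illustration.
occupied = {(0, 0, 0), (0, 0, 1)}
field = lambda p: 0.5 * (np.tanh(p) + 1.0)
print(render_sparse(np.zeros(3), np.array([0.0, 0.0, 1.0]), occupied, field))
```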
Neural Adaptive SCEne Tracing
TLDR
This work presents Neural Adaptive Scene Tracing (NAScenT), the first neural rendering method based on directly training a hybrid explicit-implicit neural representation, which outperforms existing neural rendering approaches in terms of both quality and training time.
Deep Neural Models for Illumination Estimation and Relighting: A Survey
TLDR
This contribution aims to bring together in a coherent manner current advances in this conjunction, presented in three categories: scene illumination estimation, relighting with reflectance‐aware scene‐specific representations and finally relighting as image‐to‐image transformations.
Point‐Based Neural Rendering with Per‐View Optimization
TLDR
A general approach is introduced that is initialized with MVS, but allows further optimization of scene properties in the space of input views, including depth and reprojected features, resulting in improved novel‐view synthesis.
Real-Time Neural Character Rendering with Pose-Guided Multiplane Images
TLDR
This work proposes pose-guided multiplane image (MPI) synthesis, which can render an animatable character in real scenes with photorealistic quality, and demonstrates advantageous novel-view synthesis quality over state-of-the-art approaches for characters with challenging motions.
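Rendering a multiplane image reduces to back-to-front alpha compositing of its RGBA planes; a minimal sketch (with a randomly filled toy MPI rather than the pose-guided synthesis network) could look like this.

```python
import numpy as np

def composite_mpi(planes):
    """Back-to-front 'over' compositing of fronto-parallel RGBA planes.
    planes: (D, H, W, 4) array, ordered from farthest to nearest."""
    out = np.zeros(planes.shape[1:3] + (3,))
    for rgba in planes:                            # far -> near
        rgb, a = rgba[..., :3], rgba[..., 3:4]
        out = rgb * a + out * (1.0 - a)            # standard alpha 'over'
    return out

# Toy MPI with random content, purely for illustration.
mpi = np.random.default_rng(0).uniform(size=(8, 4, 4, 4))
print(composite_mpi(mpi).shape)                    # (4, 4, 3)
```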
...
...

References

Deferred neural rendering
TLDR
This work proposes Neural Textures, learned feature maps that are trained as part of the scene capture process and can be utilized to coherently re-render or manipulate existing video content in both static and dynamic environments at real-time rates.
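A rough sketch of the deferred step, assuming a random feature map, a nearest-neighbour lookup, and a placeholder per-pixel decoder in place of the trained rendering network.

```python
import numpy as np

def sample_neural_texture(neural_texture, uv):
    """Nearest-neighbour lookup of a learned feature map at rasterized UVs.
    neural_texture: (T, T, C) features; uv: (H, W, 2) coordinates in [0, 1)."""
    T = neural_texture.shape[0]
    ij = np.clip((uv * T).astype(int), 0, T - 1)
    return neural_texture[ij[..., 1], ij[..., 0]]           # (H, W, C)

rng = np.random.default_rng(0)
tex = rng.normal(size=(256, 256, 16))     # learned feature map (random here)
uv = rng.uniform(size=(4, 4, 2))          # UVs from a rasterized proxy mesh
# Placeholder per-pixel decoder standing in for the trained rendering network.
decoder = lambda feats: 1.0 / (1.0 + np.exp(-feats[..., :3]))
print(decoder(sample_neural_texture(tex, uv)).shape)        # (4, 4, 3)
```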
Image-guided Neural Object Rendering
TLDR
This work presents a novel method for photo-realistic re-rendering of reconstructed objects that combines the benefits of image-based rendering and GAN-based image synthesis, and proposes EffectsNet, a deep neural network that predicts view-dependent effects.
LookinGood: Enhancing Performance Capture with Real-time Neural Re-Rendering
TLDR
A novel approach is taken to augment such real-time performance capture systems with a deep architecture that takes a rendering from an arbitrary viewpoint and jointly performs completion, super-resolution, and denoising of the imagery in real time.
Single-image SVBRDF capture with a rendering-aware deep network
TLDR
This work tackles lightweight appearance capture by training a deep neural network to automatically extract and make sense of visual cues from a single image, and designs a network that combines an encoder-decoder convolutional track for local feature extraction with a fully-connected track for global feature extraction and propagation.
Inverse Rendering for Computer Graphics
Creating realistic images has been a major focus in the study of computer graphics for much of its history. This effort has led to mathematical models and algorithms that can compute predictive, or…
Neural volumes
TLDR
This work presents a learning-based approach to representing dynamic objects inspired by the integral projection model used in tomographic imaging, and learns a latent representation of a dynamic scene that enables us to produce novel content sequences not seen during training.
Deep Shading: Convolutional Neural Networks for Screen Space Shading
TLDR
The diagonal problem of synthesizing appearance from given per-pixel attributes using a CNN is considered, and the resulting Deep Shading renders screen-space effects at competitive quality and speed while being learned from example images rather than programmed by human experts.
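To make the screen-space setup concrete, the sketch below maps a toy G-buffer of per-pixel attributes to radiance with a single per-pixel linear layer; the actual Deep Shading network is a multi-layer CNN with spatial context, so this only illustrates the input/output layout.

```python
import numpy as np

def screen_space_shade(gbuffer, weights, bias):
    """Per-pixel (1x1) linear map from G-buffer attributes to radiance.
    Deep Shading itself uses a multi-layer CNN with spatial context; this only
    illustrates the screen-space input/output layout."""
    return np.maximum(gbuffer @ weights + bias, 0.0)

H, W, C_in, C_out = 4, 4, 9, 3            # e.g. normal, depth, albedo -> RGB
rng = np.random.default_rng(0)
gbuffer = rng.uniform(size=(H, W, C_in))  # deferred-shading attribute buffer
weights, bias = rng.normal(size=(C_in, C_out)), np.zeros(C_out)
print(screen_space_shade(gbuffer, weights, bias).shape)   # (4, 4, 3)
```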
Scribbler: Controlling Deep Image Synthesis with Sketch and Color
TLDR
A deep adversarial image synthesis architecture conditioned on sketched boundaries and sparse color strokes to generate realistic cars, bedrooms, or faces is proposed, demonstrating a sketch-based image synthesis system that allows users to scribble over the sketch to indicate the preferred color for objects.
Deep appearance models for face rendering
TLDR
A data-driven rendering pipeline that learns a joint representation of facial geometry and appearance from a multiview capture setup, combined with a novel unsupervised technique for mapping images to facial states, results in a system that is naturally suited to real-time interactive settings such as Virtual Reality (VR).
Neural Rendering and Reenactment of Human Actor Videos
TLDR
The proposed method for generating video-realistic animations of real humans under user control relies on a video sequence in conjunction with a (medium-quality) controllable 3D template model of the person to generate a synthetically rendered version of the video.
...
...