Advances in Neural Rendering

@article{Tewari2021AdvancesIN,
  title={Advances in Neural Rendering},
  author={Ayush Tewari and Ohad Fried and Justus Thies and Vincent Sitzmann and Stephen Lombardi and Zexiang Xu and Tomas Simon and Matthias Nie{\ss}ner and Edgar Tretschk and Lingjie Liu and Ben Mildenhall and Pratul Srinivasan and Rohit Pandey and Sergio Orts-Escolano and Sean Fanello and Michelle Guo and Gordon Wetzstein and Jun-Yan Zhu and Christian Theobalt and Maneesh Agrawala and Dan B. Goldman and Michael Zollh{\"o}fer},
  journal={Computer Graphics Forum},
  year={2021},
  volume={41}
}
Synthesizing photo-realistic images and videos is at the heart of computer graphics and has been the focus of decades of research. Traditionally, synthetic images of a scene are generated using rendering algorithms such as rasterization or ray tracing, which take specifically defined representations of geometry and material properties as input. Collectively, these inputs define the actual scene and what is rendered, and are referred to as the scene representation (where a scene consists of one…
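Many of the citing papers and references below build on neural radiance fields (NeRFs), which replace such hand-authored scene representations with a learned volumetric one. As a brief refresher (a standard formulation, not text from the truncated abstract above), a NeRF renders the color of a camera ray r(t) = o + t d by volume rendering a learned density σ and view-dependent color c:

    C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,\mathbf{c}(\mathbf{r}(t),\mathbf{d})\,dt,
    \qquad T(t) = \exp\!\left(-\int_{t_n}^{t}\sigma(\mathbf{r}(s))\,ds\right)

In practice the integral is approximated by quadrature over discrete samples along the ray, with σ and c predicted by a neural network queried at each sample.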

DRaCoN - Differentiable Rasterization Conditioned Neural Radiance Fields for Articulated Avatars

DRaCoN is presented: a framework for learning full-body volumetric avatars that exploits the advantages of both 2D and 3D neural rendering techniques.

3D Neural Field Generation using Triplane Diffusion

This work presents an efficient diffusion-based model for 3D-aware generation of neural fields and demonstrates state-of-the-art results for 3D generation on several object classes from ShapeNet.

RUST: Latent Neural Scene Representations from Unposed Imagery

This work proposes RUST (Really Unposed Scene representation Transformer), a pose-free approach to novel view synthesis trained on RGB images alone that achieves similar quality as methods which have access to perfect camera pose, thereby unlocking the potential for large-scale training of amortized neural scene representations.

Efficient 3D Reconstruction, Streaming and Visualization of Static and Dynamic Scene Parts for Multi-client Live-telepresence in Large-scale Environments

This paper presents a system built upon a novel hybrid volumetric scene representation that combines a voxel-based representation for static content with a point-cloud-based representation for dynamic scene parts, achieving VR-based live-telepresence at interactive rates.

ScanNeRF: a Scalable Benchmark for Neural Radiance Fields

This paper proposes the first real benchmark designed for evaluating Neural Radiance Fields (NeRFs) and, more generally, Neural Rendering (NR) frameworks, and evaluates three cutting-edge NeRF variants on it to highlight their strengths and weaknesses.

Peekaboo: Text to Image Diffusion Models are Zero-Shot Segmentors

This work explores how off-the-shelf diffusion models, trained with no exposure to localization information, are capable of grounding various semantic phrases, and presents a zero-shot, open-vocabulary, unsupervised semantic grounding technique that leverages diffusion-based generative models without any segmentation-specific re-training.

SPIn-NeRF: Multiview Segmentation and Perceptual Inpainting with Neural Radiance Fields

A novel 3D inpainting method that addresses the removal of unwanted objects from a 3D scene, such that the replaced region is visually plausible and consistent with its context, and demonstrates the superiority of the approach on multiview segmentation compared to NeRF-based methods and 2D segmentation approaches.

Next3D: Generative Neural Texture Rasterization for 3D-Aware Head Avatars

A novel 3D GAN framework for unsupervised learning of generative, high-quality, and 3D-consistent facial avatars from unstructured 2D images is proposed, along with a 3D representation called Generative Texture-Rasterized Tri-planes that achieves both deformation accuracy and topological flexibility.

CADSim: Robust and Scalable in-the-wild 3D Reconstruction for Controllable Sensor Simulation

CADSim is presented, which combines part-aware object-class priors via a small set of CAD models with differentiable rendering to automatically reconstruct vehicle geometry, including articulated wheels, with high-quality appearance, and recovers more accurate shapes from sparse data compared to existing approaches.

DINER: Disorder-Invariant Implicit Neural Representation

It is proposed that such a frequency-related problem can be largely solved by re-arranging the coordinates of the input signal, for which the disorder-invariant implicit neural representation (DINER) is introduced by augmenting a traditional INR backbone with a hash table.
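A minimal PyTorch sketch of the idea (my illustration with invented names, not the authors' code; DINER's table stores one trainable entry per discrete input coordinate, e.g. per pixel when fitting an image):

    import torch
    import torch.nn as nn

    class DINERSketch(nn.Module):
        """A learnable hash table in front of a plain MLP INR.

        The MLP sees the learned table entries instead of the raw
        coordinates, so it can behave as if the input signal had been
        re-arranged into a lower-frequency one.
        """
        def __init__(self, num_coords, dim=2, hidden=64):
            super().__init__()
            # One trainable entry per discrete input coordinate.
            self.table = nn.Parameter(torch.rand(num_coords, dim) * 2 - 1)
            self.mlp = nn.Sequential(
                nn.Linear(dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 3))  # e.g. an RGB output

        def forward(self, idx):  # idx: (N,) integer coordinate indices
            return self.mlp(self.table[idx])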
...

References

Showing 1-10 of 276 references

Instant neural graphics primitives with a multiresolution hash encoding

A versatile new input encoding is introduced that permits the use of a smaller network without sacrificing quality, significantly reducing the number of floating-point and memory-access operations, and enabling training of high-quality neural graphics primitives in a matter of seconds and rendering in tens of milliseconds at a resolution of 1920×1080.
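The core of the encoding is a per-level spatial hash of integer voxel corners; the following NumPy sketch (my illustration, omitting the multiresolution lookup, trilinear interpolation, and the MLP) uses the primes given in the paper:

    import numpy as np

    # Spatial-hash primes from the Instant-NGP paper (pi_1 = 1).
    PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

    def hash_grid_index(grid_coords, table_size):
        """Map integer 3D grid coordinates to hash-table slots.

        grid_coords: (N, 3) voxel-corner indices at one resolution level.
        table_size:  number of entries T in this level's hash table.
        """
        x = grid_coords.astype(np.uint64)
        h = (x[:, 0] * PRIMES[0]) ^ (x[:, 1] * PRIMES[1]) ^ (x[:, 2] * PRIMES[2])
        return h % np.uint64(table_size)

At each resolution level, the eight corners of the voxel containing a query point are hashed this way, their trainable feature vectors are trilinearly interpolated, and the per-level results are concatenated to form the network input.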

HumanNeRF: Free-viewpoint Rendering of Moving People from Monocular Video

A free-viewpoint rendering method that works on a given monocular video of a human performing complex body motions (e.g., a video from YouTube), optimizing for a volumetric representation of the person in a canonical T-pose, in concert with a motion field that maps the estimated canonical representation to every frame of the video via backward warps.
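In symbols (my paraphrase, not the authors' notation): the field at an observation-space point x in frame t is the canonical field evaluated at the backward-warped point,

    F_t(\mathbf{x}) = F_c\big(T_t(\mathbf{x})\big),

where the warp T_t composes a skeleton-driven rigid transformation with a learned non-rigid refinement.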

StyleSDF: High-Resolution 3D-Consistent Image and Geometry Generation

This work introduces a high-resolution, 3D-consistent image and shape generation technique called StyleSDF, which merges an SDF-based 3D representation with a style-based 2D generator and defines detailed 3D surfaces, leading to consistent volume rendering.

Light Field Neural Rendering

This work introduces a two-stage transformer-based model that first aggregates features along epipolar lines, then aggregates features across reference views to produce the color of a target ray in a four-dimensional light field.
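A hedged PyTorch sketch of that two-stage aggregation (my shapes and names, not the authors' code; it assumes per-ray features have already been extracted from the reference images):

    import torch
    import torch.nn as nn

    class TwoStageRayAggregator(nn.Module):
        """Aggregate along epipolar lines, then across reference views.

        Input per target ray: features of shape (V, P, C), i.e. V
        reference views x P points sampled along each view's epipolar
        line of the target ray, with C channels.
        """
        def __init__(self, c=64, heads=4):
            super().__init__()
            self.epipolar = nn.MultiheadAttention(c, heads, batch_first=True)
            self.views = nn.MultiheadAttention(c, heads, batch_first=True)
            self.query = nn.Parameter(torch.randn(1, 1, c))
            self.to_rgb = nn.Linear(c, 3)

        def forward(self, feats):                          # (V, P, C)
            q = self.query.expand(feats.shape[0], -1, -1)  # (V, 1, C)
            per_view, _ = self.epipolar(q, feats, feats)   # (V, 1, C)
            per_view = per_view.transpose(0, 1)            # (1, V, C)
            ray, _ = self.views(self.query, per_view, per_view)
            return self.to_rgb(ray.squeeze())              # (3,) RGB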

GRAM: Generative Radiance Manifolds for 3D-Aware Image Generation

This work proposes a novel approach that regulates point sampling and radiance field learning on 2D manifolds, embodied as a set of learned implicit surfaces in the 3D volume, and produces high-quality images with realistic fine details and strong visual 3D consistency.

Efficient Geometry-aware 3D Generative Adversarial Networks

This work introduces an expressive hybrid explicit-implicit network architecture that not only synthesizes high-resolution, multi-view-consistent images in real time but also produces high-quality 3D geometry, by decoupling feature generation and neural rendering.
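The hybrid representation referred to here is the tri-plane: three axis-aligned 2D feature maps produced by a convolutional 2D generator, queried per 3D point and decoded by a small MLP. A minimal PyTorch sketch of the lookup (my names and shapes, not the authors' code; points are assumed normalized to [-1, 1]^3):

    import torch
    import torch.nn.functional as F

    def sample_triplane(planes, xyz):
        """Query a tri-plane representation at 3D points.

        planes: (3, C, H, W) feature maps for the XY, XZ, and YZ planes.
        xyz:    (N, 3) query points in [-1, 1]^3.
        Returns (N, C) features, summed over the three planes.
        """
        # Project each 3D point onto the three axis-aligned planes.
        coords = torch.stack([xyz[:, [0, 1]],    # XY plane
                              xyz[:, [0, 2]],    # XZ plane
                              xyz[:, [1, 2]]])   # YZ plane -> (3, N, 2)
        grid = coords.unsqueeze(2)               # (3, N, 1, 2) for grid_sample
        feats = F.grid_sample(planes, grid, mode='bilinear',
                              align_corners=True)  # (3, C, N, 1)
        return feats.squeeze(-1).sum(dim=0).t()  # (N, C)

Because almost all capacity lives in the 2D generator, only this lightweight sampling and a small decoder MLP operate in 3D, which is what makes real-time, multi-view-consistent synthesis feasible.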

HeadNeRF: A Realtime NeRF-based Parametric Head Model

In this paper, we propose HeadNeRF, a novel NeRF-based parametric head model that integrates the neural radiance field into the parametric representation of the human head. It can render high-fidelity head images in real time.

Neural Actor: Neural Free-view Synthesis of Human Actors with Pose Control

Experiments demonstrate that the proposed Neural Actor achieves better quality than the state of the art on playback as well as novel pose synthesis, and can even generalize well to new poses that differ starkly from the training poses.

Neural Radiance Fields for Outdoor Scene Relighting

Comparisons against the state of the art show that NeRF-OSR enables controllable lighting and viewpoint editing at higher quality and with realistic self-shadowing reproduction.

CLIP-NeRF: Text-and-Image Driven Manipulation of Neural Radiance Fields

This work introduces a disentangled conditional NeRF architecture that allows individual control over both shape and appearance and proposes an inverse optimization method that accurately projects an input image to the latent codes for manipulation to enable editing on real images.
...