Corpus ID: 214612009

Rig-space Neural Rendering

@article{Borer2020RigspaceNR,
  title={Rig-space Neural Rendering},
  author={Dominik Borer and Yuhang Lu and Laura Wuelfroth and Jakob Buhmann and Martine Guay},
  journal={ArXiv},
  year={2020},
  volume={abs/2003.09820}
}
Movie productions use high-resolution 3D characters with complex proprietary rigs to create the highest-quality images possible for large displays. Unfortunately, these 3D assets are typically not compatible with the real-time graphics engines used for games, mixed reality, and real-time pre-visualization. Consequently, the 3D characters need to be re-modeled and re-rigged for these new applications, requiring weeks of work and artistic approval. Our solution to this problem is to learn a compact…
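
The abstract is cut off here, but the title and setup suggest the core idea: rather than porting the asset, train a neural network that maps the rig's animation parameters directly to rendered images of the character, so the expensive proprietary rig only has to run offline to produce training data. Below is a minimal sketch of such a rig-conditioned decoder in PyTorch; the class name RigRenderer, the 40-parameter rig, and all layer sizes are illustrative assumptions, not the paper's architecture.

    import torch
    import torch.nn as nn

    class RigRenderer(nn.Module):
        """Hypothetical decoder: rig parameters -> RGB image (all sizes illustrative)."""
        def __init__(self, n_rig_params: int):
            super().__init__()
            self.fc = nn.Linear(n_rig_params, 256 * 4 * 4)
            self.deconv = nn.Sequential(
                nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),  # 4x4 -> 8x8
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8x8 -> 16x16
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16x16 -> 32x32
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32x32 -> 64x64
            )

        def forward(self, rig_params):
            x = self.fc(rig_params).view(-1, 256, 4, 4)
            return self.deconv(x)

    # Training pairs (rig parameters, offline film-quality render) would come
    # from the existing production pipeline; at runtime only the network runs.
    model = RigRenderer(n_rig_params=40)
    image = model(torch.randn(1, 40))  # -> (1, 3, 64, 64)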


References

Showing 1-10 of 24 references

Deferred Neural Rendering: Image Synthesis using Neural Textures

This work proposes Neural Textures: learned feature maps that are trained as part of the scene capture process and can be used to coherently re-render or manipulate existing video content in both static and dynamic environments at real-time rates.
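
As a rough illustration of the neural-texture idea (a simplification for intuition, not the authors' implementation): a learnable feature map is sampled with the UV coordinates obtained by rasterizing coarse proxy geometry, and a network converts the sampled features to color. All shapes and the deliberately tiny one-layer renderer below are assumptions; the published system trains a deeper network jointly with the texture.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Learned feature map ("neural texture"): feature channels instead of RGB texels.
    neural_texture = nn.Parameter(torch.randn(1, 16, 256, 256))

    # Screen-space UV coordinates from rasterizing the coarse proxy geometry,
    # remapped to [-1, 1] as grid_sample expects: shape (1, H, W, 2).
    uv = torch.rand(1, 128, 128, 2) * 2 - 1

    sampled = F.grid_sample(neural_texture, uv, align_corners=True)  # (1, 16, 128, 128)

    # A small renderer network maps the sampled features to RGB; texture and
    # renderer are optimized jointly against captured images.
    renderer = nn.Conv2d(16, 3, kernel_size=3, padding=1)
    rgb = torch.sigmoid(renderer(sampled))  # (1, 3, 128, 128)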

LookinGood: Enhancing Performance Capture with Real-time Neural Re-Rendering

This work augments real-time performance capture systems with a deep architecture that takes a rendering from an arbitrary viewpoint and jointly performs completion, super-resolution, and denoising of the imagery in real time.

Neural Rendering and Reenactment of Human Actor Videos

A method for generating video-realistic animations of real humans under user control, which relies on a video sequence together with a (medium-quality) controllable 3D template model of the person to produce a synthetically rendered version of the video.

Light field rendering

This paper describes a sampled representation for light fields that allows for efficient creation and display of both inward- and outward-looking views, along with a compression system able to compress the generated light fields by more than a factor of 100:1 with very little loss of fidelity.
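
For intuition, rendering from such a two-plane light field reduces to a quadrilinear (bilinear-times-bilinear) lookup across the camera plane (u, v) and the image plane (s, t). A small NumPy sketch with illustrative shapes and clamp-to-edge handling, not the paper's implementation:

    import numpy as np

    def sample_light_field(lf, u, v, s, t):
        """Quadrilinear lookup in a 4D light field lf of shape (U, V, S, T, 3).
        (u, v) index the camera plane, (s, t) the image plane; coordinates
        are continuous."""
        U, V, S, T, _ = lf.shape
        u0, v0, s0, t0 = int(u), int(v), int(s), int(t)
        color = np.zeros(3)
        for du in (0, 1):
            for dv in (0, 1):
                for ds in (0, 1):
                    for dt in (0, 1):
                        # Weight of each corner falls off linearly with distance.
                        w = ((1 - abs(u - (u0 + du))) * (1 - abs(v - (v0 + dv))) *
                             (1 - abs(s - (s0 + ds))) * (1 - abs(t - (t0 + dt))))
                        color += w * lf[min(u0 + du, U - 1), min(v0 + dv, V - 1),
                                        min(s0 + ds, S - 1), min(t0 + dt, T - 1)]
        return color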

Unstructured lumigraph rendering

We describe an image-based rendering approach that generalizes many current image-based rendering algorithms, including light field rendering and view-dependent texture mapping. In particular, it allows for lumigraph-style rendering from a set of input cameras in arbitrary configurations.

Free-viewpoint video of human actors

A system that uses multi-view synchronized video footage of an actor's performance to estimate motion parameters and to interactively re-render the actor's appearance from any viewpoint, yielding a highly naturalistic impression of the actor.

Optimal Representation of Multiple View Video

Spatio-temporal optimisation of the multi-view resampling is introduced to extract a coherent multi-layer texture-map video, resulting in a compact representation with minimal loss of information that allows high-quality free-viewpoint rendering.

Neural scene representation and rendering

The Generative Query Network (GQN) is introduced: a framework in which machines learn to represent scenes using only their own sensors, demonstrating representation learning without human labels or domain knowledge.
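
In spirit, GQN aggregates per-observation codes into a permutation-invariant scene representation and decodes it together with a query camera pose. The toy sketch below collapses the published convolutional representation network and recurrent latent-variable generator into simple feed-forward layers; every shape and layer size here is an assumption for illustration only.

    import torch
    import torch.nn as nn

    class TinyGQN(nn.Module):
        """Toy GQN-style model: encode (image, pose) observations, sum the
        codes, and decode an image for an unseen query pose."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
                nn.Flatten(), nn.Linear(64 * 16 * 16, 128))
            self.pose_fc = nn.Linear(7, 128)  # e.g. camera position + orientation
            self.decoder = nn.Sequential(nn.Linear(256, 64 * 64 * 3), nn.Sigmoid())

        def forward(self, images, poses, query_pose):
            # images: (N, 3, 64, 64), poses: (N, 7). Summing makes the scene
            # code invariant to the order of the observations.
            scene = (self.encoder(images) + self.pose_fc(poses)).sum(dim=0)
            query = self.pose_fc(query_pose)                 # (7,) -> (128,)
            joint = torch.cat([scene, query], dim=-1)
            return self.decoder(joint).view(3, 64, 64)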

Video-based characters: creating new human performances from a multi-view video database

A warping-based texture synthesis approach that uses the most similar retrieved database frames to synthesize spatio-temporally coherent target video frames, creating realistic videos of people even when the target motions and camera views differ from the database content.

Deep Video-Based Performance Cloning

We present a new video-based performance cloning technique. After training a deep generative network using a reference video capturing the appearance and dynamics of a target actor, we are able to generate videos where this actor reenacts other performances.