• Corpus ID: 214612009

Rig-space Neural Rendering

Dominik Borer, Yuhang Lu, Laura Wuelfroth, Jakob Buhmann, and Martine Guay
Movie productions use high-resolution 3D characters with complex proprietary rigs to create the highest-quality images possible for large displays. Unfortunately, these 3D assets are typically not compatible with the real-time graphics engines used for games, mixed reality, and real-time pre-visualization. Consequently, the characters must be re-modeled and re-rigged for these new applications, requiring weeks of work and artistic approval. Our solution to this problem is to learn a compact…

Deferred Neural Rendering: Image Synthesis using Neural Textures

This work proposes neural textures: learned feature maps, trained as part of the scene capture process, that can be used to coherently re-render or manipulate existing video content in both static and dynamic environments at real-time rates.
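As a rough sketch of the idea (not the paper's implementation): a neural texture is a feature map sampled at a surface's UV coordinates, and the sampled feature vector is then decoded into a colour by a learned network. The bilinear sampler and the single linear "decoder" below are simplified stand-ins for the trained feature map and deferred rendering network.

```python
import numpy as np

def sample_texture(tex, uv):
    # Bilinearly sample a learned feature map `tex` (H, W, C) at
    # continuous UV coordinates in [0, 1] x [0, 1].
    H, W, _ = tex.shape
    x = uv[0] * (W - 1)
    y = uv[1] * (H - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
    fx, fy = x - x0, y - y0
    top = tex[y0, x0] * (1 - fx) + tex[y0, x1] * fx
    bot = tex[y1, x0] * (1 - fx) + tex[y1, x1] * fx
    return top * (1 - fy) + bot * fy

def decode(features, weights, bias):
    # Stand-in "renderer": one linear layer mapping the sampled feature
    # vector to RGB. The actual method uses a deep deferred rendering
    # network in place of this layer.
    return weights @ features + bias
```

In the real system both the texture values and the decoder weights are optimized jointly against captured imagery.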

LookinGood: Enhancing Performance Capture with Real-time Neural Re-Rendering

This work augments real-time performance capture systems with a deep architecture that takes a rendering from an arbitrary viewpoint and jointly performs completion, super-resolution, and denoising of the imagery in real time.

Neural Rendering and Reenactment of Human Actor Videos

The proposed method for generating video-realistic animations of real humans under user control relies on a video sequence in conjunction with a (medium-quality) controllable 3D template model of the person to generate a synthetically rendered version of the video.

4D video textures for interactive character appearance

4D Video Textures introduce a novel representation for rendering video-realistic interactive character animation from a database of 4D actor performances captured in a multi-camera studio, achieving over 90% reduction in size and halving the rendering cost.

Light field rendering

This paper describes a sampled representation for light fields that allows for both efficient creation and display of inward- and outward-looking views, and describes a compression system able to compress the generated light fields by more than a factor of 100:1 with very little loss of fidelity.
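The core idea can be sketched in a few lines: the light field is a 4D table of radiance indexed by a camera position (u, v) on one plane and a pixel position (s, t) on another, so rendering a new view is just a lookup. This toy version uses nearest-neighbour selection of the camera sample, where the paper interpolates between samples.

```python
import numpy as np

def render_view(light_field, u, v):
    # light_field: (U, V, S, T) array of sampled radiance under the
    # two-plane parameterization. (u, v) in [0, 1] selects a camera
    # position on the aperture plane; the returned (S, T) slice is the
    # image seen from that position. Nearest-neighbour for simplicity;
    # the paper interpolates between the sampled views.
    U, V = light_field.shape[:2]
    ui = int(round(u * (U - 1)))
    vi = int(round(v * (V - 1)))
    return light_field[ui, vi]
```

Because rendering reduces to resampling a precomputed table, display cost is independent of scene complexity, which is what makes the approach attractive for real-time viewing.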

Unstructured lumigraph rendering

We describe an image-based rendering approach that generalizes many current image-based rendering algorithms, including light field rendering and view-dependent texture mapping.

Perceptual Losses for Real-Time Style Transfer and Super-Resolution

This work considers image transformation problems and proposes perceptual loss functions for training feed-forward networks on such tasks, showing results on image style transfer, where a feed-forward network is trained to solve, in real time, the optimization problem proposed by Gatys et al.
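A minimal sketch of a perceptual loss: instead of comparing images pixel by pixel, compare them in a feature space and take the mean squared error there. The paper uses activations of a pretrained VGG network; here a fixed bank of convolution kernels stands in for that feature extractor, purely for illustration.

```python
import numpy as np

def conv2d(img, kernels):
    # Valid-mode 2D convolution: img (H, W), kernels (K, kh, kw)
    # -> feature maps (K, H - kh + 1, W - kw + 1).
    kh, kw = kernels.shape[1:]
    H, W = img.shape
    out = np.empty((kernels.shape[0], H - kh + 1, W - kw + 1))
    for k, ker in enumerate(kernels):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(img[i:i + kh, j:j + kw] * ker)
    return out

def perceptual_loss(pred, target, kernels):
    # Mean squared error between feature activations rather than pixels.
    # `kernels` plays the role of the frozen pretrained feature extractor.
    fp = conv2d(pred, kernels)
    ft = conv2d(target, kernels)
    return float(np.mean((fp - ft) ** 2))
```

The key point is that the feature extractor stays frozen: only the feed-forward transformation network is trained against this loss, which is why a single forward pass suffices at test time.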

Free-viewpoint video of human actors

A system that uses multi-view synchronized video footage of an actor's performance to estimate motion parameters and to interactively re-render the actor's appearance from any viewpoint, yielding a highly naturalistic impression of the actor.

Optimal Representation of Multiple View Video

Spatio-temporal optimisation of the multi-view resampling is introduced to extract a coherent multi-layer texture map video, resulting in a compact representation with minimal loss of information that allows high-quality free-viewpoint rendering.

Video-based characters: creating new human performances from a multi-view video database

A warping-based texture synthesis approach that uses the most similar retrieved database frames to synthesize spatio-temporally coherent target video frames, creating realistic videos of people even when the target motions and camera views differ from the database content.