Face2Face: Real-Time Face Capture and Reenactment of RGB Videos
- Justus Thies, M. Zollhöfer, M. Stamminger, C. Theobalt, M. Nießner
- Computer Science, Computer Vision and Pattern Recognition
- 27 June 2016
A novel approach for real-time facial reenactment of a monocular target video sequence (e.g., a YouTube video) that addresses the under-constrained problem of facial identity recovery from monocular video by non-rigid model-based bundling and re-renders the manipulated output video in a photo-realistic fashion.
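A minimal sketch of the non-rigid model-based bundling idea described above: identity parameters are shared across several keyframes while per-frame expressions vary, so all frames jointly constrain the otherwise under-constrained identity estimate. The basis sizes, random stand-in data, toy orthographic projection, and the use of `scipy.optimize.least_squares` are illustrative assumptions, not Face2Face's actual 3D model or solver.

```python
# Non-rigid model-based bundling (sketch): shared identity, per-frame expressions.
import numpy as np
from scipy.optimize import least_squares

N_LMK, N_ID, N_EXP, K = 68, 80, 64, 5           # landmarks, basis sizes, keyframes
rng = np.random.default_rng(0)
mean = rng.normal(size=(N_LMK, 3))
B_id = rng.normal(size=(N_LMK * 3, N_ID)) * 0.01
B_exp = rng.normal(size=(N_LMK * 3, N_EXP)) * 0.01
observed = [rng.normal(size=(N_LMK, 2)) for _ in range(K)]  # detected 2D landmarks

def residuals(x):
    alpha = x[:N_ID]                             # identity shared by all keyframes
    res = []
    for k in range(K):
        delta = x[N_ID + k * N_EXP : N_ID + (k + 1) * N_EXP]  # per-frame expression
        verts = mean + (B_id @ alpha + B_exp @ delta).reshape(N_LMK, 3)
        proj = verts[:, :2]                      # toy orthographic projection
        res.append((proj - observed[k]).ravel()) # proximity to observed landmarks
    res.append(0.1 * alpha)                      # statistical regularizer on identity
    return np.concatenate(res)

x0 = np.zeros(N_ID + K * N_EXP)
fit = least_squares(residuals, x0)               # joint solve over all keyframes
alpha_est = fit.x[:N_ID]
```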
Real-time 3D reconstruction at scale using voxel hashing
- M. Nießner, M. Zollhöfer, S. Izadi, M. Stamminger
- Computer Science, ACM Transactions on Graphics
- 1 November 2013
An online system for large- and fine-scale volumetric reconstruction based on a memory- and speed-efficient data structure that compresses space and allows for real-time access and updates of implicit surface data, without the need for a regular or hierarchical grid data structure.
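A minimal sketch of the voxel-hashing idea: a hash map keyed by integer block coordinates stores small dense voxel blocks only where surface data exists, instead of a regular or hierarchical grid. The block size, truncation handling, and the single-sample fusion rule below are simplifying assumptions, not the paper's exact scheme.

```python
# Sparse voxel hashing (sketch): allocate 8x8x8 TSDF blocks on demand.
import numpy as np

BLOCK = 8            # voxels per block edge
VOXEL_SIZE = 0.01    # metres

blocks = {}          # (bx, by, bz) -> {"tsdf": array, "weight": array}

def world_to_block(p):
    v = np.floor(p / VOXEL_SIZE).astype(int)
    return tuple(v // BLOCK), tuple(v % BLOCK)

def integrate(point, sdf, weight=1.0):
    """Fuse one signed-distance sample, allocating its block only if needed."""
    key, local = world_to_block(point)
    blk = blocks.setdefault(key, {"tsdf": np.zeros((BLOCK,) * 3),
                                  "weight": np.zeros((BLOCK,) * 3)})
    w_old = blk["weight"][local]
    blk["tsdf"][local] = (blk["tsdf"][local] * w_old + sdf * weight) / (w_old + weight)
    blk["weight"][local] = w_old + weight

integrate(np.array([0.123, 0.456, 0.789]), sdf=0.004)
print(len(blocks), "block(s) allocated")
```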
BundleFusion: real-time globally consistent 3D reconstruction using on-the-fly surface re-integration
- Angela Dai, M. Nießner, M. Zollhöfer, S. Izadi, C. Theobalt
- Computer Science, ACM Transactions on Graphics
- 5 April 2016
This work systematically addresses the tracking-robustness and pose-drift issues of online reconstruction with a novel, real-time, end-to-end framework that outperforms state-of-the-art online systems with quality on par with offline methods, but with unprecedented speed and scan completeness.
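A minimal sketch of the on-the-fly surface re-integration idea: with weighted-average TSDF fusion, a frame's contribution to a voxel can be removed (de-integrated) and added back under a corrected camera pose whenever the global pose optimization revises it. The single-voxel update below is an assumption-level illustration, not BundleFusion's full pipeline.

```python
def fuse(tsdf, weight, sdf, w):
    """Standard weighted-average TSDF update for one voxel."""
    return (tsdf * weight + sdf * w) / (weight + w), weight + w

def de_fuse(tsdf, weight, sdf, w):
    """Inverse update: remove a previously fused sample from the voxel."""
    new_w = weight - w
    if new_w <= 0:
        return 0.0, 0.0
    return (tsdf * weight - sdf * w) / new_w, new_w

# A sample fused under an outdated pose can be removed and re-fused:
t, w = fuse(0.0, 0.0, sdf=0.02, w=1.0)      # integrate sample from the old pose
t, w = de_fuse(t, w, sdf=0.02, w=1.0)       # de-integrate after pose correction
t, w = fuse(t, w, sdf=0.015, w=1.0)         # re-integrate the corrected sample
```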
Deferred Neural Rendering: Image Synthesis using Neural Textures
- Justus Thies, M. Zollhöfer, M. Nießner
- Computer Science
- 28 April 2019
This work proposes Neural Textures, learned feature maps that are trained as part of the scene capture process and can be used to coherently re-render or manipulate existing video content in both static and dynamic environments at real-time rates.
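A minimal sketch of the deferred neural rendering idea: a rasterizer provides per-pixel UV coordinates of a coarse mesh, a learned neural texture (a feature map with more than three channels) is sampled at those UVs, and a small network decodes the sampled features to RGB. The sizes, nearest-neighbour lookup, and one-layer decoder are simplifying assumptions.

```python
# Neural texture sampling + decoding (sketch).
import numpy as np

H, W, TEX, C = 64, 64, 256, 16                    # image size, texture size, channels
rng = np.random.default_rng(0)
neural_texture = rng.normal(size=(TEX, TEX, C))   # learned jointly with the decoder
decoder_w = rng.normal(size=(C, 3)) * 0.1         # stand-in for the rendering network

def render(uv):
    """uv: (H, W, 2) per-pixel texture coordinates in [0, 1] from the rasterizer."""
    ix = np.clip((uv[..., 0] * (TEX - 1)).astype(int), 0, TEX - 1)
    iy = np.clip((uv[..., 1] * (TEX - 1)).astype(int), 0, TEX - 1)
    feats = neural_texture[iy, ix]                # deferred sampling of learned features
    return np.tanh(feats @ decoder_w)             # (H, W, 3) decoded image

uv = rng.uniform(size=(H, W, 2))
image = render(uv)
```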
MoFA: Model-Based Deep Convolutional Face Autoencoder for Unsupervised Monocular Reconstruction
- Ayush Tewari, M. Zollhöfer, C. Theobalt
- Computer Science, IEEE International Conference on Computer Vision (ICCV)
- 30 March 2017
A novel model-based deep convolutional autoencoder that addresses the highly challenging problem of reconstructing a 3D human face from a single in-the-wild color image; it can be trained end-to-end in an unsupervised manner, which makes training on very large real-world data feasible.
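A minimal sketch of the model-based autoencoder training signal: a convolutional encoder (stubbed here) regresses face-model parameters, a differentiable model-based decoder turns them back into an image, and a photometric loss against the input provides the unsupervised supervision. The dimensions and the "renderer" are toy stand-ins, not the actual face model and rasterizer.

```python
# Unsupervised photometric loss for a model-based autoencoder (sketch).
import numpy as np

rng = np.random.default_rng(0)
N_PARAMS, IMG = 257, (64, 64, 3)               # e.g. shape/expression/albedo/pose/light

def encoder(image):
    """Stand-in for the convolutional encoder that regresses model parameters."""
    return rng.normal(size=N_PARAMS) * 0.01

def model_based_decoder(params):
    """Stand-in for the differentiable face model + renderer producing an image."""
    return np.full(IMG, params.mean())

def photometric_loss(image):
    params = encoder(image)
    rendering = model_based_decoder(params)
    return np.mean((rendering - image) ** 2)   # self-supervision: no 3D labels needed

loss = photometric_loss(rng.uniform(size=IMG))
```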
VolumeDeform: Real-Time Volumetric Non-rigid Reconstruction
- M. Innmann, M. Zollhöfer, M. Nießner, C. Theobalt, M. Stamminger
- Computer Science, European Conference on Computer Vision
- 27 March 2016
This work presents a novel approach for the reconstruction of dynamic geometric shapes using a single hand-held consumer-grade RGB-D sensor at real-time rates, and casts finding the optimal deformation of space as a non-linear regularized variational optimization problem by enforcing local smoothness and proximity to the input constraints.
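A minimal sketch of the variational formulation mentioned above: the deformation of space is a field of displacements, and the energy combines a data term (deformed points should stay close to their observed correspondences) with a smoothness regularizer between neighbouring nodes. The 1-D grid, weights, and solver choice are illustrative assumptions, not the paper's volumetric parameterization.

```python
# Regularized deformation fitting (sketch): data term + local smoothness.
import numpy as np
from scipy.optimize import least_squares

N = 16
rest = np.linspace(0.0, 1.0, N)                  # node positions at rest
targets = rest + 0.05 * np.sin(4 * np.pi * rest) # observed correspondences (constraints)
lam = 2.0                                        # smoothness weight

def residuals(disp):
    data = (rest + disp) - targets               # proximity to the input constraints
    smooth = lam * np.diff(disp)                 # local smoothness of the deformation
    return np.concatenate([data, smooth])

disp = least_squares(residuals, np.zeros(N)).x   # regularized variational solve
deformed = rest + disp
```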
Deep video portraits
- Hyeongwoo Kim, Pablo Garrido, C. Theobalt
- Computer Science, ACM Transactions on Graphics
- 29 May 2018
This work presents the first approach that transfers the full 3D head position, head rotation, face expression, eye gaze, and eye blinking from a source actor to a portrait video of a target actor using only an input video.
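A minimal sketch of the transfer step: both videos are explained by per-frame face-model parameters; the source's rigid pose, expression, gaze, and blink parameters are copied onto the target sequence while the target's identity is kept, and a target-specific rendering-to-video network (stubbed here) synthesizes the output. The field names and data are illustrative assumptions.

```python
# Parameter transfer from source actor to target portrait video (sketch).
def transfer(source_params, target_params):
    out = dict(target_params)                    # keep target identity / appearance
    for key in ("head_pose", "expression", "eye_gaze", "eye_blink"):
        out[key] = source_params[key]            # drive with the source actor
    return out

def render_target_video(param_sequence):
    """Stand-in for the target-specific rendering-to-video translation network."""
    return [f"frame with pose={p['head_pose']}" for p in param_sequence]

src = [{"head_pose": 0.1 * i, "expression": "smile", "eye_gaze": 0.0, "eye_blink": 0}
       for i in range(3)]
tgt = [{"identity": "target", "head_pose": 0.0, "expression": "neutral",
        "eye_gaze": 0.0, "eye_blink": 0} for _ in range(3)]
output = render_target_video([transfer(s, t) for s, t in zip(src, tgt)])
```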
DeepVoxels: Learning Persistent 3D Feature Embeddings
- V. Sitzmann, Justus Thies, Felix Heide, M. Nießner, Gordon Wetzstein, M. Zollhöfer
- Computer Science, Computer Vision and Pattern Recognition
- 3 December 2018
This work proposes DeepVoxels, a learned representation that encodes the view-dependent appearance of a 3D scene without having to explicitly model its geometry, based on a Cartesian 3D grid of persistent embedded features that learn to make use of the underlying 3D scene structure.
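A minimal sketch of the persistent feature grid: appearance is stored as a Cartesian 3D grid of feature vectors; to render a view, 3D sample points are mapped into the grid, their embedded features are looked up, and a learned decoder (stubbed here) turns them into colour. The grid size, nearest-neighbour lookup, and decoder are simplifying assumptions, not the paper's architecture.

```python
# Persistent 3D feature grid lookup + decoding (sketch).
import numpy as np

G, C = 32, 8                                     # grid resolution, feature channels
rng = np.random.default_rng(0)
feature_grid = rng.normal(size=(G, G, G, C))     # persistent, learned during training

def sample_features(points_world):
    """points_world: (N, 3) in [0, 1]^3; nearest-voxel feature lookup."""
    idx = np.clip((points_world * (G - 1)).astype(int), 0, G - 1)
    return feature_grid[idx[:, 0], idx[:, 1], idx[:, 2]]

decoder_w = rng.normal(size=(C, 3)) * 0.1        # stand-in for the rendering network
colours = np.tanh(sample_features(rng.uniform(size=(100, 3))) @ decoder_w)
```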
StyleRig: Rigging StyleGAN for 3D Control Over Portrait Images
- Ayush Tewari, Mohamed A. Elgharib, C. Theobalt
- Computer Science, Computer Vision and Pattern Recognition
- 31 March 2020
This work presents the first method to provide face rig-like control over a pretrained and fixed StyleGAN via a 3D morphable face model (3DMM): a new rigging network, RigNet, is trained between the 3DMM's semantic parameters and StyleGAN's input.
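A minimal sketch of the rig-like control: a small rigging network takes a StyleGAN latent code together with target 3DMM parameters and outputs a modified latent code, while the pretrained StyleGAN generator stays fixed. The linear "RigNet", the dimensions, and the toy generator below are illustrative assumptions, not the paper's architecture.

```python
# Latent editing via a rigging network over a frozen generator (sketch).
import numpy as np

LATENT, PARAMS = 512, 64
rng = np.random.default_rng(0)
rignet_w = rng.normal(size=(LATENT + PARAMS, LATENT)) * 0.01  # learned; StyleGAN is frozen

def rignet(latent, target_3dmm_params):
    """Map (latent, semantic parameters) to an edited latent for the fixed generator."""
    return latent + np.concatenate([latent, target_3dmm_params]) @ rignet_w

def frozen_stylegan(latent):
    """Stand-in for the pretrained, fixed StyleGAN generator."""
    return np.tanh(latent[:3])                   # toy "image"

edited = frozen_stylegan(rignet(rng.normal(size=LATENT), rng.normal(size=PARAMS)))
```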
Self-Supervised Multi-level Face Model Learning for Monocular Reconstruction at Over 250 Hz
- Ayush Tewari, M. Zollhöfer, C. Theobalt
- Computer Science, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 7 December 2017
The first approach that jointly learns a regressor for face shape, expression, reflectance, and illumination on the basis of a concurrently learned parametric face model is presented; it compares favorably to the state of the art in terms of reconstruction quality, generalizes better to real-world faces, and runs at over 250 Hz.
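A minimal sketch of the concurrently learned face model: on top of a coarse parametric prior, corrective bases are themselves trainable, so the face model is refined jointly with the parameter regressor from unlabeled data. The dimensions and the random coarse basis are illustrative assumptions, not the paper's multi-level model.

```python
# Coarse parametric model + learned corrective layer (sketch).
import numpy as np

rng = np.random.default_rng(0)
N_VERTS, N_COARSE, N_CORR = 500, 80, 40
mean = rng.normal(size=(N_VERTS * 3,))
coarse_basis = rng.normal(size=(N_VERTS * 3, N_COARSE)) * 0.01   # fixed prior (e.g. a 3DMM)
corrective_basis = np.zeros((N_VERTS * 3, N_CORR))               # learned during training

def reconstruct(alpha_coarse, alpha_corr):
    """Face geometry = mean + coarse model + learned corrective layer."""
    return mean + coarse_basis @ alpha_coarse + corrective_basis @ alpha_corr

geometry = reconstruct(rng.normal(size=N_COARSE), rng.normal(size=N_CORR))
```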
...