Publications
SiCloPe: Silhouette-Based Clothed People
We introduce a new silhouette-based representation for modeling clothed human bodies using deep generative models. Our method can reconstruct a complete and textured 3D model of a person wearing clothing.
Facial performance sensing head-mounted display
Proposes a novel HMD that enables real-time 3D facial performance-driven animation suitable for social interactions in virtual worlds, along with a short calibration step that readjusts the Gaussian mixture distribution of the mapping before each use.
Attention-based Multi-Patch Aggregation for Image Aesthetic Assessment
A novel multi-patch aggregation method for image aesthetic assessment that uses an attention-based mechanism to adaptively adjust the weight of each patch during training, improving learning efficiency and outperforming existing methods by a large margin; a minimal illustrative sketch follows below.
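The entry above describes per-patch attention weights learned jointly with the aesthetic predictor. As a rough illustration only, here is a minimal PyTorch sketch of softmax-weighted patch aggregation; the module names, feature dimensions, and the shared-backbone assumption are ours, not the paper's released code.

```python
# Minimal sketch of attention-weighted multi-patch aggregation (PyTorch).
# Assumptions (not from the paper): each image is represented by N cropped
# patches, a shared CNN backbone embeds each patch, and a small attention
# head produces per-patch weights that are softmax-normalized before pooling.
import torch
import torch.nn as nn

class AttentionPatchAggregator(nn.Module):
    def __init__(self, feat_dim=512, hidden_dim=128):
        super().__init__()
        # Scores one patch embedding -> one unnormalized attention logit.
        self.attn = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )
        # Maps the pooled feature to a single aesthetic score.
        self.score = nn.Linear(feat_dim, 1)

    def forward(self, patch_feats):
        # patch_feats: (batch, num_patches, feat_dim) from a shared backbone.
        logits = self.attn(patch_feats)                 # (B, N, 1)
        weights = torch.softmax(logits, dim=1)          # (B, N, 1), sums to 1 over patches
        pooled = (weights * patch_feats).sum(dim=1)     # (B, feat_dim)
        return self.score(pooled).squeeze(-1), weights  # aesthetic score + patch weights

# Usage with dummy features for 5 patches per image:
feats = torch.randn(2, 5, 512)
score, attn_weights = AttentionPatchAggregator()(feats)
```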
Single-view hair modeling using a hairstyle database
Introduces a data-driven framework that digitizes complete and highly complex 3D hairstyles from a single-view photograph and compares it with state-of-the-art hair modeling algorithms.
Discrete element textures
A variety of phenomena can be characterized by repetitive small scale elements within a large scale domain. Examples include a stack of fresh produce, a plate of spaghetti, or a mosaic pattern.
LGM-Net: Learning to Generate Matching Networks for Few-Shot Learning
In this work, we propose a novel meta-learning approach for few-shot classification, which learns transferable prior knowledge across tasks and directly produces network parameters for similar unseen tasks.
3D hair synthesis using volumetric variational autoencoders
Proposes to represent the manifold of 3D hairstyles implicitly through the compact latent space of a volumetric variational autoencoder (VAE), which is significantly more robust and handles a much wider variation of hairstyles on challenging inputs than state-of-the-art data-driven hair modeling techniques; a minimal illustrative sketch follows below.
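The entry above relies on encoding 3D hairstyles into the compact latent space of a volumetric VAE. The following PyTorch sketch shows that idea in its most generic form; the grid resolution, channel counts, and latent size are illustrative assumptions and do not reproduce the paper's architecture or losses.

```python
# Minimal sketch of a volumetric VAE over a 3D volume (PyTorch).
# Assumptions: a 32^3 single-channel occupancy grid, a 64-D latent code,
# and plain 3D (de)convolutions; chosen only to illustrate the encode/
# sample/decode structure, not the paper's actual network.
import torch
import torch.nn as nn

class VolumetricVAE(nn.Module):
    def __init__(self, in_ch=1, latent_dim=64):
        super().__init__()
        self.enc = nn.Sequential(                      # 32^3 -> 4^3
            nn.Conv3d(in_ch, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.fc_mu = nn.Linear(64 * 4 * 4 * 4, latent_dim)
        self.fc_logvar = nn.Linear(64 * 4 * 4 * 4, latent_dim)
        self.fc_dec = nn.Linear(latent_dim, 64 * 4 * 4 * 4)
        self.dec = nn.Sequential(                      # 4^3 -> 32^3
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, in_ch, 4, stride=2, padding=1),
        )

    def forward(self, vol):                            # vol: (B, 1, 32, 32, 32)
        h = self.enc(vol).flatten(1)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        recon = self.dec(self.fc_dec(z).view(-1, 64, 4, 4, 4))
        return recon, mu, logvar

# Usage with a dummy batch of two volumes:
vae = VolumetricVAE()
recon, mu, logvar = vae(torch.randn(2, 1, 32, 32, 32))
```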
Unconstrained realtime facial performance capture
Introduces a realtime facial tracking system designed for performance capture in unconstrained settings using a consumer-level RGB-D sensor, and demonstrates robust, high-fidelity facial tracking on a wide range of subjects with highly incomplete and largely occluded data.
Dynamic element textures
The method, called dynamic element textures, produces controllable repetitions through a combination of constrained optimization and data-driven detail synthesis, enabling a range of artistic effects that previously required disparate and specialized techniques.
Deep Volumetric Video From Very Sparse Multi-view Performance Capture
Focuses on template-free, per-frame 3D surface reconstruction from as few as three RGB sensors, for which conventional visual hull or multi-view stereo methods fail to generate plausible results.