SiCloPe: Silhouette-Based Clothed People
- Ryota Natsume, Shunsuke Saito, S. Morishima
- Computer Science · IEEE/CVF Conference on Computer Vision and…
- 31 December 2018
We introduce a new silhouette-based representation for modeling clothed human bodies using deep generative models. Our method can reconstruct a complete and textured 3D model of a person wearing…
Facial performance sensing head-mounted display
A novel HMD is proposed that enables real-time, 3D facial performance-driven animation suitable for social interactions in virtual worlds, along with a short calibration step that readjusts the Gaussian mixture distribution of the mapping before each use.
Attention-based Multi-Patch Aggregation for Image Aesthetic Assessment
- Kekai Sheng, Weiming Dong, Chongyang Ma, Xing Mei, Feiyue Huang, B. Hu
- Computer Science · ACM Multimedia
- 15 October 2018
A novel multi-patch aggregation method for image aesthetic assessment is proposed, using an attention-based mechanism that adaptively adjusts the weight of each patch during training to improve learning efficiency; it outperforms existing methods by a large margin.
Single-view hair modeling using a hairstyle database
A novel data-driven framework is introduced that can digitize complete and highly complex 3D hairstyles from a single-view photograph, and is compared with state-of-the-art hair modeling algorithms.
LGM-Net: Learning to Generate Matching Networks for Few-Shot Learning
A novel meta-learning approach for few-shot classification is proposed, which learns transferable prior knowledge across tasks and directly produces network parameters for similar unseen tasks given their training samples.
3D hair synthesis using volumetric variational autoencoders
- Shunsuke Saito, Liwen Hu, Chongyang Ma, Hikaru Ibayashi, Linjie Luo, Hao Li
- Computer Science · ACM Trans. Graph.
- 4 December 2018
This work proposes to represent the manifold of 3D hairstyles implicitly through the compact latent space of a volumetric variational autoencoder (VAE); the approach is significantly more robust to challenging inputs and can handle a much wider variation of hairstyles than state-of-the-art data-driven hair modeling techniques.
Unconstrained realtime facial performance capture
- Pei-Lun Hsieh, Chongyang Ma, Jihun Yu, Hao Li
- Computer Science · IEEE Conference on Computer Vision and Pattern…
- 7 June 2015
This work introduces a realtime facial tracking system specifically designed for performance capture in unconstrained settings using a consumer-level RGB-D sensor and demonstrates robust and high-fidelity facial tracking on a wide range of subjects with highly incomplete and largely occluded data.
Deep Generative Modeling for Scene Synthesis via Hybrid Representations
A deep generative scene modeling technique is presented that uses a feed-forward neural network to map a prior distribution to the distribution of primary objects in indoor scenes, and introduces a 3D object arrangement representation that models the locations and orientations of objects based on their size and shape attributes.
Dynamic element textures
The method, called dynamic element textures, aims to produce controllable repetitions through a combination of constrained optimization and data-driven computation (synthesizing details), producing a range of artistic effects that previously required disparate and specialized techniques.
Robust hair capture using simulated examples
A data-driven hair capture framework based on example strands generated through hair simulation is presented; it can robustly reconstruct faithful 3D hair models from unprocessed input point clouds with large amounts of outliers, ensures improved control during hair digitization, and avoids implausible hair synthesis for a wide range of hairstyles.