Face-to-Parameter Translation for Game Character Auto-Creation

@inproceedings{Shi2019FacetoParameterTF,
  title={Face-to-Parameter Translation for Game Character Auto-Creation},
  author={Tianyang Shi and Yi Yuan and Changjie Fan and Zhengxia Zou and Zhenwei Shi and Y. Liu},
  booktitle={2019 IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2019},
  pages={161--170}
}
Character customization systems are an important component of Role-Playing Games (RPGs), where players are allowed to edit the facial appearance of their in-game characters to their own preferences rather than using default templates. [...] Key Method: To effectively minimize the distance between the created face and the real one, two loss functions, a "discriminative loss" and a "facial content loss", are specifically designed.
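The two-loss parameter search described above can be illustrated with a minimal sketch. This is a toy stand-in, not the paper's implementation: a linear "renderer" maps a few facial parameters to image features, and gradient descent minimizes a weighted sum of a stand-in "discriminative loss" (global distance) and a stand-in "facial content loss" (distance on a feature subset).

```python
import random

# Toy, purely illustrative sketch (NOT the paper's code): a linear "renderer"
# maps 3 facial parameters to 6 image features; gradient descent minimizes a
# weighted sum of a stand-in "discriminative loss" (global L2 distance) and a
# stand-in "facial content loss" (L2 on a feature subset).
random.seed(0)
W = [[random.gauss(0, 1) for _ in range(3)] for _ in range(6)]  # toy renderer weights
target = [random.gauss(0, 1) for _ in range(6)]                 # features of the "real" face

def render(p):
    return [sum(w * x for w, x in zip(row, p)) for row in W]

def discriminative_loss(img):
    # stand-in for the paper's global-appearance term
    return sum((a - b) ** 2 for a, b in zip(img, target))

def facial_content_loss(img):
    # stand-in for the paper's local identity term (first half of the features)
    return sum((a - b) ** 2 for a, b in zip(img[:3], target[:3]))

def total_loss(p, alpha=1.0, beta=0.5):
    img = render(p)
    return alpha * discriminative_loss(img) + beta * facial_content_loss(img)

# Gradient descent on the facial parameters (analytic gradient for the
# linear toy renderer; the paper instead backpropagates through a learned
# imitator of the game engine's renderer).
p = [0.0, 0.0, 0.0]
lr = 0.01
for _ in range(1000):
    img = render(p)
    g_img = [2.0 * (a - b) for a, b in zip(img, target)]
    for i in range(3):                      # add the content-loss gradient
        g_img[i] += 0.5 * 2.0 * (img[i] - target[i])
    grad = [sum(W[r][j] * g_img[r] for r in range(6)) for j in range(3)]
    p = [x - lr * g for x, g in zip(p, grad)]

print(total_loss(p) < total_loss([0.0, 0.0, 0.0]))  # True: the combined loss decreased
```

The weighting `alpha`/`beta` and the linear renderer are made-up placeholders; the real method's losses are computed from deep network features.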
Neutral Face Game Character Auto-Creation via PokerFace-GAN
A novel method named "PokerFace-GAN" for neutral-face game character auto-creation that builds a differentiable character renderer, which is more flexible than previous methods in multi-view rendering cases, and uses adversarial training to effectively disentangle the expression parameters from the identity parameters, thus generating player-preferred neutral-face (expression-less) characters.
MeInGame: Create a Game Character Face from a Single Portrait
An automatic character face creation method that predicts both facial shape and texture from a single portrait; it can be integrated into most existing 3D games and outperforms state-of-the-art methods used in games.
Face Translation based on Semantic Style Transfer and Rendering from One Single Image
  • Peizhen Lin, Baoyu Liu, Lei Wang, Zetong Lei, Jun Cheng
  • Computer Science
  • ICSCA
  • 2021
A face translation framework that transfers the visual effects of a single prototype image onto human faces while preserving the original person's identity and expression information.
Automatic Generation of 3D Natural Anime-like Non-Player Characters with Machine Learning
This paper proposes a novel system for generating a rich variety of 3D anime-like NPCs in real time to make the scenes look more natural in role-playing games.
Stylized Neural Painting
This paper proposes an image-to-painting translation method that generates vivid and realistic painting artworks with controllable styles, and designs a novel neural renderer that imitates the behavior of the vector renderer and frames stroke prediction as a parameter-searching process maximizing the similarity between the input and the rendering output.
Image-to-Image Translation Method for Game-Character Face Generation
This work applies two feature loss functions specialized for faces in an image-to-image translation technique based on the generative adversarial network framework, which is superior to other recent image-to-image algorithms in the case of face deformations.
Unsupervised Learning Facial Parameter Regressor for Action Unit Intensity Estimation via Differentiable Renderer
Quantitative evaluations are performed on two public databases, BP4D and DISFA, demonstrating that the proposed method achieves comparable or better performance than state-of-the-art methods and remains valid in the wild.
Unsupervised Facial Action Unit Intensity Estimation via Differentiable Optimization
This work proposes an unsupervised framework, GE-Net, for facial AU intensity estimation from a single image, without requiring any annotated AU data, and demonstrates that the method achieves state-of-the-art results compared with existing methods.
A State-of-the-Art Review on Image Synthesis With Generative Adversarial Networks
This review introduces recent research on GANs in the field of image processing, including image synthesis, image generation, image semantic editing, image-to-image translation, image super-resolution, image inpainting, and cartoon generation.
Parametric fur from an image
This work proposes a method to automatically estimate appropriate parameters from an image using a pre-trained deep convolutional neural network model, and demonstrates that the proposed method can estimate fur parameters appropriately for a wide range of fur types.

References

Showing 10 of 69 references
Fast Patch-based Style Transfer of Arbitrary Style
This paper proposes a simpler optimization objective based on local matching that combines the content structure and style textures in a single layer of the pretrained network, with desirable properties such as a simpler optimization landscape, intuitive parameter tuning, and consistent frame-by-frame performance on video.
MoFA: Model-Based Deep Convolutional Face Autoencoder for Unsupervised Monocular Reconstruction
A novel model-based deep convolutional autoencoder that addresses the highly challenging problem of reconstructing a 3D human face from a single in-the-wild color image, and can be trained end-to-end in an unsupervised manner, which renders training on very large real-world data feasible.
Attribute-Guided Face Generation Using Conditional CycleGAN
This work conditions the CycleGAN and proposes Conditional CycleGAN, designed to handle unpaired training data (since the training low-res and high-res attribute images may not necessarily align with each other) and to allow easy control of the appearance of the generated face via the input attributes.
Style Transfer Via Texture Synthesis
This paper proposes a novel style transfer algorithm that extends the texture synthesis work of Kwatra et al. (2005), while aiming to get stylized images that are closer in quality to the CNN ones.
End-to-End 3D Face Reconstruction with Deep Neural Networks
This work proposes a DNN-based approach for End-to-End 3D Face Reconstruction (UH-E2FAR) from a single 2D image, with a multi-task loss function and a fusion convolutional neural network (CNN) to improve facial expression reconstruction.
PairedCycleGAN: Asymmetric Style Transfer for Applying and Removing Makeup
This paper introduces an automatic method for editing a portrait photo so that the subject appears to be wearing makeup in the style of another person in a reference photo, using a new framework of cycle-consistent generative adversarial networks.
GANFIT: Generative Adversarial Network Fitting for High Fidelity 3D Face Reconstruction
This paper utilizes GANs to train a very powerful generator of facial texture in UV space, and revisits the original 3D Morphable Model (3DMM) fitting approaches, using non-linear optimization to find the optimal latent parameters that best reconstruct the test image under a new perspective.
Parametric T-Spline Face Morphable Model for Detailed Fitting in Shape Subspace
A parametric T-spline morphable model (T-SplineMM) for 3D face representation is proposed, which has great advantages in fitting data from an unknown source accurately and is robust to missing data and noise; experiments demonstrate the effectiveness of the model.
Unsupervised Creation of Parameterized Avatars
A generalization bound is defined that is based on discrepancy, and a GAN is employed to implement a network solution that corresponds to this bound and is shown to solve the problem of automatically creating avatars.
A morphable model for the synthesis of 3D faces
A new technique for modeling textured 3D faces by transforming the shape and texture of the examples into a vector space representation, which regulates the naturalness of modeled faces, avoiding faces with an "unlikely" appearance.
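The morphable-model idea above can be illustrated with a minimal sketch. The numbers are made up: real 3DMMs apply PCA to dense scanned face meshes, whereas here a new face is simply a convex combination of a few toy example shape vectors.

```python
# Minimal toy sketch of the morphable-model idea (real 3DMMs apply PCA to
# dense scanned face meshes; these 3-element "shape vectors" are made up).
examples = [
    [0.0, 1.0, 2.0],  # toy shape vector of example face A
    [2.0, 1.0, 0.0],  # toy shape vector of example face B
    [1.0, 1.0, 1.0],  # toy shape vector of example face C
]
weights = [0.5, 0.25, 0.25]  # hypothetical blend coefficients, summing to 1

# A new face is a convex combination of the example shape vectors, which keeps
# the result inside the "natural" region spanned by the examples.
morphed = [sum(w * ex[i] for w, ex in zip(weights, examples))
           for i in range(len(examples[0]))]
print(morphed)  # -> [0.75, 1.0, 1.25]
```

Restricting the weights to a convex combination is one simple way to regulate naturalness; the original model additionally constrains coefficients by their estimated probability.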