SofGAN: A Portrait Image Generator with Dynamic Styling

@inproceedings{Chen2020SofGANAP,
  title={SofGAN: A Portrait Image Generator with Dynamic Styling},
  author={Anpei Chen and Ruiyang Liu and Ling Xie and Zhang Chen and Hao Su and Jingyi Yu},
  year={2020}
}
  • Anpei Chen, Ruiyang Liu, Ling Xie, Zhang Chen, Hao Su, Jingyi Yu
  • Published 7 July 2020
  • Computer Science
Fig. 1. First row: our portrait image generator allows explicit control over pose, shape and texture styles. Starting from the source image, we explicitly change its head pose (2nd image), facial/hair contour (3rd image) and texture style. Second row: interactive image generation from incomplete segmaps. Our method allows users to gradually add parts to the segmap and generates colorful images on-the-fly.
FreeStyleGAN: Free-view Editable Portrait Rendering with the Camera Manifold
Fig. 1. We introduce a new approach that generates an image with StyleGAN defined by a precise 3D camera. This enables faces synthesized with StyleGAN to be used in 3D free-viewpoint rendering, while …
StyleNeRF: A Style-based 3D-Aware Generator for High-resolution Image Synthesis
  • Jiatao Gu, Lingjie Liu, Peng Wang, C. Theobalt
  • Computer Science, Mathematics
  • ArXiv
  • 2021
We propose StyleNeRF, a 3D-aware generative model for photo-realistic high-resolution image synthesis with high multi-view consistency, which can be trained on unstructured 2D images. Existing …
Eyes Tell All: Irregular Pupil Shapes Reveal GAN-generated Faces
  • Hui Guo, Shu Hu, Xin Wang, Ming-Ching Chang, Siwei Lyu
  • Computer Science
  • ArXiv
  • 2021
This work shows that GAN-generated faces can be exposed via irregular pupil shapes, and describes an automatic method to extract the pupils from the two eyes and analyze their shapes to expose GAN-generated faces.
DyStyle: Dynamic Neural Network for Multi-Attribute-Conditioned Style Editing
  • Bingchuan Li, Shaofei Cai, +4 authors Zili Yi
  • Computer Science
  • ArXiv
  • 2021
A Dynamic Style Manipulation Network (DyStyle), whose structure and parameters vary with the input sample, is proposed to perform nonlinear and adaptive manipulation of latent codes for flexible and precise attribute control.
MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo
This work proposes a generic deep neural network that can reconstruct radiance fields from only three nearby input views via fast network inference; it leverages plane-swept cost volumes for geometry-aware scene reasoning and combines them with physically based volume rendering for neural radiance field reconstruction.
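The physically based volume rendering step mentioned in this entry is classic emission-absorption compositing along a camera ray; a minimal sketch of that step, assuming per-sample densities and colors (names and signatures are illustrative, not MVSNeRF's actual code):

```python
import math

def render_ray(densities, colors, deltas):
    """Composite RGB along one ray via emission-absorption volume rendering.

    densities: per-sample volume density sigma_i (non-negative floats)
    colors:    per-sample RGB triples
    deltas:    distances between consecutive samples along the ray
    Returns the composited RGB value for the ray.
    """
    rgb = [0.0, 0.0, 0.0]
    transmittance = 1.0  # fraction of light surviving up to this sample
    for sigma, color, delta in zip(densities, colors, deltas):
        alpha = 1.0 - math.exp(-sigma * delta)   # opacity of this sample
        weight = transmittance * alpha           # contribution to the pixel
        rgb = [r + weight * c for r, c in zip(rgb, color)]
        transmittance *= 1.0 - alpha             # attenuate for later samples
    return rgb
```

A fully opaque first sample dominates the result, while empty space contributes nothing, which is the behavior the compositing weights encode.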
VariTex: Variational Neural Face Textures
VariTex is, to the best of the authors' knowledge, the first method that learns a variational latent feature space of neural face textures, which allows sampling of novel identities and can generate geometrically consistent images of a novel identity.

References

Showing 1–10 of 44 references
Warp-guided GANs for single-photo facial animation
This paper introduces a novel method for real-time portrait animation from a single photo: it factors out the nonlinear geometric transformations exhibited in facial expressions with lightweight 2D warps and leaves appearance-detail synthesis to conditional generative neural networks for high-fidelity facial animation.
First Order Motion Model for Image Animation
This framework decouples appearance and motion information using a self-supervised formulation and uses a representation consisting of a set of learned keypoints along with their local affine transformations to support complex motions.
PIFu: Pixel-Aligned Implicit Function for High-Resolution Clothed Human Digitization
The proposed Pixel-aligned Implicit Function (PIFu), an implicit representation that locally aligns pixels of 2D images with the global context of their corresponding 3D object, achieves state-of-the-art performance on a public benchmark and outperforms prior work on clothed human digitization from a single image.
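The "pixel-aligned" part of this approach comes down to sampling an image feature at the continuous 2D projection of a 3D query point; a minimal bilinear-sampling sketch of that lookup (pure Python, illustrative names, not PIFu's actual API):

```python
def bilinear_sample(feature_map, u, v):
    """Bilinearly sample a per-pixel feature at continuous coordinates (u, v).

    feature_map: H x W nested lists of per-pixel feature vectors.
    Returns the interpolated feature vector at (u, v).
    """
    h, w = len(feature_map), len(feature_map[0])
    x0, y0 = int(u), int(v)                       # top-left integer pixel
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = u - x0, v - y0                       # fractional offsets

    def lerp(a, b, t):
        return [ai + (bi - ai) * t for ai, bi in zip(a, b)]

    top = lerp(feature_map[y0][x0], feature_map[y0][x1], fx)
    bot = lerp(feature_map[y1][x0], feature_map[y1][x1], fx)
    return lerp(top, bot, fy)
```

In PIFu-style methods, the sampled feature is then concatenated with the point's depth and fed to an implicit-function MLP.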
Editing in Style: Uncovering the Local Semantics of GANs
A simple and effective method for making local, semantically-aware edits to a target output image via a novel manipulation of style vectors that relies on the emergent disentanglement of semantic objects learned by StyleGAN during its training.
MaskGAN: Towards Diverse and Interactive Facial Image Manipulation
This work proposes a novel framework termed MaskGAN, enabling diverse and interactive face manipulation, and finds that semantic masks serve as a suitable intermediate representation for flexible face manipulation with fidelity preservation.
Deferred Neural Rendering: Image Synthesis using Neural Textures
This work proposes neural textures, learned feature maps trained as part of the scene capture process, which can be used to coherently re-render or manipulate existing video content in both static and dynamic environments at real-time rates.
Rotate-and-Render: Unsupervised Photorealistic Face Rotation From Single-View Images
This work proposes a novel unsupervised framework that can synthesize photo-realistic rotated faces using only single-view image collections in the wild, and proves that rotating faces in the 3D space back and forth and re-rendering them to the 2D plane can serve as a strong self-supervision.
Semantic Image Synthesis With Spatially-Adaptive Normalization
Spatially-adaptive normalization is proposed, a simple but effective layer for synthesizing photorealistic images given an input semantic layout, which allows users to easily control the style and content of image synthesis results as well as create multi-modal results.
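The layer this entry describes normalizes activations and then modulates them per pixel with a scale and shift predicted from the semantic layout. A minimal sketch with a single 1×1-conv stand-in for the paper's small modulation network (all names and shapes are illustrative):

```python
import numpy as np

def spade_norm(x, segmap, w_gamma, w_beta, eps=1e-5):
    """Minimal sketch of spatially-adaptive (de)normalization.

    x:       feature map, shape (C, H, W)
    segmap:  one-hot semantic layout, shape (S, H, W)
    w_gamma, w_beta: 1x1-conv weights, shape (C, S), mapping the layout to
        per-pixel scale and shift (a toy stand-in for the conv net used
        in the paper).
    """
    # Normalize each channel over its spatial extent.
    mu = x.mean(axis=(1, 2), keepdims=True)
    sigma = x.std(axis=(1, 2), keepdims=True)
    x_hat = (x - mu) / (sigma + eps)
    # Predict per-pixel modulation from the semantic layout.
    gamma = np.einsum("cs,shw->chw", w_gamma, segmap)
    beta = np.einsum("cs,shw->chw", w_beta, segmap)
    return (1 + gamma) * x_hat + beta
```

Because gamma and beta vary spatially with the layout, the layer preserves semantic information that a uniform normalization would wash out.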
HoloGAN: Unsupervised Learning of 3D Representations From Natural Images
HoloGAN is the first generative model that learns 3D representations from natural images in an entirely unsupervised manner and is shown to be able to generate images with similar or higher visual quality than other generative models.
Learning Implicit Fields for Generative Shape Modeling
  • Zhiqin Chen, Hao Zhang
  • Computer Science
  • 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2019
By replacing conventional decoders with the implicit decoder for representation learning and shape generation, this work demonstrates superior results on tasks such as generative shape modeling, interpolation, and single-view 3D reconstruction, particularly in terms of visual quality.
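The implicit decoder described here maps a 3D point plus a shape latent code to an occupancy value; a toy pure-Python sketch in that spirit (the MLP sizes, weight layout, and names are illustrative, not the paper's architecture):

```python
import math

def occupancy(point, latent, w1, b1, w2, b2):
    """Toy implicit shape decoder: a small MLP maps a 3D point
    concatenated with a shape latent code to an occupancy value in (0, 1).
    Weights are plain nested lists for illustration.
    """
    x = list(point) + list(latent)            # condition on point and shape code
    hidden = [max(0.0, sum(wi * xi for wi, xi in zip(row, x)) + bi)
              for row, bi in zip(w1, b1)]     # single ReLU layer
    logit = sum(w * h for w, h in zip(w2, hidden)) + b2
    return 1.0 / (1.0 + math.exp(-logit))     # sigmoid -> inside/outside
```

Thresholding this field at 0.5 over a grid of query points yields a surface that can be extracted with marching cubes.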