Corpus ID: 226254298

Disentangling Latent Space for Unsupervised Semantic Face Editing

@article{Liu2020DisentanglingLS,
  title={Disentangling Latent Space for Unsupervised Semantic Face Editing},
  author={Kanglin Liu and Gaofeng Cao and Fei Zhou and Bozhi Liu and Jiang Duan and Guoping Qiu},
  journal={ArXiv},
  year={2020},
  volume={abs/2011.02638}
}
Editing facial images created by StyleGAN is a popular research topic with important applications. By editing the latent vectors, it is possible to control facial attributes such as smile, age, etc. However, facial attributes are entangled in the latent space, which makes it very difficult to control a specific attribute independently without affecting the others. The key to developing neat semantic control is to completely disentangle the latent space and perform image…
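The latent-vector editing described in the abstract can be sketched as moving a latent code along a learned attribute direction. The function and variable names below are illustrative, not taken from the paper, and the random "smile" direction stands in for a direction one would normally learn from labeled samples:

```python
import numpy as np

def edit_latent(w, direction, alpha):
    """Move a latent code along a (unit-norm) semantic direction.

    w         : latent vector, e.g. a 512-dim StyleGAN latent
    direction : attribute direction (e.g. a learned "smile" axis)
    alpha     : step size; the sign adds or removes the attribute
    """
    d = direction / np.linalg.norm(direction)
    return w + alpha * d

# hypothetical usage: a 512-dim latent and a random stand-in direction
w = np.random.randn(512)
smile_dir = np.random.randn(512)
w_edited = edit_latent(w, smile_dir, alpha=2.0)
```

When attributes are entangled, stepping along one such direction also changes other attributes; disentangling the latent space is what makes each direction affect only its target attribute.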

References

Showing 1–10 of 38 references
AttGAN: Facial Attribute Editing by Only Changing What You Want
The proposed method is extended for attribute style manipulation in an unsupervised manner and outperforms the state-of-the-art on realistic attribute editing with other facial details well preserved.
Interpreting the Latent Space of GANs for Semantic Face Editing
This work proposes a novel framework, called InterFaceGAN, for semantic face editing by interpreting the latent semantics learned by GANs, and finds that the latent code of well-trained generative models actually learns a disentangled representation after linear transformations.
A Style-Based Generator Architecture for Generative Adversarial Networks
An alternative generator architecture for generative adversarial networks is proposed, borrowing from style transfer literature, that improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation.
Analyzing and Improving the Image Quality of StyleGAN
This work redesigns the generator normalization, revisits progressive growing, and regularizes the generator to encourage good conditioning in the mapping from latent codes to images, thereby redefining the state of the art in unconditional image modeling.
Disentangled Image Generation Through Structured Noise Injection
It is shown that disentanglement in the first layer of the generator network leads to a disentangled latent space in the generated image, and through a grid-based structure, several aspects of disentanglement are achieved without complicating the network architecture and without requiring labels.
Arbitrary Style Transfer in Real-Time with Adaptive Instance Normalization
This paper presents a simple yet effective approach that for the first time enables arbitrary style transfer in real-time, comparable to the fastest existing approach, without the restriction to a pre-defined set of styles.
Deep Learning Face Attributes in the Wild
A novel deep learning framework for attribute prediction in the wild is proposed, which cascades two CNNs, LNet and ANet, that are fine-tuned jointly with attribute tags but pre-trained differently.
Generative Image Modeling Using Style and Structure Adversarial Networks
This paper factorizes the image generation process and proposes the Style and Structure Generative Adversarial Network, a model that is interpretable, generates more realistic images, and can be used to learn unsupervised RGBD representations.
Exploring the structure of a real-time, arbitrary neural artistic stylization network
A method which combines the flexibility of the neural algorithm of artistic style with the speed of fast style transfer networks to allow real-time stylization using any content/style image pair, successfully trained on a corpus of roughly 80,000 paintings.
Large Scale GAN Training for High Fidelity Natural Image Synthesis
It is found that applying orthogonal regularization to the generator renders it amenable to a simple "truncation trick," allowing fine control over the trade-off between sample fidelity and variety by reducing the variance of the generator's input.