Representation Decomposition For Image Manipulation And Beyond

@inproceedings{Chen2020RepresentationDF,
  title={Representation Decomposition For Image Manipulation And Beyond},
  author={Shang-Fu Chen and Jia-Wei Yan and Ya Su and Yu-Chiang Frank Wang},
  booktitle={2021 IEEE International Conference on Image Processing (ICIP)},
  year={2021},
  pages={1169-1173}
}
Representation disentanglement aims at learning interpretable features, so that the output can be recovered or manipulated accordingly. Existing works such as InfoGAN [1] and ACGAN [2] derive disjoint attribute codes for feature disentanglement, an approach that is not applicable to existing/trained generative models. In this paper, we propose a decomposition-GAN (dec-GAN), which is able to decompose an existing latent representation into content and attribute…
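The decomposition idea in the abstract can be illustrated with a toy sketch. Everything below is illustrative, not the paper's method: dec-GAN learns nonlinear decomposer/composer networks against a pretrained generator, whereas here the decomposer is a pair of fixed linear projections and the composer is their pseudo-inverse, just to show that a latent code z can be split into a content code and an attribute code and then recomposed.

```python
import numpy as np

rng = np.random.default_rng(0)
d_z, d_c, d_a = 8, 6, 2  # latent, content, and attribute dimensions (arbitrary)

# Toy linear "decomposer": two fixed projections standing in for the
# learned networks in the paper (hypothetical placeholders).
W_c = rng.standard_normal((d_c, d_z))
W_a = rng.standard_normal((d_a, d_z))

def decompose(z):
    """Split a latent code into (content, attribute) codes."""
    return W_c @ z, W_a @ z

def compose(c, a):
    """Recompose a latent code from its parts via pseudo-inverse.
    The paper learns this mapping; pinv works here because the stacked
    projection is square (d_c + d_a == d_z) and almost surely invertible."""
    W = np.vstack([W_c, W_a])
    return np.linalg.pinv(W) @ np.concatenate([c, a])

z = rng.standard_normal(d_z)
c, a = decompose(z)
z_rec = compose(c, a)  # reconstructs z up to numerical error

# Manipulation amounts to swapping attribute codes between two samples
# while keeping each sample's content code fixed.
z2 = rng.standard_normal(d_z)
_, a2 = decompose(z2)
z_swapped = compose(c, a2)
```

In the actual model, z_swapped would be fed back into the pretrained generator to render the manipulated image; the toy version only demonstrates the split/recompose round trip.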


References

Showing 1-10 of 20 references

A Unified Feature Disentangler for Multi-Domain Image Translation and Manipulation

A novel and unified deep learning framework is presented that learns domain-invariant representations from data across multiple domains and exhibits superior performance on unsupervised domain adaptation.

Conditional Image Synthesis with Auxiliary Classifier GANs

A variant of GANs employing label conditioning is constructed that produces 128 x 128 image samples exhibiting global coherence, and it is demonstrated that high-resolution samples provide class information not present in low-resolution samples.

Gradient-based learning applied to document recognition

This paper reviews various methods applied to handwritten character recognition, compares them on a standard handwritten digit recognition task, and shows that convolutional neural networks outperform all other techniques.

Diverse Image-to-Image Translation via Disentangled Representations

This work presents an approach based on disentangled representations for generating diverse outputs without paired training images, and shows that it can generate diverse and realistic images on a wide range of tasks without paired training data.

InterFaceGAN: Interpreting the Disentangled Face Representation Learned by GANs

A framework called InterFaceGAN is proposed to interpret the disentangled face representation learned by state-of-the-art GAN models and to study the properties of the facial semantics encoded in the latent space; the results suggest that learning to synthesize faces spontaneously yields a disentangled and controllable face representation.

Unsupervised Discovery of Interpretable Directions in the GAN Latent Space

This paper introduces an unsupervised method to identify interpretable directions in the latent space of a pretrained GAN model by a simple model-agnostic procedure, and finds directions corresponding to sensible semantic manipulations without any form of (self-)supervision.
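The latent-direction manipulation that this line of work enables reduces to a simple shift in latent space. The numpy sketch below is a toy illustration only: the paper discovers its directions without supervision by jointly training a direction matrix and a reconstructor, whereas here the direction is just a random placeholder vector.

```python
import numpy as np

rng = np.random.default_rng(1)
d_z = 16  # latent dimensionality (arbitrary for this sketch)

# A candidate interpretable direction. In the paper this is *discovered*
# in an unsupervised way; here it is a random unit-vector placeholder.
n = rng.standard_normal(d_z)
n /= np.linalg.norm(n)

def edit_latent(z, alpha):
    """Shift a latent code along direction n by magnitude alpha.
    Feeding the shifted code to a pretrained generator would realize
    the corresponding semantic manipulation."""
    return z + alpha * n

z = rng.standard_normal(d_z)
z_edit = edit_latent(z, 3.0)
shift = z_edit - z  # the edit moves the code exactly alpha along n
```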

Emerging Disentanglement in Auto-Encoder Based Unsupervised Image Content Transfer

We study the problem of learning to map, in an unsupervised way, between domains A and B, such that the samples b in B contain all the information that exists in samples a in A and some additional information…

Disentangling Latent Space for VAE by Label Relevant/Irrelevant Dimensions

Zhilin Zheng, Li Sun · 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019
This paper presents a method for disentangling the latent space into the label relevant and irrelevant dimensions, zs and zu, for a single input, and shows that this method can be extended to GAN by adding a discriminator in the pixel domain so that it produces high quality and diverse images.

beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework

Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence…

Detach and Adapt: Learning Cross-Domain Disentangled Deep Representation

This work proposes a novel deep learning model, the Cross-Domain Representation Disentangler (CDRD), which can be applied to classification tasks in unsupervised domain adaptation and performs favorably against state-of-the-art image disentanglement and translation methods.