Corpus ID: 220056269

InfoGAN-CR and ModelCentrality: Self-supervised Model Training and Selection for Disentangling GANs

@inproceedings{Lin2020InfoGANCRAM,
  title={InfoGAN-CR and ModelCentrality: Self-supervised Model Training and Selection for Disentangling GANs},
  author={Zinan Lin and Kiran Koshy Thekumparampil and Giulia C. Fanti and Sewoong Oh},
  booktitle={ICML},
  year={2020}
}
Disentangled generative models map a latent code vector to a target space, while enforcing that a subset of the learned latent codes are interpretable and associated with distinct properties of the target distribution. Recent advances have been dominated by Variational AutoEncoder (VAE)-based methods, while training disentangled generative adversarial networks (GANs) remains challenging. In this work, we show that the dominant challenges facing disentangled GANs can be mitigated through the use… 
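The abstract is truncated before it names the mechanism, but the title points to a contrastive regularizer (CR). As a minimal, non-authoritative sketch of that idea: generate pairs of images whose latent codes agree in exactly one coordinate, and train an auxiliary classifier to identify which coordinate was shared, rewarding the generator when the shared factor is recognizable. The names `G` (generator) and `H` (pair classifier), the code ranges, and all shapes below are illustrative assumptions, not the paper's exact architecture.

```python
# Sketch of a contrastive-regularizer (CR) loss for a disentangling GAN.
# G: generator taking [codes, noise]; H: classifier scoring which of the
# c_dim codes an image pair shares. Both are hypothetical modules here.
import torch
import torch.nn.functional as F

def cr_loss(G, H, batch_size, c_dim, z_dim, device="cpu"):
    shared = torch.randint(c_dim, (batch_size,), device=device)  # shared coordinate per pair
    c1 = torch.rand(batch_size, c_dim, device=device) * 2 - 1    # codes assumed uniform in [-1, 1]
    c2 = torch.rand(batch_size, c_dim, device=device) * 2 - 1
    rows = torch.arange(batch_size, device=device)
    c2[rows, shared] = c1[rows, shared]                          # pairs agree in one coordinate only
    z1 = torch.randn(batch_size, z_dim, device=device)           # independent nuisance noise
    z2 = torch.randn(batch_size, z_dim, device=device)
    x1 = G(torch.cat([c1, z1], dim=1))
    x2 = G(torch.cat([c2, z2], dim=1))
    logits = H(x1, x2)                                           # (batch_size, c_dim) scores
    return F.cross_entropy(logits, shared)                       # low when the shared factor is identifiable
```

In training, this loss would be minimized jointly by `G` and `H` alongside the usual GAN and InfoGAN mutual-information terms; the ModelCentrality half of the title concerns model selection rather than training (see the sketch under the UDR reference below).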

Citations

Do Generative Models Know Disentanglement? Contrastive Learning is All You Need
TLDR
An unsupervised, model-agnostic method that achieves state-of-the-art disentanglement given pretrained non-disentangled generative models, including GANs, VAEs, and flows.
Disentangled Representations from Non-Disentangled Models
TLDR
This paper proposes to extract disentangled representations from state-of-the-art generative models trained without disentangling terms in their objectives, using few or no hyperparameters when learning representations while achieving results on par with existing state-of-the-art models.
Learning Disentangled Representations with Latent Variation Predictability
TLDR
The proposed variation predictability is a general constraint applicable to both VAE and GAN frameworks for boosting disentangled latent representations; it correlates well with existing metrics that require ground truth, and the proposed algorithm is effective for disentanglement learning.
An Improved Semi-Supervised VAE for Learning Disentangled Representations
TLDR
This work focuses on semi-supervised disentanglement learning and extends Locatello et al. (2019) by introducing another source of supervision, denoted label replacement: the inferred representation associated with a data point is replaced with its ground-truth representation whenever the latter is available during training.
Full Encoder: Make Autoencoders Learn Like PCA
TLDR
Full Encoder is a novel unified autoencoder framework, a non-linear counterpart to PCA, that can be used to determine the degrees of freedom in a non-linear system and is useful for data compression and anomaly detection.
Challenging β-VAE with β<1 for Disentanglement Via Dynamic Learning
TLDR
A novel DynamicVAE is proposed that leverages an incremental PI controller, a variant of the proportional-integral-derivative (PID) controller, together with a moving average and a hybrid annealing method, to effectively decouple reconstruction and disentanglement learning.
DynamicVAE: Decoupling Reconstruction Error and Disentangled Representation Learning
TLDR
Evaluation results on three benchmark datasets demonstrate that DynamicVAE significantly improves the reconstruction accuracy while achieving disentanglement comparable to the best of existing methods.
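The two DynamicVAE entries above both hinge on an incremental PI controller that steers the β weight of a β-VAE so the KL term tracks a target value. A minimal sketch, assuming the textbook incremental PI form; all gains, bounds, and the initial β are placeholder values, not the paper's settings:

```python
class IncrementalPI:
    """Incremental PI controller for the beta weight of a beta-VAE:
    beta_t = beta_{t-1} + kp*(e_t - e_{t-1}) + ki*e_t, with
    e_t = observed KL - target KL. If the KL term overshoots the
    target, beta rises and pushes it back down, and vice versa."""

    def __init__(self, kp=0.01, ki=0.001, beta_init=1.0,
                 beta_min=0.0, beta_max=1.0):
        self.kp, self.ki = kp, ki
        self.beta = beta_init
        self.prev_error = 0.0
        self.beta_min, self.beta_max = beta_min, beta_max

    def update(self, kl_observed, kl_target):
        error = kl_observed - kl_target
        self.beta += self.kp * (error - self.prev_error) + self.ki * error
        self.beta = min(max(self.beta, self.beta_min), self.beta_max)
        self.prev_error = error
        return self.beta
```

Each training step would call `update()` with the current batch KL and multiply the VAE's KL term by the returned β; the integral action is what lets β settle below 1 when a small weight suffices.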
Unsupervised Foreground-Background Segmentation with Equivariant Layered GANs
We propose an unsupervised foreground-background segmentation method by training a segmentation network on a synthetic pseudo-segmentation dataset generated from GANs, which are trained from a…
A Large-scale Study on Unsupervised Outlier Model Selection: Do Internal Strategies Suffice?
TLDR
This work studies the feasibility of employing internal model-evaluation strategies for selecting an outlier-detection model, and finds that none is practically useful, as they select models only comparable to a state-of-the-art detector with a randomly chosen configuration.

References

Showing 1–10 of 74 references
High-Fidelity Synthesis with Disentangled Representation
TLDR
This work proposes the Information Distillation Generative Adversarial Network (ID-GAN), a simple yet generic framework that easily incorporates existing state-of-the-art models for both disentanglement learning and high-fidelity synthesis, and demonstrates, for the first time, photo-realistic high-resolution image synthesis using disentangled representations.
Learning Disentangled Representations with Semi-Supervised Deep Generative Models
TLDR
This work proposes to learn disentangled representations that encode distinct aspects of the data into separate variables using model architectures that generalise from standard VAEs, employing a general graphical model structure in the encoder and decoder.
Hyperprior Induced Unsupervised Disentanglement of Latent Representations
TLDR
It is argued that statistical independence in the latent space of VAEs can be enforced in a principled hierarchical Bayesian manner by augmenting the standard VAE with an inverse-Wishart (IW) prior on the covariance matrix of the latent code.
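For concreteness, the hierarchical construction described above can be written in two lines; ν (degrees of freedom) and Ψ (scale matrix) are generic inverse-Wishart hyperparameters, not the paper's choices:

```latex
% Hyperprior sketch: the latent covariance is itself a latent variable
% with an inverse-Wishart prior (nu: degrees of freedom, Psi: scale).
\Sigma \sim \mathcal{IW}(\nu, \Psi), \qquad
\mathbf{z} \mid \Sigma \sim \mathcal{N}(\mathbf{0}, \Sigma)
```

Concentrating the IW prior around (a multiple of) the identity penalizes off-diagonal covariance, which is one way to nudge the latent coordinates toward statistical independence.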
beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework
Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence…
A Heuristic for Unsupervised Model Selection for Variational Disentangled Representation Learning
TLDR
The approach, Unsupervised Disentanglement Ranking (UDR), leverages recent theoretical results explaining why variational autoencoders disentangle to quantify the quality of disentangled representations via pairwise comparisons between trained model representations.
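UDR's pairwise comparisons echo the ModelCentrality idea in the headline paper: with no ground truth, trust the model that agrees most with its peers. A minimal sketch of that selection rule; `similarity` is a placeholder for any pairwise agreement measure (e.g. best-matched per-dimension correlation between two models' latents), not either paper's exact metric:

```python
import numpy as np

def select_central_model(reps, similarity):
    """Pick the most 'central' of several trained models.
    reps: one (n_samples, latent_dim) array per model, computed on
    the same inputs. similarity: callable scoring agreement between
    two representation matrices (higher = more similar)."""
    n = len(reps)
    scores = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            scores[i, j] = scores[j, i] = similarity(reps[i], reps[j])
    # Average agreement with every other model; argmax is the medoid-like pick.
    return int(np.argmax(scores.sum(axis=1) / (n - 1)))
```

The intuition, in both UDR and ModelCentrality, is that well-disentangled models tend to recover similar factors and therefore cluster together, while poorly disentangled models fail in idiosyncratic ways.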
OOGAN: Disentangling GAN with One-Hot Sampling and Orthogonal Regularization
TLDR
It is shown that GANs have a natural advantage in disentangling when paired with a straightforward and robust alternating latent-variable (noise) sampling method; a new perspective on designing the generator and discriminator is also provided, demonstrating that a minor structural change plus orthogonal regularization on model weights yields improved disentanglement.
VEEGAN: Reducing Mode Collapse in GANs using Implicit Variational Learning
TLDR
VEEGAN is introduced; it features a reconstructor network that reverses the action of the generator by mapping from data to noise, resists mode collapse to a far greater extent than other recent GAN variants, and produces more realistic samples.
Weakly Supervised Disentanglement by Pairwise Similarities
TLDR
Experimental results demonstrate that utilizing weak supervision improves the performance of the disentanglement method substantially.
Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations
TLDR
This paper theoretically shows that the unsupervised learning of disentangled representations is fundamentally impossible without inductive biases on both the models and the data, and trains more than 12,000 models covering the most prominent methods and evaluation metrics on seven different datasets.
Learning Deep Disentangled Embeddings with the F-Statistic Loss
TLDR
A new paradigm for discovering disentangled representations of class structure is proposed, along with a novel loss function based on the F statistic, which describes the separation of two or more distributions.
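For reference, the loss above builds on the classical one-way ANOVA F statistic, which scores how well K groups (here, classes) are separated along a dimension:

```latex
% One-way ANOVA F statistic for K groups with n_k samples each (N total):
% between-group variability divided by within-group variability.
F = \frac{\sum_{k=1}^{K} n_k \,(\bar{x}_k - \bar{x})^2 / (K-1)}
         {\sum_{k=1}^{K} \sum_{i=1}^{n_k} (x_{ki} - \bar{x}_k)^2 / (N-K)}
```

where \(\bar{x}_k\) is the mean of group k and \(\bar{x}\) the grand mean; a large F means class means are far apart relative to within-class spread, which is exactly the separation such a loss rewards.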