Corpus ID: 236318260

LARGE: Latent-Based Regression through GAN Semantics

@article{Nitzan2021LARGELR,
  title={LARGE: Latent-Based Regression through GAN Semantics},
  author={Yotam Nitzan and Rinon Gal and Ofir Brenner and Daniel Cohen-Or},
  journal={ArXiv},
  year={2021},
  volume={abs/2107.11186}
}
We propose a novel method for solving regression tasks using few-shot or weak supervision. At the core of our method is the fundamental observation that GANs are incredibly successful at encoding semantic information within their latent space, even in a completely unsupervised setting. For modern generative frameworks, this semantic encoding manifests as smooth, linear directions which affect image attributes in a disentangled manner. These directions have been widely used in GAN-based image…
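The core idea in the abstract can be sketched in a few lines (a minimal illustration with hypothetical names and numpy, not the authors' code): after inverting an image into the GAN's latent space, the signed projection of its latent code onto a semantic direction yields a scalar feature that a few labeled examples can calibrate into a regression output.

```python
import numpy as np

def latent_regression_feature(w, direction):
    """Signed distance of latent code w along a unit-normalized
    semantic direction; serves as a 1-D regression feature."""
    d = direction / np.linalg.norm(direction)
    return float(np.dot(w, d))

# Toy example with a hypothetical 512-dim "age" direction.
rng = np.random.default_rng(0)
age_dir = rng.standard_normal(512)
w = rng.standard_normal(512)  # stands in for an inverted image's latent

feature = latent_regression_feature(w, age_dir)
# With a handful of labeled pairs (feature_i, y_i), a simple linear
# fit maps these features to the target attribute (few-shot setting).
```

The direction itself (e.g. "age") is assumed to come from an unsupervised direction-discovery method; only the final feature-to-label mapping needs supervision.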
StyleGAN-NADA: CLIP-Guided Domain Adaptation of Image Generators
TLDR: Leveraging the semantic power of large-scale Contrastive Language-Image Pre-training (CLIP) models, this work presents a text-driven method that allows shifting a generative model to new domains without having to collect even a single image from those domains.
StyleAlign: Analysis and Applications of Aligned StyleGAN Models
TLDR: This work performs the first detailed exploration of model alignment, focusing on StyleGAN, and finds that the child model’s latent spaces are semantically aligned with those of the parent, inheriting remarkably rich semantics even for distant data domains such as human faces and churches.

References

Showing 1–10 of 78 references
Interpreting the Latent Space of GANs for Semantic Face Editing
TLDR: This work proposes InterFaceGAN, a novel framework for semantic face editing that interprets the latent semantics learned by GANs, and finds that the latent code of well-trained generative models learns a disentangled representation after linear transformations.
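The linear editing this reference describes can be illustrated with a short sketch (hypothetical names, not InterFaceGAN's actual code): the unit normal of a hyperplane separating latent codes of two attribute classes acts as an editing direction, and shifting a code along it changes the attribute.

```python
import numpy as np

def edit_latent(z, normal, alpha):
    """Shift latent code z along the unit normal of a separating
    hyperplane; alpha sets the strength and sign of the edit."""
    n = normal / np.linalg.norm(normal)
    return z + alpha * n

rng = np.random.default_rng(1)
z = rng.standard_normal(512)
smile_normal = rng.standard_normal(512)  # hypothetical "smile" boundary normal

# Opposite-signed steps move the attribute in opposite directions.
z_more = edit_latent(z, smile_normal, alpha=2.0)
z_less = edit_latent(z, smile_normal, alpha=-2.0)
```

In practice the boundary normal would be obtained by fitting a linear classifier (e.g. an SVM) on latent codes labeled by an attribute predictor.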
Ensembling with Deep Generative Views
TLDR: This work uses StyleGAN2 as the source of generative augmentations and investigates whether such views can be applied to real images to benefit downstream analysis tasks such as image classification.
ReStyle: A Residual-Based StyleGAN Encoder via Iterative Refinement
TLDR: This work extends current encoder-based inversion methods with an iterative refinement mechanism, presenting a residual-based encoder, named ReStyle, that attains improved accuracy over current state-of-the-art encoder-based methods with a negligible increase in inference time.
Learning Transferable Visual Models From Natural Language Supervision
TLDR: It is demonstrated that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn state-of-the-art image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet.
Self-Supervised Viewpoint Learning From Image Collections
TLDR: This work proposes a novel learning framework that incorporates an analysis-by-synthesis paradigm to reconstruct images in a viewpoint-aware manner with a generative network, along with symmetry and adversarial constraints, to supervise the viewpoint estimation network.
LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop
TLDR: This work proposes to amplify human effort through a partially automated labeling scheme, leveraging deep learning with humans in the loop, and constructs a new image dataset, LSUN, which contains around one million labeled images for each of 10 scene categories and 20 object categories.
Classification Accuracy Score for Conditional Generative Models
TLDR: This work uses class-conditional generative models from several model classes, including variational autoencoders, autoregressive models, and generative adversarial networks (GANs), to infer the class labels of real data, revealing some surprising results not identified by traditional metrics.
A Style-Based Generator Architecture for Generative Adversarial Networks
TLDR: An alternative generator architecture for generative adversarial networks is proposed, borrowing from the style transfer literature, that improves the state of the art in traditional distribution quality metrics, leads to demonstrably better interpolation properties, and better disentangles the latent factors of variation.
Learning to Compose Domain-Specific Transformations for Data Augmentation
TLDR: The proposed method can make use of arbitrary, non-deterministic transformation functions, is robust to misspecified user input, and is trained on unlabeled data; it can be used to perform data augmentation for any end discriminative model.
Style encoding for class-specific image generation
A fundamental problem in employing deep learning algorithms in the medical field is the lack of labeled data and severe class imbalance. In this work, we present novel ways to enlarge small-scale…