Corpus ID: 236447387

Segmentation in Style: Unsupervised Semantic Image Segmentation with StyleGAN and CLIP

@article{Pakhomov2021SegmentationIS,
  title={Segmentation in Style: Unsupervised Semantic Image Segmentation with StyleGAN and CLIP},
  author={Daniil Pakhomov and Sanchit Hira and Narayani Wagle and Kemar E. Green and Nassir Navab},
  journal={ArXiv},
  year={2021},
  volume={abs/2107.12518}
}
We introduce a method that automatically segments images into semantically meaningful regions without human supervision. The derived regions are consistent across different images and coincide with human-defined semantic classes on some datasets. The method is particularly useful in cases where the labelling and definition of semantic regions pose a challenge for humans. In our work, we use a pretrained StyleGAN2 [8] generative model: clustering in the feature space of the generative…
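The clustering step mentioned in the abstract can be illustrated with a minimal sketch, which is not necessarily the authors' exact pipeline: per-pixel activations from an intermediate StyleGAN2 synthesis layer are clustered with K-means, and the cluster indices act as unsupervised segmentation labels. The tensor name `features` and the choice of K-means are assumptions for illustration.

```python
import numpy as np
import torch
from sklearn.cluster import KMeans

def cluster_generator_features(features: torch.Tensor, num_regions: int = 5) -> np.ndarray:
    """Cluster per-pixel generator activations into pseudo-semantic regions.

    features: assumed (N, C, H, W) activations from an intermediate
    StyleGAN2 synthesis layer for N generated images.
    Returns an (N, H, W) array of cluster indices usable as segmentation masks.
    """
    n, c, h, w = features.shape
    # One C-dimensional feature vector per pixel: (N*H*W, C).
    pixels = features.permute(0, 2, 3, 1).reshape(-1, c).detach().cpu().numpy()
    labels = KMeans(n_clusters=num_regions, n_init=10, random_state=0).fit_predict(pixels)
    # Reshape assignments back into per-image label maps.
    return labels.reshape(n, h, w)
```

Because every image comes from the same generator, pixels belonging to the same part (e.g. eyes or hair in face images) tend to land in the same cluster across samples, which is what makes the derived regions consistent between images.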
Citations

CLIP-Forge: Towards Zero-Shot Text-to-Shape Generation
TLDR: This work presents a simple yet effective method for zero-shot text-to-shape generation based on a two-stage training process, which depends only on an unlabelled shape dataset and a pre-trained image-text network such as CLIP.

CLOOB: Modern Hopfield Networks with InfoLOOB Outperform CLIP
TLDR: CLOOB consistently outperforms CLIP at zero-shot transfer learning across all considered architectures and datasets; the two models are compared, after training on the Conceptual Captions and YFCC datasets, with respect to their zero-shot transfer performance on other datasets.

LAFITE: Towards Language-Free Training for Text-to-Image Generation
TLDR: This is the first work to train text-to-image generation models without any text data; it leverages the well-aligned multimodal semantic space of the powerful pre-trained CLIP model and can also be applied to fine-tuning pretrained models, saving both training time and cost.

References

Showing 1-10 of 34 references
DatasetGAN: Efficient Labeled Data Factory with Minimal Human Effort
TLDR: This work introduces DatasetGAN, an automatic procedure for generating massive datasets of high-quality semantically segmented images with minimal human effort; it is on par with fully supervised methods that in some cases require as much as 100x more annotated data.
Semantic Segmentation with Generative Models: Semi-Supervised Learning and Strong Out-of-Domain Generalization
TLDR: This paper proposes a novel framework for discriminative pixel-level tasks using a generative model of both images and labels; the model captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images supplemented with only a few labeled ones.
DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs
TLDR: This work addresses semantic image segmentation with deep learning, proposes atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales, and improves the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models.
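For context, a minimal PyTorch sketch of the ASPP idea summarized above: parallel atrous (dilated) convolutions at several rates over the same feature map, fused by a 1x1 convolution. Channel counts and dilation rates here are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Atrous Spatial Pyramid Pooling: parallel dilated convs at several rates."""

    def __init__(self, in_ch: int, out_ch: int, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            # padding=rate keeps the spatial resolution unchanged.
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

# Example: backbone features (1, 256, 33, 33) -> (1, 256, 33, 33).
aspp = ASPP(256, 256)
out = aspp(torch.randn(1, 256, 33, 33))
```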
The Cityscapes Dataset for Semantic Urban Scene Understanding
TLDR: This work introduces Cityscapes, a benchmark suite and large-scale dataset for training and testing approaches to pixel-level and instance-level semantic labeling; it exceeds previous attempts in terms of dataset size, annotation richness, scene variability, and complexity.
Learning Transferable Visual Models From Natural Language Supervision
TLDR: It is demonstrated that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn state-of-the-art image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet.
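The pre-training task described in this summary is a symmetric contrastive objective over paired image and text embeddings. A minimal sketch of that objective (encoders omitted, temperature value illustrative):

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb: torch.Tensor,
                          text_emb: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE-style loss: the i-th image should match the i-th caption.

    image_emb, text_emb: (B, D) embeddings from the image and text encoders.
    """
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    # (B, B) cosine-similarity logits between every image and every caption.
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    # Cross-entropy in both directions (image -> text and text -> image).
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2
```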
StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery
TLDR: This work leverages the power of the recently introduced Contrastive Language-Image Pre-training (CLIP) models to develop a text-based interface for StyleGAN image manipulation that does not require manual annotation effort.
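One simple way such a text-based interface can be realized is by optimizing a StyleGAN latent code so that the CLIP embedding of the generated image moves toward the embedding of a text prompt. The sketch below is a rough illustration of latent optimization only, not the paper's full method; `generator` and `encode_image` are hypothetical callables, and `prompt_emb` is assumed to be a precomputed CLIP text embedding.

```python
import torch
import torch.nn.functional as F

def edit_latent_with_text(generator, encode_image,
                          w_init: torch.Tensor, prompt_emb: torch.Tensor,
                          steps: int = 200, lr: float = 0.05,
                          lambda_reg: float = 0.01) -> torch.Tensor:
    """Optimize a latent code w so that CLIP(image) aligns with CLIP(text).

    generator:    hypothetical differentiable callable mapping w -> image tensor.
    encode_image: hypothetical callable mapping an image -> CLIP embedding.
    prompt_emb:   precomputed CLIP text embedding of the edit prompt.
    """
    w = w_init.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([w], lr=lr)
    prompt_emb = F.normalize(prompt_emb, dim=-1)
    for _ in range(steps):
        optimizer.zero_grad()
        image = generator(w)
        img_emb = F.normalize(encode_image(image), dim=-1)
        # CLIP loss (1 - cosine similarity) plus a penalty that keeps w close
        # to its starting point so unrelated attributes are preserved.
        loss = (1 - (img_emb * prompt_emb).sum(dim=-1).mean()
                + lambda_reg * (w - w_init).pow(2).mean())
        loss.backward()
        optimizer.step()
    return w.detach()
```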
Pyramid Scene Parsing Network
TLDR: This paper exploits global context information through region-based context aggregation with a pyramid pooling module, together with the proposed pyramid scene parsing network (PSPNet), to produce good-quality results on the scene parsing task.
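A minimal sketch of the pyramid pooling idea summarized above: the feature map is average-pooled at several bin sizes, each pooled map is projected and upsampled back to the input resolution, and everything is concatenated with the original features. Bin sizes and channel counts are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPooling(nn.Module):
    """PSPNet-style pyramid pooling over several global/regional bin sizes."""

    def __init__(self, in_ch: int, bins=(1, 2, 3, 6)):
        super().__init__()
        reduced = in_ch // len(bins)
        self.stages = nn.ModuleList([
            nn.Sequential(nn.AdaptiveAvgPool2d(b),
                          nn.Conv2d(in_ch, reduced, kernel_size=1))
            for b in bins
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, w = x.shape[2:]
        pooled = [
            F.interpolate(stage(x), size=(h, w), mode="bilinear", align_corners=False)
            for stage in self.stages
        ]
        # Concatenate original features with upsampled context features.
        return torch.cat([x] + pooled, dim=1)

# Example: (1, 512, 60, 60) -> (1, 512 + 4*128, 60, 60).
ppm = PyramidPooling(512)
out = ppm(torch.randn(1, 512, 60, 60))
```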
Analyzing and Improving the Image Quality of StyleGAN
TLDR: This work redesigns the generator normalization, revisits progressive growing, and regularizes the generator to encourage good conditioning in the mapping from latent codes to images, thereby redefining the state of the art in unconditional image modeling.
MaskGAN: Towards Diverse and Interactive Facial Image Manipulation
TLDR: This work proposes a novel framework termed MaskGAN, enabling diverse and interactive face manipulation, and finds that semantic masks serve as a suitable intermediate representation for flexible face manipulation with fidelity preservation.
A Style-Based Generator Architecture for Generative Adversarial Networks
TLDR: An alternative generator architecture for generative adversarial networks, borrowing from the style transfer literature, is proposed; it improves the state of the art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and better disentangles the latent factors of variation.