Corpus ID: 235421596

TransGAN: Two Pure Transformers Can Make One Strong GAN, and That Can Scale Up

@inproceedings{Jiang2021TransGANTP,
  title={TransGAN: Two Pure Transformers Can Make One Strong GAN, and That Can Scale Up},
  author={Yifan Jiang and Shiyu Chang and Zhangyang Wang},
  booktitle={NeurIPS},
  year={2021}
}
The recent explosive interest in transformers has suggested their potential to become powerful "universal" models for computer vision tasks, such as classification, detection, and segmentation. While those attempts mainly study discriminative models, we explore transformers on some more notoriously difficult vision tasks, e.g., generative adversarial networks (GANs). Our goal is to conduct the first pilot study in building a GAN completely free of convolutions, using only pure transformer…
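The core idea — a generator built only from attention and MLP layers, with no convolutions — can be illustrated with a toy sketch. The structure below (latent vector projected to patch tokens, one transformer block, tokens projected to pixel patches and assembled into an image) is a simplified, untrained illustration of the general pure-transformer-generator pattern, not the actual TransGAN architecture; all weight shapes and names here are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over patch tokens."""
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V

def toy_transformer_generator(z, rng, n_patches=16, dim=32, patch_pixels=4):
    """Map a latent vector z to an 8x8 'image' using only attention and MLP
    layers with random, untrained weights -- a structural sketch only."""
    # Project the latent to a sequence of patch tokens, add positional embeddings.
    W_in = rng.standard_normal((z.size, n_patches * dim)) * 0.02
    tokens = (z @ W_in).reshape(n_patches, dim)
    tokens = tokens + rng.standard_normal((n_patches, dim)) * 0.02  # positions
    # One transformer block: attention + feed-forward, each with a residual.
    Wq, Wk, Wv = (rng.standard_normal((dim, dim)) * 0.02 for _ in range(3))
    tokens = tokens + self_attention(tokens, Wq, Wk, Wv)
    W1 = rng.standard_normal((dim, dim)) * 0.02
    tokens = tokens + np.maximum(tokens @ W1, 0.0)  # ReLU MLP
    # Project each token to a 2x2 pixel patch; tile the 4x4 patch grid into 8x8.
    W_out = rng.standard_normal((dim, patch_pixels)) * 0.02
    patches = (tokens @ W_out).reshape(4, 4, 2, 2)
    return patches.transpose(0, 2, 1, 3).reshape(8, 8)

rng = np.random.default_rng(0)
img = toy_transformer_generator(rng.standard_normal(64), rng)
print(img.shape)  # (8, 8)
```

In the paper's setting the discriminator is likewise a pure transformer (a ViT-style patch classifier), and resolution is grown progressively by upsampling the token grid — details this sketch deliberately omits.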

Citations

Meta-GAN for Few-Shot Image Generation
TLDR
This study adapts two common meta-learning algorithms from few-shot classification (Model-Agnostic Meta-Learning and Reptile) to GANs, meta-training the generator and discriminator to learn an optimal weight initialization such that fine-tuning on a new task is rapid.
Penetration Multilayer Overload Signal Generation Based on TransGAN
  • Anqi Fang, Rong Li
  • Engineering, Computer Science
    Journal of Physics: Conference Series
  • 2022
TLDR
Experimental results show that the TransGAN-based penetration multilayer overload signal generation method can generate effective overload data with a different number of layers, which can address the issue of the lack of penetration multilayer overload signals to a certain extent.
TTS-GAN: A Transformer-based Time-Series Generative Adversarial Network
TLDR
TTS-GAN is introduced, a transformer-based GAN which can successfully generate realistic synthetic time-series data sequences of arbitrary length, similar to the real ones, using a pure transformer encoder architecture.
Countering Malicious DeepFakes: Survey, Battleground, and Horizon
TLDR
A comprehensive overview and detailed analysis of the research work on the topic of DeepFake generation, DeepFake detection as well as evasion of Deepfake detection, with more than 318 research papers carefully surveyed is provided.
Anti-Oversmoothing in Deep Vision Transformers via the Fourier Domain Analysis: From Theory to Practice
TLDR
This paper establishes a rigorous theory framework to analyze ViT features from the Fourier spectrum domain, and shows that the self-attention mechanism inherently amounts to a low-pass filter, which indicates that when ViT scales up its depth, excessive low-pass filtering will cause feature maps to only preserve their direct-current component.
Simulating financial time series using attention
Financial time series simulation is a central topic since it extends the limited real data for training and evaluation of trading strategies. It is also challenging because of the complex statistical properties of real financial data.
Time series anomaly detection method based on autoencoder and HMM
  • Computer Science
  • 2022
Anomaly detection based on multivariate time-series correlation data collected in real time during the process is one of the key aspects of preventing industrial process accidents and ensuring system safety.
CTrGAN: Cycle Transformers GAN for Gait Transfer
TLDR
This work introduces a novel model, Cycle Transformers GAN (CTrGAN), that can successfully generate the target’s natural gait, and demonstrates that this approach is capable of producing over an order of magnitude more realistic personalized gaits than existing methods, even when used with sources that were not available during training.
SPI-GAN: Distilling Score-based Generative Models with Straight-Path Interpolations
TLDR
An enhanced distillation method, called straight-path interpolation GAN (SPI-GAN), is proposed; compared with state-of-the-art shortcut-based distillation methods, it is one of the best models in terms of sampling quality/diversity/time for CIFAR-10, CelebA-HQ-256, and LSUN-Church-256.
TTS-CGAN: A Transformer Time-Series Conditional GAN for Biosignal Data Augmentation
TLDR
It is demonstrated that TTS-CGAN-generated synthetic data are similar to real data, and that the model performs better than other state-of-the-art GAN models built for time-series data.

References

SHOWING 1-10 OF 85 REFERENCES
LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop
TLDR
This work proposes to amplify human effort through a partially automated labeling scheme, leveraging deep learning with humans in the loop, and constructs a new image dataset, LSUN, which contains around one million labeled images for each of 10 scene categories and 20 object categories.
Learning Multiple Layers of Features from Tiny Images
TLDR
It is shown how to train a multi-layer generative model that learns to extract meaningful features which resemble those found in the human visual cortex, using a novel parallelization algorithm to distribute the work among multiple machines connected on a network.
Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention
TLDR
This work proposes Nyströmformer - a model that exhibits favorable scalability as a function of sequence length and performs favorably relative to other efficient self-attention methods.
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
TLDR
Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.
Analyzing and Improving the Image Quality of StyleGAN
TLDR
This work redesigns the generator normalization, revisit progressive growing, and regularize the generator to encourage good conditioning in the mapping from latent codes to images, and thereby redefines the state of the art in unconditional image modeling.
Progressive Growing of GANs for Improved Quality, Stability, and Variation
TLDR
A new training methodology for generative adversarial networks is described, starting from a low resolution, and adding new layers that model increasingly fine details as training progresses, allowing for images of unprecedented quality.
Differentiable Augmentation for Data-Efficient GAN Training
  • arXiv preprint arXiv:2006.10738,
  • 2020
Colorization Transformer
TLDR
The Colorization Transformer is presented, a novel approach for diverse high-fidelity image colorization based on self-attention that outperforms the previous state of the art on colorizing ImageNet, based on FID results and on a human evaluation in a Mechanical Turk test.
Taming Transformers for High-Resolution Image Synthesis
TLDR
It is demonstrated how combining the effectiveness of the inductive bias of CNNs with the expressivity of transformers enables them to model and thereby synthesize high-resolution images.
COCO-GAN: Generation by Parts via Conditional Coordinating
TLDR
COnditional COordinate GAN (COCO-GAN) of which the generator generates images by parts based on their spatial coordinates as the condition and the discriminator learns to justify realism across multiple assembled patches by global coherence, local appearance, and edge-crossing continuity is proposed.