Corpus ID: 3353110

Image Transformer

@article{Parmar2018ImageT,
  title={Image Transformer},
  author={Niki Parmar and Ashish Vaswani and Jakob Uszkoreit and Lukasz Kaiser and Noam M. Shazeer and Alexander Ku and Dustin Tran},
  journal={ArXiv},
  year={2018},
  volume={abs/1802.05751}
}
Image generation has been successfully cast as an autoregressive sequence generation or transformation problem. [...] In a human evaluation study, we show that our super-resolution models improve significantly over previously published autoregressive super-resolution models: images they generate fool human observers three times more often than the previous state of the art.
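The abstract's core idea, modeling an image as a sequence of pixels generated one at a time with masked self-attention over the pixels produced so far, can be sketched in a few lines. The toy example below is not the paper's architecture (which restricts attention to local neighborhoods and stacks many layers); it is a minimal illustration of autoregressive raster-scan sampling with a single attention head and random placeholder weights.

```python
# Toy sketch of autoregressive image generation with causally masked
# self-attention. Weights are random placeholders, not a trained model.
import numpy as np

rng = np.random.default_rng(0)

H, W = 4, 4          # toy image size
VOCAB = 256          # 8-bit pixel intensities
D = 32               # embedding / model dimension

# Random "parameters" standing in for a trained model.
embed = rng.normal(scale=0.02, size=(VOCAB, D))
pos = rng.normal(scale=0.02, size=(H * W, D))
Wq, Wk, Wv = (rng.normal(scale=0.02, size=(D, D)) for _ in range(3))
Wo = rng.normal(scale=0.02, size=(D, VOCAB))

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def next_pixel_logits(tokens):
    """Single-head self-attention over the pixels generated so far."""
    t = len(tokens)
    x = embed[tokens] + pos[:t]                 # (t, D)
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(D)               # (t, t)
    mask = np.triu(np.ones((t, t), dtype=bool), k=1)
    scores[mask] = -np.inf                      # each position sees only its past
    attn = softmax(scores, axis=-1) @ v         # (t, D)
    return attn[-1] @ Wo                        # logits for the next pixel

# Sample an image pixel by pixel in raster-scan order.
tokens = [0]                                    # first pixel (arbitrary start value)
for _ in range(H * W - 1):
    probs = softmax(next_pixel_logits(tokens))
    tokens.append(int(rng.choice(VOCAB, p=probs)))

image = np.array(tokens).reshape(H, W)
print(image)
```

In a trained model the weight matrices would be learned and, as the paper proposes, attention would be restricted to local neighborhoods so that larger images remain tractable.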
302 Citations
Attention Augmented Convolutional Networks
Combining Transformer Generators with Convolutional Discriminators
ViViT: A Video Vision Transformer
Self-Attention Generative Adversarial Networks
Locally Masked Convolution for Autoregressive Models
High-Fidelity Pluralistic Image Completion with Transformers
SCRAM: Spatially Coherent Randomized Attention Maps
Complementary, Heterogeneous and Adversarial Networks for Image-to-Image Translation
Dual Contrastive Loss and Attention for GANs

References

Showing 1-10 of 25 references
Conditional Image Generation with PixelCNN Decoders
Generative Image Modeling Using Spatial LSTMs
PixelSNAIL: An Improved Autoregressive Generative Model
Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network (C. Ledig, L. Theis, et al., 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017)
Attention Is All You Need
StackGAN: Text to Photo-Realistic Image Synthesis with Stacked Generative Adversarial Networks
Generating Images from Captions with Attention
Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
BEGAN: Boundary Equilibrium Generative Adversarial Networks