Corpus ID: 219559390

Super-resolution Variational Auto-Encoders

@article{Gatopoulos2020SuperresolutionVA,
  title={Super-resolution Variational Auto-Encoders},
  author={Ioannis Gatopoulos and Maarten Stol and Jakub M. Tomczak},
  journal={ArXiv},
  year={2020},
  volume={abs/2006.05218}
}
The framework of variational autoencoders (VAEs) provides a principled method for jointly learning latent-variable models and corresponding inference models. However, the main drawback of this approach is the blurriness of the generated images. Some studies link this effect to the objective function, namely, the (negative) log-likelihood. Here, we propose to enhance VAEs by adding a random variable that is a downscaled version of the original image and still use the log-likelihood function as… 
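The core idea admits a short sketch: augment the model with a downscaled copy of the image and keep a log-likelihood objective. Below is a minimal PyTorch-style illustration; `prior_vae` and `upsampler` are hypothetical modules exposing an `nll` method, and the average-pooling downscale is an assumption, not the authors' exact implementation.

```python
import torch.nn.functional as F

def downscale(x, factor=2):
    # y: a deterministic, downscaled version of the original image x.
    return F.avg_pool2d(x, kernel_size=factor)

def srvae_loss(x, prior_vae, upsampler):
    # Augment the model with y = downscale(x) and keep a (negative)
    # log-likelihood objective: -log p(x, y) = -log p(x | y) - log p(y).
    # prior_vae and upsampler are hypothetical VAE modules exposing .nll().
    y = downscale(x)
    return upsampler.nll(x, y) + prior_vae.nll(y)
```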
Diverse super-resolution with pretrained deep hierarchical VAEs
TLDR
The ability of reusing VD-VAE, a state-of-the-art variational autoencoder, to generate diverse solutions to the super-resolution problem is demonstrated on face super-resolution with upsampling factors ×4, ×8, and ×16.
Image Super-Resolution With Deep Variational Autoencoders
TLDR
VDVAE-SR, a new model that exploits recent deep VAE methodologies to improve image super-resolution via transfer learning on pretrained VDVAEs, is introduced and shown to be competitive with other state-of-the-art methods.
Optimizing Few-Shot Learning Based on Variational Autoencoders
TLDR
A generative approach using variational autoencoders (VAEs) is suggested that can be used specifically to optimize few-shot learning tasks by generating new samples with more intra-class variation on the Labeled Faces in the Wild dataset.
Self-Supervised Variational Auto-Encoders
TLDR
A novel class of generative models is proposed, called the self-supervised Variational Auto-Encoder (selfVAE), which utilizes deterministic and discrete transformations of the data and allows both conditional and unconditional sampling while simplifying the objective function.

References

Showing 1-10 of 43 references
GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium
TLDR
This work proposes a two time-scale update rule (TTUR) for training GANs with stochastic gradient descent on arbitrary GAN loss functions and introduces the "Fréchet Inception Distance" (FID), which captures the similarity of generated images to real ones better than the Inception Score.
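For reference, FID is the Fréchet distance between two Gaussians fitted to Inception features of real and generated images. A minimal NumPy/SciPy sketch, assuming the means and covariances have already been computed from feature activations:

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(mu1, sigma1, mu2, sigma2):
    # Frechet distance between N(mu1, sigma1) and N(mu2, sigma2):
    # ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 (sigma1 @ sigma2)^{1/2})
    covmean = sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard tiny imaginary parts from numerical noise
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```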
Density estimation using Real NVP
TLDR
This work extends the space of probabilistic models using real-valued non-volume preserving (real NVP) transformations, a set of powerful invertible and learnable transformations, resulting in an unsupervised learning algorithm with exact log-likelihood computation, exact sampling, exact inference of latent variables, and an interpretable latent space.
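The building block behind real NVP is the affine coupling layer: half the dimensions pass through unchanged and parameterize an affine map of the other half, which keeps the transformation invertible with an exact, cheap log-determinant. A PyTorch sketch, assuming an even input dimension and a hypothetical hidden width:

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, dim, hidden=128):
        super().__init__()
        # Conditioner network: maps x1 to a scale s and shift t for x2.
        self.net = nn.Sequential(
            nn.Linear(dim // 2, hidden), nn.ReLU(),
            nn.Linear(hidden, dim))

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=-1)
        s, t = self.net(x1).chunk(2, dim=-1)
        y2 = x2 * torch.exp(s) + t
        log_det = s.sum(dim=-1)  # exact log|det J| of the triangular Jacobian
        return torch.cat([x1, y2], dim=-1), log_det

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=-1)
        s, t = self.net(y1).chunk(2, dim=-1)
        return torch.cat([y1, (y2 - t) * torch.exp(-s)], dim=-1)
```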
Residual Flows for Invertible Generative Modeling
TLDR
The resulting approach, called Residual Flows, achieves state-of-the-art performance on density estimation amongst flow-based models, and outperforms networks that use coupling blocks at joint generative and discriminative modeling.
Integer Discrete Flows and Lossless Compression
TLDR
This work introduces a flow-based generative model for ordinal discrete data called Integer Discrete Flow (IDF): a bijective integer map that can learn rich transformations on high-dimensional data and introduces a flexible transformation layer called integer discrete coupling.
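The integer discrete coupling mentioned above replaces the affine map of real NVP with a rounded additive translation, so the layer remains a bijection on integer-valued data. A sketch, where `t_net` stands for a hypothetical translation network and straight-through rounding handles the non-differentiability:

```python
import torch

def round_ste(t):
    # Round to the nearest integer while passing gradients straight through.
    return t + (torch.round(t) - t).detach()

def integer_coupling(x1, x2, t_net):
    # x1 passes through unchanged and predicts an integer translation of x2;
    # the inverse simply subtracts the same rounded translation.
    return x1, x2 + round_ste(t_net(x1))
```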
Skeletal descriptions of shape provide unique perceptual information for object recognition
TLDR
This work tested whether the human visual system incorporates a three-dimensional skeletal descriptor of shape to determine an object’s identity, and showed that a model of skeletal similarity explained the greatest amount of variance in participants’ object dissimilarity judgments when compared with other computational models of visual similarity.
BIVA: A Very Deep Hierarchy of Latent Variables for Generative Modeling
TLDR
This paper introduces the Bidirectional-Inference Variational Autoencoder (BIVA), characterized by a skip-connected generative model and an inference network formed by a bidirectional stochastic inference path, and shows that BIVA reaches state-of-the-art test likelihoods, generates sharp and coherent natural images, and uses the hierarchy of latent variables to capture different aspects of the data distribution.
Flow++: Improving Flow-Based Generative Models with Variational Dequantization and Architecture Design
TLDR
Flow++ is proposed, a new flow-based model that is now the state-of-the-art non-autoregressive model for unconditional density estimation on standard image benchmarks, and has begun to close the significant performance gap that has so far existed between autoregressive models and flow-based models.
Image Super-Resolution Using Very Deep Residual Channel Attention Networks
TLDR
This work proposes a residual in residual (RIR) structure to form a very deep network, which consists of several residual groups with long skip connections, and proposes a channel attention mechanism to adaptively rescale channel-wise features by considering interdependencies among channels.
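The channel attention described here follows the squeeze-and-excitation pattern: global average pooling summarizes each channel, a small bottleneck models inter-channel dependencies, and a sigmoid gate rescales the feature maps. A minimal PyTorch sketch; the reduction ratio of 16 is an assumed default:

```python
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                                   # squeeze: per-channel statistic
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid())                                              # excitation: per-channel gate

    def forward(self, x):
        return x * self.gate(x)  # adaptively rescale channel-wise features
```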
Glow: Generative Flow with Invertible 1x1 Convolutions
TLDR
Glow, a simple type of generative flow using an invertible 1x1 convolution, is proposed, demonstrating that a generative model optimized towards the plain log-likelihood objective is capable of efficient realistic-looking synthesis and manipulation of large images.
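An invertible 1x1 convolution is a learned square matrix applied at every spatial position, so it is invertible whenever the matrix is, and its log-determinant scales with the spatial size. A PyTorch sketch; the random orthogonal initialization is an assumption, chosen so the initial map is volume-preserving:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Invertible1x1Conv(nn.Module):
    def __init__(self, channels):
        super().__init__()
        w, _ = torch.linalg.qr(torch.randn(channels, channels))  # orthogonal init
        self.w = nn.Parameter(w)

    def forward(self, x):
        _, c, h, w = x.shape
        y = F.conv2d(x, self.w.view(c, c, 1, 1))    # same matrix at every pixel
        log_det = h * w * torch.slogdet(self.w)[1]  # h * w * log|det W|
        return y, log_det
```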