Corpus ID: 13743462

Towards Deeper Generative Architectures for GANs using Dense connections

@article{Tripathi2018TowardsDG,
  title={Towards Deeper Generative Architectures for GANs using Dense connections},
  author={Samarth Tripathi and Renbo Tu},
  journal={ArXiv},
  year={2018},
  volume={abs/1804.11031}
}
In this paper, we present the results of adopting skip connections and dense layers, previously used in image classification tasks, in the Fisher GAN implementation. We experimented with different numbers of layers and with inserting these connections in different sections of the network. Our findings suggest that networks implemented with these connections produce better images than the baseline, and that the number of connections added has only a slight effect on the results.
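The abstract does not spell out how the connections are wired into the generator, and the authors' exact Fisher GAN architecture is not reproduced here. As a rough, hypothetical PyTorch-style sketch (the block name, channel counts, and layer arrangement are illustrative assumptions, not the paper's implementation), an additive skip connection can be wrapped around the convolutions of a DCGAN-style upsampling stage like this:

```python
import torch
import torch.nn as nn

class SkipUpBlock(nn.Module):
    """Hypothetical generator upsampling block: a transposed conv doubles the
    spatial resolution, two 3x3 convs refine the result, and an additive skip
    connection carries the upsampled features around the refinement path."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.up = nn.Sequential(
            nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        self.refine = nn.Sequential(
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        h = self.up(x)
        # skip connection: upsampled features plus their refined version
        return self.act(h + self.refine(h))
```

A concatenation-based (dense) counterpart is sketched under the DenseNet reference below; in either case, several such blocks would be stacked between the latent projection and the generator's final Tanh output layer.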

Citations

White-Light Interference Microscopy Image Super-Resolution Using Generative Adversarial Networks
TLDR
The IISR model has been proven to restore low-resolution (LR) images to high-resolution (HR) images, and comparative experiments show that the proposed model achieves better visual quality than other models while preserving more realistic details.

References

Densely Connected Convolutional Networks
TLDR
The Dense Convolutional Network (DenseNet) connects each layer to every other layer in a feed-forward fashion and has several compelling advantages: it alleviates the vanishing-gradient problem, strengthens feature propagation, encourages feature reuse, and substantially reduces the number of parameters.
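As a minimal sketch of that connectivity pattern (not the reference implementation; the growth rate and layer count are placeholder values), each layer consumes the channel-wise concatenation of the block input and all earlier layer outputs:

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Minimal sketch of DenseNet-style connectivity: layer i receives the
    concatenation of the block input and the outputs of all previous layers."""

    def __init__(self, in_ch, growth_rate=32, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(n_layers):
            ch = in_ch + i * growth_rate  # channels grow with each layer
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(ch),
                nn.ReLU(inplace=True),
                nn.Conv2d(ch, growth_rate, kernel_size=3, padding=1, bias=False),
            ))

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        # every layer's feature maps are reused by all downstream layers
        return torch.cat(feats, dim=1)
```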
Improved Techniques for Training GANs
TLDR
This work focuses on two applications of GANs: semi-supervised learning and the generation of images that humans find visually realistic. It presents ImageNet samples with unprecedented resolution and shows that the methods enable the model to learn recognizable features of ImageNet classes.
Deep Residual Learning for Image Recognition
TLDR
This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize and can gain accuracy from considerably increased depth.
Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
TLDR
This work introduces a class of CNNs called deep convolutional generative adversarial networks (DCGANs), which have certain architectural constraints, and demonstrates that they are a strong candidate for unsupervised representation learning.