Corpus ID: 46891249

Generative Adversarial Forests for Better Conditioned Adversarial Learning

@article{Zuo2018GenerativeAF,
  title={Generative Adversarial Forests for Better Conditioned Adversarial Learning},
  author={Yan Zuo and Gil Avraham and Tom Drummond},
  journal={ArXiv},
  year={2018},
  volume={abs/1805.05185}
}
In recent times, many of the breakthroughs in various vision-related tasks have come from improving the learning of deep models; these methods range from architectural improvements such as Residual Networks to forms of regularisation such as Batch Normalisation. In essence, many of these techniques provide better conditioning, allowing deeper and deeper models to be learned successfully. In this paper, we look towards better conditioning Generative…

Citations

Parallel Optimal Transport GAN
TLDR
This work empirically shows that the regulariser has a stabilising effect on GAN training, leading to higher quality of generated samples and increased mode coverage of the given data distribution.
Traversing Latent Space using Decision Ferns
TLDR
This work shows how a constructed latent space can be explored in a controlled manner and argues that this complements well-founded inference methods, and uses a Variational Autoencoder to construct an end-to-end trainable framework.
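The controlled exploration described in this TLDR boils down to moving between latent codes and decoding along the way. Below is a minimal, generic sketch of that idea in PyTorch, not code from the cited paper; the toy `decoder`, the 32-dimensional latent size, and the step count are all illustrative assumptions.

```python
import torch
import torch.nn as nn

# Illustrative stand-in for a trained VAE decoder (assumed, not from the paper).
decoder = nn.Sequential(
    nn.Linear(32, 256), nn.ReLU(),
    nn.Linear(256, 28 * 28), nn.Sigmoid(),
)

def traverse(z_start, z_end, steps=8):
    """Decode points along a straight line between two latent codes."""
    alphas = torch.linspace(0.0, 1.0, steps).unsqueeze(1)    # (steps, 1)
    z_path = (1 - alphas) * z_start + alphas * z_end         # (steps, latent_dim)
    with torch.no_grad():
        return decoder(z_path)                               # (steps, 784) decoded samples

samples = traverse(torch.randn(1, 32), torch.randn(1, 32))
print(samples.shape)  # torch.Size([8, 784])
```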
A survey of deep network techniques all classifiers can adopt
TLDR
The current state of the art for deep learning classifier technologies that are being used outside of deep neural networks are reviewed, and directions that can be pursued to expand the area of deep learning for a variety of classification algorithms are discussed.
Brief Overview of Deep Neural Networks
TLDR
The current state of the art for deep learning classifier technologies that are being used outside of deep neural networks are reviewed and directions that can be pursued to expand the area of deep learning for a variety of classification algorithms are discussed.
A Survey of Techniques All Classifiers Can Learn from Deep Networks: Models, Optimizations, and Regularization
TLDR
The current state of the art for deep learning classifier technologies that are being used outside of deep neural networks are reviewed and directions that can be pursued to expand the area of deep learning for a variety of classification algorithms are discussed.

References

Showing 1-10 of 40 references
Stabilizing Training of Generative Adversarial Networks through Regularization
TLDR
This work proposes a new regularization approach with low computational cost that yields a stable GAN training procedure and demonstrates the effectiveness of this regularizer across several architectures trained on common benchmark image generation tasks.
Improved Training of Wasserstein GANs
TLDR
This work proposes an alternative to clipping weights: penalize the norm of the gradient of the critic with respect to its input, which performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning.
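The gradient penalty summarised above replaces weight clipping by penalising the critic's gradient norm at points interpolated between real and generated samples. The following is a minimal sketch of that idea, assuming a PyTorch setting; the toy `critic`, the image shapes, and the commonly used penalty weight of 10 are illustrative choices, not taken verbatim from the paper.

```python
import torch

def gradient_penalty(critic, real, fake, gp_weight=10.0):
    """Penalise the critic's gradient norm at points interpolated between real and fake samples."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)   # per-sample mixing coefficient
    mixed = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(mixed)
    grads, = torch.autograd.grad(outputs=scores.sum(), inputs=mixed, create_graph=True)
    grad_norm = grads.flatten(start_dim=1).norm(2, dim=1)
    return gp_weight * ((grad_norm - 1.0) ** 2).mean()

# Hypothetical usage with a toy critic over 3x32x32 images.
critic = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 1))
real, fake = torch.randn(4, 3, 32, 32), torch.randn(4, 3, 32, 32)
loss = gradient_penalty(critic, real, fake)
loss.backward()
```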
Mode Regularized Generative Adversarial Networks
TLDR
This work introduces several ways of regularizing the objective that can dramatically stabilize the training of GAN models, and shows that these regularizers help distribute probability mass fairly across the modes of the data-generating distribution during the early phases of training, providing a unified solution to the missing modes problem.
Improved Techniques for Training GANs
TLDR
This work focuses on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic; it presents ImageNet samples with unprecedented resolution and shows that the methods enable the model to learn recognizable features of ImageNet classes.
Energy-based Generative Adversarial Network
We introduce the "Energy-based Generative Adversarial Network" model (EBGAN) which views the discriminator as an energy function that attributes low energies to the regions near the data manifold and
Generative Adversarial Nets
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
TLDR
This work introduces a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrates that they are a strong candidate for unsupervised learning.
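The architectural constraints referred to in this TLDR are commonly understood to include all-convolutional generators built from fractionally-strided (transposed) convolutions with batch normalisation, ReLU hidden activations, and a Tanh output. A small generator sketch in that style follows; the layer widths and output resolution are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

# A small DCGAN-style generator: transposed (fractionally-strided) convolutions,
# batch norm, ReLU hidden activations, and a Tanh output (sizes are illustrative).
generator = nn.Sequential(
    nn.ConvTranspose2d(100, 128, kernel_size=4, stride=1, padding=0),  # 1x1 -> 4x4
    nn.BatchNorm2d(128), nn.ReLU(),
    nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),   # 4x4 -> 8x8
    nn.BatchNorm2d(64), nn.ReLU(),
    nn.ConvTranspose2d(64, 3, kernel_size=4, stride=2, padding=1),     # 8x8 -> 16x16
    nn.Tanh(),
)

z = torch.randn(2, 100, 1, 1)          # latent codes reshaped to 1x1 feature maps
print(generator(z).shape)              # torch.Size([2, 3, 16, 16])
```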
ABC-GAN: Adaptive Blur and Control for improved training stability of Generative Adversarial Networks
TLDR
Two simple techniques for improving the stability, training speed and image quality of GANs are proposed, and it is shown that filtering the inputs of the discriminator with a blur kernel allows for increased image resolution and a significant quality improvement.
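Filtering the discriminator's inputs with a blur kernel amounts to a fixed depthwise low-pass convolution applied before the discriminator sees an image (the paper's scheme additionally adapts the blur during training). A hedged sketch of the basic idea is below; the box kernel, its size, and the stand-in `discriminator` are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def blur(images, kernel_size=5):
    """Apply a fixed box (averaging) blur to each channel before the discriminator."""
    c = images.size(1)
    kernel = torch.ones(c, 1, kernel_size, kernel_size, device=images.device) / kernel_size ** 2
    return F.conv2d(images, kernel, padding=kernel_size // 2, groups=c)  # depthwise convolution

# Hypothetical usage: the discriminator only ever sees blurred inputs.
discriminator = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 1))
images = torch.randn(4, 3, 32, 32)
scores = discriminator(blur(images))
print(scores.shape)  # torch.Size([4, 1])
```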
Generating images with recurrent adversarial networks
TLDR
This work proposes a recurrent generative model that can be trained with adversarial training to generate high-quality image samples, and proposes a way to quantitatively compare adversarial networks by having the generators and discriminators of these networks compete against each other.
NIPS 2016 Tutorial: Generative Adversarial Networks
TLDR
This report summarizes the tutorial presented by the author at NIPS 2016 on generative adversarial networks (GANs), and describes state-of-the-art image models that combine GANs with other methods.