Temporal Generative Adversarial Nets with Singular Value Clipping

@inproceedings{Saito2017TemporalGA,
  title={Temporal Generative Adversarial Nets with Singular Value Clipping},
  author={Masaki Saito and Eiichi Matsumoto and Shunta Saito},
  booktitle={2017 IEEE International Conference on Computer Vision (ICCV)},
  year={2017},
  pages={2849--2858}
}
In this paper, we propose a generative model, Temporal Generative Adversarial Nets (TGAN), which can learn a semantic representation of unlabeled videos, and is capable of generating videos. Unlike existing Generative Adversarial Nets (GAN)-based methods that generate videos with a single generator consisting of 3D deconvolutional layers, our model exploits two different types of generators: a temporal generator and an image generator. The temporal generator takes a single latent variable as…
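The "singular value clipping" in the title refers to the paper's way of enforcing a Lipschitz constraint on the discriminator: after parameter updates, the singular values of each weight matrix are clipped so the spectral norm stays bounded. A minimal NumPy sketch of that projection (the function name and `max_sv` parameter are illustrative, not from the paper):

```python
import numpy as np

def clip_singular_values(W, max_sv=1.0):
    """Project W so that its spectral norm is at most max_sv.

    Decompose W = U diag(s) V^T, clip each singular value to
    min(s_i, max_sv), and recompose. Applied to a discriminator's
    weight matrices, this bounds its Lipschitz constant.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.minimum(s, max_sv)) @ Vt
```

After the projection, the largest singular value of the result (`np.linalg.norm(W, 2)`) is at most `max_sv`, while matrices already inside the constraint set are left unchanged.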
Learning to navigate image manifolds induced by generative adversarial networks for unsupervised video generation
Results on the studied video dataset indicate that, by employing a two-step training scheme, the recurrent part is able to learn how to coherently navigate the image manifold induced by the frames generator, thus yielding more natural-looking scenes.
Non-Adversarial Video Synthesis with Learned Priors
A novel approach is developed that jointly optimizes the input latent space, the weights of a recurrent neural network, and a generator through non-adversarial learning, generating superior-quality videos compared to existing state-of-the-art methods.
Exploiting video sequences for unsupervised disentangling in generative adversarial networks
An adversarial training algorithm is presented that exploits correlations in video to learn an image generator model with a disentangled latent space; the observed motion attributes include facial expressions, head orientation, and lip and eye movement.
Recurrent Deconvolutional Generative Adversarial Networks with Application to Video Generation
A novel model for video generation from text descriptions is proposed, which includes a recurrent deconvolutional generative adversarial network as the generator and a 3D convolutional neural network (3D-CNN) as the discriminator.
Scripted Video Generation With a Bottom-Up Generative Adversarial Network
A novel Bottom-up GAN (BoGAN) method is proposed for generating videos from a text description; a region-level loss with an attention mechanism preserves local semantic alignment and draws details in the different sub-regions of the video conditioned on the words most relevant to them.
Recurrent Deconvolutional Generative Adversarial Networks with Application to Text Guided Video Generation
A novel model for video generation from text descriptions is proposed, which includes a recurrent deconvolutional generative adversarial network as the generator and a 3D convolutional neural network (3D-CNN) as the discriminator.
Paying Attention to Video Generation
Video generation is a challenging research topic which has been tackled by a variety of methods including Generative Adversarial Networks (GANs), Variational Autoencoders (VAE), optical flow and…
Frame Difference Generative Adversarial Networks: Clearer Contour Video Generating
A novel Generative Adversarial Network (GAN) model called FDGAN is proposed to generate clear contour lines; it exploits inter-frame differences and achieves state-of-the-art performance for clarifying contours.
Attentional Adversarial Variational Video Generation via Decomposing Motion and Content
The approach is based on a video prediction model using a combination of the Variational Autoencoder and Generative Adversarial Network (VAE-GAN) to obtain the important regions of interest in a video and shows improved performance in comparison with other widely used methods.
Generative adversarial networks and their variants
  • Er. Aarti
  • Generative Adversarial Networks for Image-to-Image Translation
  • 2021
This chapter provides an introduction to GANs and deep-learning methods, with an overview of some variants and applications that have benefited from them.

References

Showing 1-10 of 57 references
Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks
A generative parametric model capable of producing high quality samples of natural images using a cascade of convolutional networks within a Laplacian pyramid framework to generate images in a coarse-to-fine fashion.
Conditional Generative Adversarial Nets
The conditional version of generative adversarial nets is introduced, which can be constructed by simply feeding the data, y, to the generator and discriminator, and it is shown that this model can generate MNIST digits conditioned on class labels.
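The conditioning mechanism described in that summary is just input concatenation: the label y is appended to the generator's noise vector and to the discriminator's input. A minimal sketch (the function names are illustrative):

```python
import numpy as np

def one_hot(label, num_classes):
    # Encode an integer class label as a one-hot vector.
    y = np.zeros(num_classes)
    y[label] = 1.0
    return y

def conditioned_input(z, label, num_classes=10):
    # A conditional GAN feeds the label y to both networks by simply
    # concatenating it with their usual inputs (noise z for the
    # generator, the image for the discriminator).
    return np.concatenate([z, one_hot(label, num_classes)])
```

The downstream network then learns to use the appended label dimensions, so sampling a specific class at test time amounts to fixing y.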
Generative Image Modeling Using Style and Structure Adversarial Networks
This paper factorizes the image generation process and proposes the Style and Structure Generative Adversarial Network, a model that is interpretable, generates more realistic images, and can be used to learn unsupervised RGBD representations.
Generating images with recurrent adversarial networks
This work proposes a recurrent generative model that can be trained using adversarial training to generate very good image samples, and proposes a way to quantitatively compare adversarial networks by having the generators and discriminators of these networks compete against each other.
Generative Adversarial Nets
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a…
Improved Techniques for Training GANs
This work focuses on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic, and presents ImageNet samples with unprecedented resolution and shows that the methods enable the model to learn recognizable features of ImageNet classes.
Generating Videos with Scene Dynamics
A generative adversarial network for video is proposed with a spatio-temporal convolutional architecture that untangles the scene's foreground from the background, and can generate tiny videos up to a second long at full frame rate better than simple baselines.
Deep multi-scale video prediction beyond mean square error
This work trains a convolutional network to generate future frames given an input sequence and proposes three different and complementary feature learning strategies: a multi-scale architecture, an adversarial training method, and an image gradient difference loss function.
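The image gradient difference loss mentioned in that summary penalizes mismatches between the spatial gradients of predicted and ground-truth frames, which sharpens the blurry outputs that a pure MSE objective tends to produce. A NumPy sketch for single-channel frames (the function name and `alpha` default are illustrative):

```python
import numpy as np

def gradient_difference_loss(y_true, y_pred, alpha=1.0):
    # Compare absolute finite differences (spatial gradients) of the
    # true and predicted frames along both image axes, and penalize
    # the discrepancy raised to the power alpha.
    dh_true = np.abs(np.diff(y_true, axis=0))
    dh_pred = np.abs(np.diff(y_pred, axis=0))
    dw_true = np.abs(np.diff(y_true, axis=1))
    dw_pred = np.abs(np.diff(y_pred, axis=1))
    return (np.abs(dh_true - dh_pred) ** alpha).sum() + \
           (np.abs(dw_true - dw_pred) ** alpha).sum()
```

Note that a prediction offset from the truth by a constant brightness shift incurs zero gradient loss, which is why this term is used alongside, not instead of, a pixel-wise loss.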
Precomputed Real-Time Texture Synthesis with Markovian Generative Adversarial Networks
Markovian Generative Adversarial Networks (MGANs) are proposed, a method for training generative networks for efficient texture synthesis that surpasses previous neural texture synthesizers by a significant margin and applies to texture synthesis, style transfer, and video stylization.
Context Encoders: Feature Learning by Inpainting
It is found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures, and can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.