Animating landscape

@article{Endo2019AnimatingL,
  title={Animating landscape},
  author={Yuki Endo and Yoshihiro Kanamori and Shigeru Kuriyama},
  journal={ACM Transactions on Graphics (TOG)},
  year={2019},
  volume={38},
  pages={1--19}
}
Automatic generation of a high-quality video from a single image remains a challenging task despite the recent advances in deep generative models. This paper proposes a method that can create a high-resolution, long-term animation from a single landscape image using convolutional neural networks (CNNs), focusing mainly on skies and water. Our key observation is that the motion (e.g., moving clouds) and appearance (e.g., time-varying colors in the sky) in natural scenes have different time…
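
To make that key observation concrete, here is a minimal sketch of the idea, not the authors' implementation: one hypothetical CNN predicts a dense flow field that advances the motion by warping the frame, while a second predicts a slowly varying color shift for the appearance. All module names and layer sizes are illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MotionPredictor(nn.Module):
    """Hypothetical CNN that predicts a dense backward flow field."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1))  # 2 channels: (dx, dy)
    def forward(self, frame):
        return self.net(frame)

class AppearancePredictor(nn.Module):
    """Hypothetical CNN that predicts a slowly varying color shift."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh())
    def forward(self, frame):
        return self.net(frame)

def warp(frame, flow):
    """Backward-warp `frame` by `flow` with a normalized sampling grid."""
    b, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=-1).float().unsqueeze(0).expand(b, -1, -1, -1)
    grid = grid + flow.permute(0, 2, 3, 1)  # displace pixels by the flow
    gx = 2 * grid[..., 0] / (w - 1) - 1     # normalize to [-1, 1] for grid_sample
    gy = 2 * grid[..., 1] / (h - 1) - 1
    return F.grid_sample(frame, torch.stack((gx, gy), dim=-1), align_corners=True)

motion, appearance = MotionPredictor(), AppearancePredictor()
frame = torch.rand(1, 3, 128, 128)  # stand-in for a landscape photo
for _ in range(3):                  # unroll a short animation
    frame = warp(frame, motion(frame)) + 0.05 * appearance(frame)
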
1 Citation
DeepLandscape: Adversarial Modeling of Landscape Videos
TLDR: Builds a new model of landscape videos that can be trained on a mixture of static landscape images and landscape animations, and that produces more compelling animations of given photographs than previously proposed methods.

References

Showing 1-10 of 91 references
Visual Dynamics: Probabilistic Future Frame Synthesis via Cross Convolutional Networks
TLDR: Proposes a novel approach that models future frames probabilistically via a Cross Convolutional Network, which encodes image and motion information as feature maps and convolutional kernels, respectively, to aid in synthesizing future frames.
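
The cross-convolution mechanism itself is compact: the predicted per-sample kernels are convolved with that sample's feature maps. A hedged sketch with assumed shapes (not the paper's code), using grouped convolution so each channel is filtered by its own predicted kernel:

import torch
import torch.nn.functional as F

b, c, h, w = 2, 8, 32, 32
feature_maps = torch.randn(b, c, h, w)  # image pathway: content as feature maps
kernels = torch.randn(b, c, 5, 5)       # motion pathway: one 5x5 kernel per channel

# Fold the batch into the channel dimension and set groups=b*c so every
# feature map is convolved with its own predicted (depthwise) kernel.
x = feature_maps.reshape(1, b * c, h, w)
k = kernels.reshape(b * c, 1, 5, 5)
out = F.conv2d(x, k, padding=2, groups=b * c).reshape(b, c, h, w)
print(out.shape)  # torch.Size([2, 8, 32, 32])
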
Sky is not the limit
TLDR: Proposes an automatic background replacement algorithm that can generate realistic, artifact-free images with diverse styles of skies by using visual semantics to guide the entire process, including sky segmentation, search, and replacement.
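
The replacement step itself reduces to mask-guided compositing. A toy sketch, assuming a soft sky mask produced by some segmentation model (the full method also performs sky search and appearance harmonization):

import numpy as np

img = np.random.rand(256, 256, 3)      # input photo, float RGB in [0, 1]
new_sky = np.random.rand(256, 256, 3)  # replacement sky, same size
sky_mask = np.zeros((256, 256, 1))     # stand-in soft mask from segmentation
sky_mask[:100] = 1.0                   # pretend the top region is sky

composite = sky_mask * new_sky + (1.0 - sky_mask) * img
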
Generating Videos with Scene Dynamics
TLDR: Proposes a generative adversarial network for video with a spatio-temporal convolutional architecture that untangles the scene's foreground from its background and can generate tiny videos up to a second long at full frame rate better than simple baselines.
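
A minimal sketch of such a two-stream generator, with illustrative layer sizes rather than the paper's architecture: a moving foreground stream, a static background stream, and a mask that untangles the two. Video tensors are (batch, channels, time, height, width).

import torch
import torch.nn as nn

class TwoStreamGenerator(nn.Module):
    def __init__(self, z_dim=100):
        super().__init__()
        self.fg = nn.Sequential(    # moving foreground video stream
            nn.ConvTranspose3d(z_dim, 3, kernel_size=(4, 64, 64)), nn.Tanh())
        self.mask = nn.Sequential(  # per-pixel, per-frame mixing mask
            nn.ConvTranspose3d(z_dim, 1, kernel_size=(4, 64, 64)), nn.Sigmoid())
        self.bg = nn.Sequential(    # static background image stream
            nn.ConvTranspose2d(z_dim, 3, kernel_size=64), nn.Tanh())

    def forward(self, z):
        z5 = z.view(-1, z.shape[1], 1, 1, 1)
        fg, m = self.fg(z5), self.mask(z5)
        bg = self.bg(z.view(-1, z.shape[1], 1, 1)).unsqueeze(2)  # repeat over time
        return m * fg + (1 - m) * bg  # the mask composites the two streams

video = TwoStreamGenerator()(torch.randn(2, 100))
print(video.shape)  # torch.Size([2, 3, 4, 64, 64])
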
Dense Optical Flow Prediction from a Static Image
TLDR: Presents a convolutional neural network (CNN) based approach for motion prediction that outperforms all previous approaches by large margins and can predict future optical flow in a diverse set of scenarios.
High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs
TLDR: Presents a new method for synthesizing high-resolution, photo-realistic images from semantic label maps using conditional generative adversarial networks (conditional GANs), which significantly outperforms existing methods, advancing both the quality and the resolution of deep image synthesis and editing.
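
In this family of models the semantic label map is one-hot encoded before being fed to the generator in place of a noise vector. A small sketch of that input preparation, with an assumed class count:

import torch
import torch.nn.functional as F

num_classes = 35  # assumed label set size (e.g., a Cityscapes-style map)
labels = torch.randint(num_classes, (1, 256, 256))  # semantic label map
onehot = F.one_hot(labels, num_classes).permute(0, 3, 1, 2).float()
print(onehot.shape)  # torch.Size([1, 35, 256, 256]) -> generator input
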
Deep multi-scale video prediction beyond mean square error
TLDR: Trains a convolutional network to generate future frames given an input sequence and proposes three different and complementary feature learning strategies: a multi-scale architecture, an adversarial training method, and an image gradient difference loss function.
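
The image gradient difference loss is easy to write down: it penalizes differences between the spatial gradients of prediction and target, which counteracts the blurring that plain MSE produces. A sketch under assumed tensor shapes:

import torch

def gdl(pred, target, alpha=1):
    """Gradient difference loss over (batch, channels, height, width) tensors."""
    dy_p = pred[..., 1:, :] - pred[..., :-1, :]
    dx_p = pred[..., :, 1:] - pred[..., :, :-1]
    dy_t = target[..., 1:, :] - target[..., :-1, :]
    dx_t = target[..., :, 1:] - target[..., :, :-1]
    return ((dy_p.abs() - dy_t.abs()).abs() ** alpha).mean() + \
           ((dx_p.abs() - dx_t.abs()).abs() ** alpha).mean()

loss = gdl(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))
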
A Phase-Based Approach for Animating Images Using Video Examples
TLDR: Proposes an Eulerian phase-based approach that uses the phase information from a sample video to animate a static image, and demonstrates that this simple, phase-based approach to transferring small motion is more effective at animating still images than methods that rely on optical flow.
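
The underlying intuition is that small motion appears as phase change in the frequency domain. The toy sketch below only applies a global Fourier phase ramp to translate an image; the actual method manipulates per-band phase in a complex steerable pyramid, driven by phase measured in the sample video.

import numpy as np

img = np.random.rand(64, 64)      # stand-in for a still image
fy = np.fft.fftfreq(64)[:, None]  # vertical frequencies
fx = np.fft.fftfreq(64)[None, :]  # horizontal frequencies
shift = np.exp(-2j * np.pi * (3 * fy + 5 * fx))  # phase ramp = shift by (3, 5)
moved = np.real(np.fft.ifft2(np.fft.fft2(img) * shift))
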
Controllable Video Generation with Sparse Trajectories
TLDR: Presents a conditional video generation model that allows detailed control over the motion of the generated video and proposes a training paradigm that calculates trajectories from video clips, eliminating the need for annotated training data.
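
Such trajectories can be recovered from unlabeled clips with a standard point tracker. One plausible sketch using OpenCV's KLT tracker (the paper's exact procedure may differ, and the input path is hypothetical):

import cv2

cap = cv2.VideoCapture("clip.mp4")  # hypothetical training clip
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                              qualityLevel=0.01, minDistance=8)

trajectories = [pts]  # tracked point positions, one array per frame
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # status flags which points were tracked successfully in this frame
    pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    trajectories.append(pts)
    prev_gray = gray
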
Flow-Grounded Spatial-Temporal Video Prediction from Still Images
TLDR: Formulates the multi-frame prediction task as a multiple-time-step flow (multi-flow) prediction phase followed by a flow-to-frame synthesis phase, which prevents the model from directly looking at the high-dimensional pixel space of the frame sequence and is demonstrated to be more effective in predicting better and more diverse results.
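
A sketch of the two-phase pipeline with stand-in random flows: first predict flows for several future time steps at once (the multi-flow phase), then synthesize each frame by warping the still image (the flow-to-frame phase).

import numpy as np
from scipy.ndimage import map_coordinates

h, w, steps = 64, 64, 4
still = np.random.rand(h, w)  # stand-in for the input image
flows = np.cumsum(0.5 * np.random.randn(steps, 2, h, w), axis=0)  # fake multi-flow output

rows, cols = np.mgrid[0:h, 0:w].astype(float)
frames = [map_coordinates(still, [rows + f[0], cols + f[1]],
                          order=1, mode="nearest")
          for f in flows]  # one warped frame per predicted flow
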
Deep Photo Style Transfer
TLDR: Introduces a deep-learning approach to photographic style transfer that handles a large variety of image content while faithfully transferring the reference style, constraining the transformation from input to output to be locally affine in colorspace.
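
The locally affine constraint says that each output region should be an affine function of the corresponding input region's colors. As a simplification, the sketch below fits a single global affine color transform by least squares; the paper enforces the constraint locally through a Matting Laplacian regularizer.

import numpy as np

inp = np.random.rand(1000, 3)  # input pixel colors (RGB rows)
toy = np.array([[0.9, 0.0, 0.0], [0.0, 1.1, 0.0], [0.0, 0.0, 1.0]])
ref = inp @ toy + 0.05         # toy "stylized" target colors

A = np.hstack([inp, np.ones((1000, 1))])  # affine design matrix [R G B 1]
coeffs, *_ = np.linalg.lstsq(A, ref, rcond=None)
out = A @ coeffs                          # affine recoloring of the input
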