Flow-Grounded Spatial-Temporal Video Prediction from Still Images

@inproceedings{Li2018FlowGroundedSV,
  title={Flow-Grounded Spatial-Temporal Video Prediction from Still Images},
  author={Yijun Li and Chen Fang and Jimei Yang and Zhaowen Wang and Xin Lu and Ming-Hsuan Yang},
  booktitle={ECCV},
  year={2018}
}
  Abstract: Existing video prediction methods mainly rely on observing multiple historical frames or focus on predicting only the next single frame. In this work, we study the problem of generating multiple consecutive future frames by observing one single still image only. We formulate the multi-frame prediction task as a multiple-time-step flow (multi-flow) prediction phase followed by a flow-to-frame synthesis phase. The multi-flow prediction is modeled in a variational probabilistic manner with spatial…
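The flow-to-frame synthesis phase described above warps the observed still image with each predicted flow field to produce a future frame. A minimal NumPy sketch of backward warping, the operation underlying such synthesis (nearest-neighbor sampling for simplicity; `backward_warp` is a hypothetical helper, not the paper's actual differentiable sampling layer):

```python
import numpy as np

def backward_warp(image, flow):
    """Warp `image` toward a future frame using a dense flow field.

    image: (H, W) or (H, W, C) array, the observed still frame.
    flow:  (H, W, 2) array; flow[y, x] = (dx, dy) means output pixel
           (y, x) samples from (y + dy, x + dx) in `image`.
    """
    H, W = image.shape[:2]
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Round sampling coordinates and clamp to the image border.
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, W - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, H - 1)
    return image[src_y, src_x]

# Shift a tiny image one pixel to the right: every output pixel samples
# from one pixel to its left, i.e. flow dx = -1 everywhere.
img = np.arange(16.0).reshape(4, 4)
flow = np.zeros((4, 4, 2))
flow[..., 0] = -1.0
warped = backward_warp(img, flow)
```

In the paper's full pipeline the flow fields for several time steps are sampled from a learned variational model rather than given, and a synthesis network refines the warped result; the sketch only illustrates the warping step itself.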


    Citations

    Publications citing this paper (showing 1-10 of 51):


    Unsupervised Bi-directional Flow-based Video Generation from one Snapshot


    Towards Image-to-Video Translation: A Structure-Aware Approach via Multi-stage Generative Adversarial Networks


    High-Quality Video Generation from Static Structural Annotations


    Video Generation From Single Semantic Label Map

    • Junting Pan, Chengyu Wang, +4 authors X. Wang
    • 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)

    ImaGINator: Conditional Spatio-Temporal GAN for Video Generation


    Few-shot Video-to-Video Synthesis


    Animating Landscape


    Unsupervised Keypoint Learning for Guiding Class-Conditional Video Prediction



    CITATION STATISTICS

    • 11 highly influenced citations

    • Averaged 17 citations per year from 2018 through 2020

    References

    Publications referenced by this paper (showing 1-10 of 47):

    Dual Motion GAN for Future-Flow Embedded Video Prediction


    Video Frame Synthesis Using Deep Voxel Flow


    Dense Optical Flow Prediction from a Static Image


    Synthesizing Dynamic Patterns by Spatial-Temporal Generative ConvNet


    MoCoGAN: Decomposing Motion and Content for Video Generation
