SLAMP: Stochastic Latent Appearance and Motion Prediction

@inproceedings{akan2021slamp,
  title={SLAMP: Stochastic Latent Appearance and Motion Prediction},
  author={Adil Kaan Akan and Erkut Erdem and Aykut Erdem and Fatma Guney},
  booktitle={2021 IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2021}
}
Motion is an important cue for video prediction and is often exploited by separating video content into static and dynamic components. Most previous work that utilizes motion is deterministic, but stochastic methods exist that can model the inherent uncertainty of the future. Existing stochastic models either do not reason about motion explicitly or make limiting assumptions about the static part. In this paper, we reason about appearance and motion in the video stochastically by predicting…
Stochastic Video Prediction with Structure and Motion
It is demonstrated that disentangling structure and motion helps stochastic video prediction, leading to better future predictions in complex driving scenarios on two real-world driving datasets, KITTI and Cityscapes.
StretchBEV: Stretching Future Instance Prediction Spatially and Temporally
This work addresses the inherent uncertainty in future predictions with a stochastic temporal model and obtains future predictions that are both more diverse and more accurate than previous work, especially for spatially distant regions of the scene and over longer time horizons.
MCVD: Masked Conditional Video Diffusion for Prediction, Generation, and Interpolation
This work devises a general-purpose framework called Masked Conditional Video Diffusion (MCVD) for all of these video synthesis tasks, using a probabilistic conditional score-based denoising diffusion model conditioned on past and/or future frames.
Stochastic Latent Residual Video Prediction
This paper introduces a novel stochastic temporal model whose dynamics are governed in a latent space by a residual update rule. This simpler, more interpretable latent model naturally captures video dynamics and outperforms prior state-of-the-art methods on challenging datasets.
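The residual update rule described above, roughly z_{t+1} = z_t + f(z_t) + noise, can be illustrated with a minimal numpy sketch; the linear transition and fixed noise scale below are toy assumptions, not the paper's learned components.

```python
import numpy as np

def residual_latent_rollout(z0, transition, n_steps, noise_scale=0.1, rng=None):
    """Roll out latent dynamics with a residual update:
    z_{t+1} = z_t + f(z_t) + eps_t,  eps_t ~ N(0, noise_scale^2).
    `transition` plays the role of the learned residual function f."""
    rng = rng or np.random.default_rng(0)
    z = np.asarray(z0, dtype=float)
    trajectory = [z]
    for _ in range(n_steps):
        z = z + transition(z) + noise_scale * rng.standard_normal(z.shape)
        trajectory.append(z)
    return np.stack(trajectory)

# Toy transition: pull the latent toward the origin (stable dynamics).
def f(z):
    return -0.1 * z

traj = residual_latent_rollout(np.ones(4), f, n_steps=10)
```

Because frames are decoded from each z_t independently, sampling different noise sequences yields different plausible futures from the same starting latent.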
Stochastic Variational Video Prediction
This paper develops a stochastic variational video prediction (SV2P) method that predicts a different possible future for each sample of its latent variables, and is the first to provide effective stochastic multi-frame prediction for real-world video.
Flow-Grounded Spatial-Temporal Video Prediction from Still Images
This work formulates the multi-frame prediction task as a multiple-time-step flow (multi-flow) prediction phase followed by a flow-to-frame synthesis phase, which keeps the model from operating directly in the high-dimensional pixel space of the frame sequence and is demonstrated to produce better and more diverse predictions.
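The flow-to-frame synthesis phase amounts to warping the current frame along the predicted flow. A minimal numpy sketch of backward warping follows; nearest-neighbour sampling and edge clipping are simplifications (real models typically use differentiable bilinear sampling), and the uniform flow field is just a test input.

```python
import numpy as np

def backward_warp(frame, flow):
    """Synthesize the next frame by sampling the current frame at
    positions displaced by the predicted flow (u, v) per pixel.
    Nearest-neighbour sampling with border clipping, for clarity."""
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Each output pixel (y, x) reads from (y - v, x - u) in the source.
    src_y = np.clip(np.round(ys - flow[..., 1]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs - flow[..., 0]).astype(int), 0, w - 1)
    return frame[src_y, src_x]

frame = np.arange(16.0).reshape(4, 4)
flow = np.zeros((4, 4, 2))
flow[..., 0] = 1.0  # uniform shift right by one pixel
warped = backward_warp(frame, flow)
```

Predicting flow instead of pixels means the synthesis network only has to copy and rearrange existing content, which is why the paper argues it avoids the high-dimensional pixel space.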
High Fidelity Video Prediction with Large Stochastic Recurrent Neural Networks
This work proposes a different approach: finding the minimal inductive bias for video prediction while maximizing network capacity. It investigates this trade-off through the first large-scale empirical study of its kind and demonstrates state-of-the-art performance by learning large models on three different datasets.
Dense Optical Flow Prediction from a Static Image
This work presents a convolutional neural network (CNN) based approach for motion prediction that outperforms all previous approaches by large margins and can predict future optical flow in a diverse set of scenarios.
Disentangling Propagation and Generation for Video Prediction
A computational model for high-fidelity video prediction is proposed that disentangles motion-specific propagation from motion-agnostic generation; it introduces a confidence-aware warping operator that gates between pixel predictions from a flow predictor for non-occluded regions and from a context encoder for occluded regions.
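The confidence-aware gating can be sketched as a per-pixel convex blend between the flow-warped frame and the generated frame; this soft linear gate and the hand-made confidence map are illustrative assumptions, not the paper's learned operator.

```python
import numpy as np

def gated_prediction(warped, generated, confidence):
    """Blend flow-propagated pixels with generated pixels:
    high confidence (non-occluded) -> keep the warped pixel,
    low confidence (occluded)      -> fall back to the generated pixel."""
    confidence = np.clip(confidence, 0.0, 1.0)
    return confidence * warped + (1.0 - confidence) * generated

warped = np.full((2, 2), 1.0)      # stand-in for the flow-warped frame
generated = np.full((2, 2), 0.0)   # stand-in for the context encoder's output
conf_map = np.array([[1.0, 0.5],
                     [0.0, 1.0]])  # toy per-pixel confidence
out = gated_prediction(warped, generated, conf_map)
```

In the model the confidence map is itself predicted, so the network learns where warping is reliable and where it must hallucinate disoccluded content.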
Unsupervised Learning of Object Structure and Dynamics from Videos
A keypoint-based image representation is adopted, and a stochastic dynamics model of the keypoints is learned that outperforms unstructured representations on a range of motion-related tasks such as object tracking, action recognition, and reward prediction.
Improved Conditional VRNNs for Video Prediction
This work proposes a hierarchy of latent variables, which defines a family of flexible prior and posterior distributions in order to better model the probability of future sequences and validate the proposal through a series of ablation experiments.
Stochastic Video Generation with a Learned Prior
An unsupervised video generation model is presented that learns a prior model of uncertainty in a given environment and generates video frames by drawing samples from this prior and combining them with a deterministic estimate of the future frame.
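The sample-then-combine step can be sketched minimally: draw a latent from a Gaussian prior and fuse it with the deterministic estimate. The additive fusion below is a toy stand-in for the decoder, and the Gaussian parameters are illustrative, not the paper's learned prior network.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_frame(deterministic_estimate, prior_mu, prior_logvar):
    """Draw a latent sample z ~ N(mu, exp(logvar)) from the prior and
    combine it with a deterministic next-frame estimate.
    (Toy decoder: additive combination of estimate and latent.)"""
    std = np.exp(0.5 * prior_logvar)
    z = prior_mu + std * rng.standard_normal(prior_mu.shape)
    return deterministic_estimate + z

est = np.zeros(3)  # stand-in for the deterministic frame estimate
samples = [predict_frame(est, np.zeros(3), np.full(3, -2.0)) for _ in range(5)]
```

Each call draws a fresh latent, so repeated sampling from the same conditioning frames yields distinct futures, which is the source of the model's diversity.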
Dual Motion GAN for Future-Flow Embedded Video Prediction
A dual motion Generative Adversarial Net architecture is developed, which learns to explicitly enforce future-frame predictions to be consistent with the pixel-wise flows in the video through a dual-learning mechanism.