Corpus ID: 220042272

Learning Disentangled Representations of Video with Missing Data

@article{Massague2020LearningDR,
  title={Learning Disentangled Representations of Video with Missing Data},
  author={Armand Comas Massague and Chi Zhang and Z. Feric and O. Camps and Rose Yu},
  journal={ArXiv},
  year={2020},
  volume={abs/2006.13391}
}
Missing data poses significant challenges when learning representations of video sequences. We present the Disentangled Imputed Video autoEncoder (DIVE), a deep generative model that imputes and predicts future video frames in the presence of missing data. Specifically, DIVE introduces a missingness latent variable, disentangles the hidden video representations into static and dynamic appearance, pose, and missingness factors for each object, and imputes each object's trajectory where data is missing.
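To make the factorization described in the abstract concrete, below is a minimal sketch (not the authors' code) of how a per-object latent could be split into a static appearance factor, a dynamic pose factor, and a per-frame missingness factor that gates a simple carry-forward imputation. All module names, layer sizes, and the encoder internals (`DisentangledObjectLatents`, `feat_dim`, `app_dim`, `pose_dim`) are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of DIVE-style per-object latent disentanglement.
# Assumed names and dimensions; only the high-level factorization follows the abstract.
import torch
import torch.nn as nn


class DisentangledObjectLatents(nn.Module):
    """Encode one object's feature sequence into appearance, pose, and missingness factors."""

    def __init__(self, feat_dim=128, app_dim=32, pose_dim=4):
        super().__init__()
        self.appearance = nn.Linear(feat_dim, app_dim)             # static appearance code
        self.pose = nn.GRU(feat_dim, pose_dim, batch_first=True)   # dynamic pose code
        self.missingness = nn.Sequential(                          # per-frame missingness in [0, 1]
            nn.Linear(feat_dim, 1), nn.Sigmoid()
        )

    def forward(self, obj_feats):
        # obj_feats: (batch, time, feat_dim) features for a single object track
        z_app = self.appearance(obj_feats.mean(dim=1))             # pool over time -> static
        z_pose, _ = self.pose(obj_feats)                           # (batch, time, pose_dim)
        m = self.missingness(obj_feats)                            # (batch, time, 1), ~1 means missing
        # Toy imputation: where a frame is likely missing, carry the previous pose forward.
        prior_pose = torch.roll(z_pose, shifts=1, dims=1)
        z_pose_imputed = (1.0 - m) * z_pose + m * prior_pose
        return z_app, z_pose_imputed, m


# Usage: two object tracks of 10 frames with 128-d features per frame.
feats = torch.randn(2, 10, 128)
z_app, z_pose, miss = DisentangledObjectLatents()(feats)
print(z_app.shape, z_pose.shape, miss.shape)
```

The point of the sketch is the interface, not the architecture: one static code per object, one time-indexed pose code, and an explicit missingness variable that decides when to trust the observation versus an imputed value.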