Deep Video-Based Performance Cloning

@article{Aberman2019DeepVP,
  title={Deep Video-Based Performance Cloning},
  author={Kfir Aberman and Mingyi Shi and Jing Liao and Dani Lischinski and Baoquan Chen and Daniel Cohen-Or},
  journal={Comput. Graph. Forum},
  year={2019},
  volume={38},
  pages={219--233}
}
Abstract: We present a new video-based performance cloning technique. [...] Our generative model is realized as a deep neural network with two branches, both of which train the same space-time conditional generator, using shared weights. One branch, responsible for learning to generate the appearance of the target actor in various poses, uses \emph{paired} training data, self-generated from the reference video.
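The shared-weight, two-branch training scheme described in the abstract can be sketched in miniature as follows. This is an illustrative toy, not the authors' implementation: the tiny tanh "generator", the plain MSE losses (the paper's branches also use adversarial and other terms), and all names and sizes are assumptions. The point it demonstrates is that two data branches contribute gradients to one shared set of generator weights.

```python
import numpy as np

# Minimal sketch of two training branches updating ONE shared conditional
# generator (the abstract's "shared weights" scheme). The tanh generator,
# the MSE losses, and the synthetic pose/frame vectors are illustrative
# assumptions, not the paper's actual network or objectives.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3)) * 0.1      # the single shared weight matrix

def generate(pose, W):
    """Toy conditional generator: maps a 3-d pose code to a 4-d 'frame'."""
    return np.tanh(W @ pose)

def branch_loss(W, pose, frame):
    """Reconstruction loss for one branch (a stand-in for the paper's
    paired / adversarial objectives)."""
    return np.mean((generate(pose, W) - frame) ** 2)

# Branch A: 'paired' data (in the paper, self-generated from the
# reference video). Branch B: a second data stream for the same generator.
pose_a, frame_a = rng.normal(size=3), rng.uniform(-0.9, 0.9, size=4)
pose_b, frame_b = rng.normal(size=3), rng.uniform(-0.9, 0.9, size=4)

def total_loss(W):
    # Both branches contribute gradients to the SAME weights.
    return branch_loss(W, pose_a, frame_a) + branch_loss(W, pose_b, frame_b)

def num_grad(f, W, eps=1e-5):
    """Central finite differences, to keep the sketch dependency-free."""
    g = np.zeros_like(W)
    for idx in np.ndindex(W.shape):
        Wp, Wm = W.copy(), W.copy()
        Wp[idx] += eps
        Wm[idx] -= eps
        g[idx] = (f(Wp) - f(Wm)) / (2 * eps)
    return g

before = total_loss(W)
for _ in range(300):
    W -= 0.2 * num_grad(total_loss, W)  # one update serves both branches
after = total_loss(W)                   # loss falls on both branches' data
```

Because both branch losses are differentiated with respect to the same matrix `W`, improving either branch moves the one shared generator, which is the mechanism the abstract describes.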

    Citations

    Publications citing this paper (showing 1-10 of 16).

    Video-to-Video Synthesis

    TransMoMo: Invariance-Driven Unsupervised Video Motion Retargeting

    Neural Human Video Rendering: Joint Learning of Dynamic Textures and Rendering-to-Video Translation

    GAC-GAN: A General Method for Appearance-Controllable Human Video Motion Transfer

    Learning character-agnostic motion for motion retargeting in 2D

    Everybody Dance Now

    Do As I Do: Transferring Human Motion and Appearance between Monocular Videos with Spatial and Temporal Constraints
