TransMoMo: Invariance-Driven Unsupervised Video Motion Retargeting

@inproceedings{Yang2020TransMoMoIU,
  title={TransMoMo: Invariance-Driven Unsupervised Video Motion Retargeting},
  author={Zhuoqian Yang and Wentao Zhu and Wayne Wu and Chen Qian and Qiang Zhou and Bolei Zhou and Chen Change Loy},
  booktitle={2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2020},
  pages={5305--5314}
}
Abstract: We present a lightweight video motion retargeting approach, TransMoMo, that transfers the motion of a person in a source video realistically to a video of a target person. Without using any paired data for supervision, the proposed method can be trained in an unsupervised manner by exploiting the invariance properties of three orthogonal factors of variation: motion, structure, and view-angle. Specifically, with loss functions carefully derived based on invariance, we…
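
The invariance idea in the abstract can be made concrete with a small sketch: if motion, structure, and view-angle are truly orthogonal factors, a motion encoder's output should not change when the input sequence is perturbed along one of the other factors. The PyTorch code below illustrates one such invariance loss. It is not the authors' implementation; the module names, tensor shapes, and the centroid-scaling perturbation are all illustrative assumptions.

# A minimal sketch of the invariance idea, not the authors' implementation.
# Names, shapes, and the perturbation below are illustrative assumptions:
# we only show how a motion code can be penalized for changing under a
# structure-like perturbation of the input skeleton sequence.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SeqEncoder(nn.Module):
    """1-D conv encoder over a (batch, 2*joints, time) 2D keypoint sequence."""

    def __init__(self, in_ch: int, code_ch: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_ch, 64, kernel_size=7, padding=3),
            nn.ReLU(inplace=True),
            nn.Conv1d(64, code_ch, kernel_size=7, padding=3),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def scale_about_center(seq: torch.Tensor, scale: float) -> torch.Tensor:
    """Crude structure perturbation: uniformly rescale the body about the
    per-frame joint centroid (a stand-in for per-limb length rescaling)."""
    b, c, t = seq.shape
    xy = seq.view(b, c // 2, 2, t)          # (batch, joints, xy, time)
    center = xy.mean(dim=1, keepdim=True)   # per-frame centroid
    return ((xy - center) * scale + center).view(b, c, t)


def motion_invariance_loss(enc_m: nn.Module, seq: torch.Tensor) -> torch.Tensor:
    """The motion code should stay fixed when only 'structure' is perturbed;
    the unperturbed code is detached so it acts as a regression target."""
    z = enc_m(seq)
    z_perturbed = enc_m(scale_about_center(seq, scale=1.2))
    return F.mse_loss(z_perturbed, z.detach())


if __name__ == "__main__":
    joints, frames = 15, 64
    enc_m = SeqEncoder(in_ch=2 * joints, code_ch=128)
    seq = torch.randn(4, 2 * joints, frames)  # fake 2D keypoint sequences
    loss = motion_invariance_loss(enc_m, seq)
    loss.backward()
    print(float(loss))

The paper derives analogous invariance terms for the other factors as well (roughly, the structure code should be stable across time and under view changes); the sketch above shows only one motion-side term under those stated assumptions.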
