FaR-GAN for One-Shot Face Reenactment

@article{Hao2020FaRGANFO,
  title={FaR-GAN for One-Shot Face Reenactment},
  author={Hanxiang Hao and Sriram Baireddy and Amy R. Reibman and Edward J. Delp},
  journal={ArXiv},
  year={2020},
  volume={abs/2005.06402}
}
  • Hanxiang Hao, Sriram Baireddy, Amy R. Reibman, Edward J. Delp
  • Published 2020
  • Computer Science
  • ArXiv
  • Animating a static face image with target facial expressions and movements is important in the area of image editing and movie production. This face reenactment process is challenging due to the complex geometry and movement of human faces. Previous work usually requires a large set of images from the same person to model the appearance. In this paper, we present a one-shot face reenactment model, FaR-GAN, that takes only one face image of any given source identity and a target expression as…
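The abstract describes the one-shot interface: a generator conditioned on a single source image plus a target-expression encoding produces the reenacted face. The sketch below illustrates that input/output contract only; the function name, the use of landmark heatmaps, and the toy per-pixel "network" are assumptions for illustration, not the paper's actual FaR-GAN architecture.

```python
import numpy as np

def one_shot_reenact(source_img, target_expr, weights):
    """Hypothetical generator G(source, expression) -> reenacted face.

    source_img:  (H, W, 3) source identity image
    target_expr: (H, W, K) target-expression encoding (e.g. landmark heatmaps)
    weights:     (3 + K, 3) toy per-pixel mixing standing in for the network
    """
    # Condition on both identity and target expression by channel concatenation.
    x = np.concatenate([source_img, target_expr], axis=-1)
    out = x @ weights          # per-pixel linear map (a stand-in for G)
    return np.tanh(out)        # output image normalized to [-1, 1]

# Toy usage: 64x64 source face, 68 landmark heatmaps, random weights.
rng = np.random.default_rng(0)
src = rng.standard_normal((64, 64, 3))
expr = rng.standard_normal((64, 64, 68))
w = rng.standard_normal((3 + 68, 3)) * 0.01
fake = one_shot_reenact(src, expr, w)
print(fake.shape)  # (64, 64, 3)
```

The key property being one-shot is visible in the signature: only one image of the source identity is consumed, with all expression information carried by the second argument.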

