Corpus ID: 227126804

Iterative Text-based Editing of Talking-heads Using Neural Retargeting

@article{Yao2020IterativeTE,
  title={Iterative Text-based Editing of Talking-heads Using Neural Retargeting},
  author={Xin-Wei Yao and Ohad Fried and K. Fatahalian and Maneesh Agrawala},
  journal={ArXiv},
  year={2020},
  volume={abs/2011.10688}
}
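For convenience, the entry above can be used from a LaTeX document as follows. This is a minimal sketch: the bibliography filename `references.bib` is an assumed placeholder, and `plain` is just one possible bibliography style.

```latex
% Minimal sketch, assuming the BibTeX entry above is saved
% in references.bib (hypothetical filename).
\documentclass{article}
\begin{document}
Text-based talking-head editing~\cite{Yao2020IterativeTE}
enables an iterative transcript-driven workflow.
\bibliographystyle{plain} % any style accepting @article works
\bibliography{references} % compile: latex, bibtex, latex, latex
\end{document}
```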
We present a text-based tool for editing talking-head video that enables an iterative editing workflow. On each iteration, users can edit the wording of the speech, further refine mouth motions if necessary to reduce artifacts, and manipulate non-verbal aspects of the performance by inserting mouth gestures (e.g. a smile) or changing the overall performance style (e.g. energetic, mumble). Our tool requires only 2-3 minutes of target actor video, and it synthesizes the video for each iteration…
