Corpus ID: 227126804

Iterative Text-based Editing of Talking-heads Using Neural Retargeting

@article{Yao2020IterativeTE,
  title={Iterative Text-based Editing of Talking-heads Using Neural Retargeting},
  author={Xinwei Yao and Ohad Fried and Kayvon Fatahalian and Maneesh Agrawala},
  journal={ArXiv},
  year={2020},
  volume={abs/2011.10688}
}
We present a text-based tool for editing talking-head video that enables an iterative editing workflow. On each iteration, users can edit the wording of the speech, further refine mouth motions if necessary to reduce artifacts, and manipulate non-verbal aspects of the performance by inserting mouth gestures (e.g. a smile) or changing the overall performance style (e.g. energetic, mumble). Our tool requires only 2-3 minutes of video of the target actor, and it synthesizes the video for each iteration…
