Out of Time: Automated Lip Sync in the Wild

@inproceedings{Chung2016OutOT,
  title={Out of Time: Automated Lip Sync in the Wild},
  author={Joon Son Chung and Andrew Zisserman},
  booktitle={ACCV Workshops},
  year={2016}
}
The goal of this work is to determine the audio-video synchronisation between mouth motion and speech in a video. We propose a two-stream ConvNet architecture that enables a similarity metric between the sound and the mouth images to be learnt from unlabelled data. The trained network is used to determine the lip-sync error in a video. We apply the network to two further tasks: active speaker detection and lip reading. On both tasks we set a new state-of-the-art on standard benchmark datasets. 
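The two-stream network described above is trained so that embeddings of in-sync audio/mouth pairs lie close together while off-sync pairs are pushed apart, and the lip-sync error is then found by searching over temporal offsets for the minimum distance. A minimal numpy sketch of both ideas is below; the function names, the margin value, and the embedding shapes are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def contrastive_loss(audio_emb, video_emb, labels, margin=20.0):
    """Pairwise contrastive objective over audio/video embeddings.

    labels[n] = 1 for genuine (in-sync) pairs, 0 for false (off-sync) pairs.
    The margin value is an illustrative assumption.
    """
    d = np.linalg.norm(audio_emb - video_emb, axis=1)    # Euclidean distance per pair
    pos = labels * d ** 2                                # pull genuine pairs together
    neg = (1 - labels) * np.maximum(margin - d, 0.0) ** 2  # push false pairs past the margin
    return 0.5 * np.mean(pos + neg)

def find_sync_offset(audio_emb, video_emb, max_offset=5):
    """Slide audio against video and return the offset with minimum mean distance.

    audio_emb, video_emb: (T, D) arrays of per-frame embeddings from the two streams.
    """
    best_off, best_d = 0, np.inf
    for off in range(-max_offset, max_offset + 1):
        if off >= 0:
            a, v = audio_emb[off:], video_emb[:len(video_emb) - off]
        else:
            a, v = audio_emb[:off], video_emb[-off:]
        n = min(len(a), len(v))
        d = np.mean(np.linalg.norm(a[:n] - v[:n], axis=1))
        if d < best_d:
            best_off, best_d = off, d
    return best_off
```

At test time, applying the offset search over a video clip yields the lip-sync error directly; the same distance signal also supports the active speaker detection task, since a speaker whose mouth motion matches the audio at some small offset produces a low minimum distance.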