Speech-driven head motion synthesis using neural networks

Chuang Ding, Pengcheng Zhu, Lei Xie, Dongmei Jiang, Zhong-Hua Fu

This paper presents a neural network approach for speech-driven head motion synthesis, which can automatically predict a speaker's head movement from his or her speech. Specifically, we realize the speech-to-head-motion mapping by learning a multi-layer perceptron from audio-visual broadcast news data. First, we show that a generatively pre-trained neural network significantly outperforms both a randomly initialized network and the hidden Markov model (HMM) approach. Second, we demonstrate that the feature…
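The mapping described above can be sketched as a small feed-forward regression network. The sketch below is illustrative only: the layer sizes, the 39-dimensional acoustic input, the 3-dimensional head-pose output, and the function names are assumptions, not the paper's actual configuration, and the weights here are randomly initialized rather than generatively pre-trained as the paper proposes.

```python
import numpy as np

# Minimal sketch of a speech-to-head-motion multi-layer perceptron.
# All dimensions below are illustrative assumptions.
rng = np.random.default_rng(0)

ACOUSTIC_DIM = 39   # e.g. 13 MFCCs + deltas + delta-deltas (assumed)
HIDDEN_DIM = 256    # assumed hidden-layer width
POSE_DIM = 3        # e.g. head pitch, yaw, roll (assumed)

def init_layer(n_in, n_out):
    """Small random weights; in the paper these would instead come
    from generative pre-training rather than random initialization."""
    return rng.normal(0.0, 0.01, (n_in, n_out)), np.zeros(n_out)

W1, b1 = init_layer(ACOUSTIC_DIM, HIDDEN_DIM)
W2, b2 = init_layer(HIDDEN_DIM, POSE_DIM)

def predict_head_pose(acoustic_frames):
    """Map a (T, ACOUSTIC_DIM) sequence of speech frames to a
    (T, POSE_DIM) sequence of head-pose parameters, frame by frame."""
    h = np.tanh(acoustic_frames @ W1 + b1)  # hidden layer
    return h @ W2 + b2                      # linear output for regression

frames = rng.normal(size=(100, ACOUSTIC_DIM))  # ~1 s of 10 ms frames
poses = predict_head_pose(frames)
print(poses.shape)  # (100, 3)
```

In practice such a network would be trained frame-by-frame on paired acoustic and head-motion features extracted from the audio-visual data, with the pre-trained weights providing the initialization that the paper reports outperforming a random start.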


Publications referenced by this paper (showing 10 of 29 references):

Recent advances in deep learning for speech research at Microsoft

2013 IEEE International Conference on Acoustics, Speech and Signal Processing • 2013

Automatic head motion prediction from speech data


A Fast Learning Algorithm for Deep Belief Nets

Neural Computation • 2006

Natural head motion synthesis driven by acoustic prosodic features

Journal of Visualization and Computer Animation • 2005

The HTK book (for HTK version 3.4)

S. Young, G. Evermann, +7 authors, D. Povey
Cambridge University Engineering Department, vol. 2, no. 2, pp. 2–3, 2006

Speech driven talking head from estimated articulatory features

2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) • 2014
