Jakob Hollenstein
We show how to visually control acoustic speech synthesis by modelling the dependency between visual and acoustic parameters within the Hidden Semi-Markov Model (HSMM) based speech synthesis framework. A joint audiovisual model is trained with 3D facial marker trajectories as visual features. Since the dependencies of acoustic features on visual features …
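In joint audiovisual models of this kind, the dependency of acoustic on visual parameters is commonly expressed through the conditional distribution of a jointly Gaussian feature vector. The sketch below is illustrative only (toy dimensions and a random covariance, not the paper's trained model); it applies the standard Gaussian conditioning formula to a stacked [visual; acoustic] vector:

```python
import numpy as np

# Toy joint Gaussian over stacked [visual; acoustic] features.
# Conditioning acoustic on observed visual features uses
# mu_{a|v} = mu_a + S_av S_vv^{-1} (v - mu_v).
rng = np.random.default_rng(0)
dv, da = 3, 2                        # visual / acoustic dims (toy sizes)
A = rng.standard_normal((dv + da, dv + da))
cov = A @ A.T + np.eye(dv + da)      # a valid (positive-definite) joint covariance
mu = np.zeros(dv + da)

mu_v, mu_a = mu[:dv], mu[dv:]
S_vv = cov[:dv, :dv]
S_av = cov[dv:, :dv]
S_aa = cov[dv:, dv:]

v_obs = rng.standard_normal(dv)      # observed visual (marker) features
gain = S_av @ np.linalg.inv(S_vv)
mu_a_given_v = mu_a + gain @ (v_obs - mu_v)   # conditional acoustic mean
S_a_given_v = S_aa - gain @ S_av.T            # conditional covariance (Schur complement)
```

Driving synthesis then amounts to using `mu_a_given_v` (per state, within the HSMM) as the acoustic target given the visual trajectory.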