Audio-visual speech perception off the top of the head.

  Authors: Chris Davis and Jeesun Kim
  Volume: 100(3)
The study examined whether people can extract speech-related information from a talker's upper face, presented either as normally textured videos (Experiments 1 and 3) or as videos showing only the outline of the head (Experiments 2 and 4). Experiments 1 and 2 used within- and cross-modal matching tasks. In the within-modal task, observers were presented with two pairs of short silent video clips showing the top part of a talker's head. In the cross-modal task, pairs of audio and…