The left inferior frontal gyrus (LIFG, BA 44, 45, 47) has been associated with linguistic processing (from sentence- to syllable-parsing) as well as action analysis. We hypothesize that the function of the LIFG may be the monitoring of action, a function well adapted to agent deixis (verbal pointing at the agent of an action). The aim of this fMRI study was …
In a recent paper in this journal [Speech Communication 41, 221–231], the authors display a robust asymmetry effect in vowel discrimination, present in infants as well as adults. They interpret this effect as a preference for peripheral vowels, providing an anchor for comparison. We discuss their data in the framework of the Dispersion–Focalisation Theory of vowel …
A new articulatory model, GENTIANE — elaborated from an X-ray film built on a corpus of VCV sequences performed by a skilled French speaker — enabled us to analyse the coarticulation of the main consonant types in vowel contexts from a degrees-of-freedom approach. The data displayed an overall coarticulatory versatility, except for an absolute invariance in the …
We used functional magnetic resonance imaging (fMRI) to localize the brain areas involved in the imagery analogue of the verbal transformation effect, that is, the perceptual changes that occur when a speech form is cycled in rapid and continuous mental repetition. Two conditions were contrasted: a baseline condition involving the simple mental repetition …
Perceptual changes are experienced during rapid and continuous repetition of a speech form, leading to an auditory illusion known as the verbal transformation effect. Although verbal transformations are considered to reflect mainly the perceptual organization and interpretation of speech, the present study was designed to test whether or not speech …
Why does audio [b] give more [d] percepts with visual [g] than with visual [d], as in the present classical McGurk experiment? One explanation for this asymmetry could be a language bias towards [d]. Concerning the visual information, and contrary to what is sometimes taken for granted in part of the lipreading literature, visual [g] does not give …
The purpose of this contribution is to improve our knowledge of the time course of visual and auditory perception with regard to the representation of sound types as different in their phenomenological format as vowels and glides. Our results on the perception of vowel-to-vowel gestures via the production of epenthetic glides in between – according to our …
The modeling of anticipatory coarticulation has been the subject of longstanding debate for more than 40 years. Empirical investigations in the articulatory domain have converged toward two extreme modeling approaches: a maximal anticipation behavior (Look-ahead model) or a fixed pattern (Time-locked model). However, empirical support for any of these models …
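The contrast between the two modeling approaches named above can be made concrete. As a minimal sketch (the timeline values, function names, and fixed-lead parameter here are illustrative assumptions, not the abstract's data): under a Look-ahead model, an anticipatory gesture such as lip rounding begins as soon as the last conflicting segment ends, so its extent stretches with the intervening consonant span; under a Time-locked model, it begins a fixed interval before the upcoming vowel, regardless of that span.

```python
def lookahead_onset(last_conflict_end: float) -> float:
    # Look-ahead model: anticipation starts as early as possible,
    # i.e. immediately after the last segment whose specification
    # conflicts with the upcoming gesture.
    return last_conflict_end

def timelocked_onset(vowel_onset: float, fixed_lead: float) -> float:
    # Time-locked model: anticipation begins a fixed interval
    # before the target vowel, independent of the consonant span.
    return vowel_onset - fixed_lead

# Hypothetical timeline (seconds): the last conflicting segment
# ends at 0.10 s, the rounded vowel begins at 0.30 s, and the
# assumed fixed lead is 0.12 s.
print(lookahead_onset(0.10))         # spans the whole intervening interval
print(timelocked_onset(0.30, 0.12))  # starts a fixed 0.12 s before the vowel
```

On this toy timeline the two models predict onsets 0.08 s apart; lengthening the consonant span widens that gap under the Look-ahead model but leaves the Time-locked prediction unchanged, which is exactly the kind of divergence the empirical investigations mentioned above test for.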