Sensorimotor Representation of Speech Perception. Cross-Decoding of Place of Articulation Features during Selective Attention to Syllables in 7T fMRI

@article{ArchilaMelndez2018SensorimotorRO,
  title={Sensorimotor Representation of Speech Perception. Cross-Decoding of Place of Articulation Features during Selective Attention to Syllables in 7T fMRI},
  author={Mario E. Archila-Mel{\'e}ndez and Giancarlo Valente and Jo{\~a}o M. Correia and Rob P. W. Rouhl and Vivianne H. van Kranen-Mastenbroek and Bernadette M. Jansma},
  journal={eNeuro},
  year={2018},
  volume={5}
}
Abstract

Sensorimotor integration, the translation between acoustic signals and motoric programs, may constitute a crucial mechanism for speech. During speech perception, the acoustic-motoric translations include the recruitment of cortical areas for the representation of speech articulatory features, such as place of articulation. Selective attention can shape the processing and performance of speech perception tasks. Whether and where sensorimotor integration takes place during attentive…
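The cross-decoding scheme named in the title and abstract can be sketched as follows: a classifier is trained on voxel activity patterns from one listening condition and tested on patterns from another, so that above-chance transfer suggests a shared representation of the decoded feature. This is a minimal illustration on synthetic data; the variable names, condition labels, and signal model are assumptions for demonstration, not details from the study.

```python
# Hedged sketch of cross-decoding (cross-condition MVPA): train a linear
# classifier on simulated voxel patterns from one condition, test it on
# patterns from another. All data are synthetic.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 200

# One shared multivoxel "signal" pattern distinguishes two stimulus
# classes (e.g. labial vs. coronal place of articulation).
signal = rng.normal(size=n_voxels)

def simulate(n):
    """Trials = Gaussian noise plus a weak class-dependent pattern."""
    y = rng.integers(0, 2, n)
    X = rng.normal(size=(n, n_voxels)) + np.outer(2 * y - 1, signal) * 0.3
    return X, y

X_cond_a, y_cond_a = simulate(n_trials)  # e.g. attentive listening
X_cond_b, y_cond_b = simulate(n_trials)  # e.g. a second task condition

clf = make_pipeline(StandardScaler(), LinearSVC())
clf.fit(X_cond_a, y_cond_a)              # train in condition A
acc = clf.score(X_cond_b, y_cond_b)      # test in condition B
print(f"cross-decoding accuracy: {acc:.2f}")
```

In a real analysis the two conditions would come from independent fMRI runs, and significance would be assessed against a permutation-based chance distribution rather than the nominal 0.5 level.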

Citations

Formant Space Reconstruction From Brain Activity in Frontal and Temporal Regions Coding for Heard Vowels
TLDR
Results revealed that phonological information organizes around formant structure during vowel perception and point to a degree of interdependence, based on acoustic information, between the frontal and temporal ends of the language network.
Electrophysiological Dynamics of Visual Speech Processing and the Role of Orofacial Effectors for Cross-Modal Predictions
TLDR
This study tested whether speech sounds can be predicted from visemic information alone and to what extent interfering with orofacial articulatory effectors affects these predictions; interfering with the motor articulatory system strongly disrupted cross-modal predictions.
The motor system’s [modest] contribution to speech perception
TLDR
The results indicate a small effect of articulatory suppression, on psychometric function thresholds in particular, suggesting at best a minor modulatory role of the speech motor system in perception.
What Acoustic Studies Tell Us About Vowels in Developing and Disordered Speech.
TLDR
Vowels are important to speech intelligibility, intrinsically dynamic, refined in both perceptual and productive aspects beyond the age typically given for their phonetic mastery, and play a role in speech rhythm and prosody.
Combining Gamma With Alpha and Beta Power Modulation for Enhanced Cortical Mapping in Patients With Focal Epilepsy
TLDR
It is concluded that the combination of gamma and beta power modulation during cognitive testing can contribute to the identification of eloquent areas prior to ESM in patients with refractory focal epilepsy.
CEREBRUM-7T: fast and fully-volumetric brain segmentation of out-of-the-scanner 7T MR volumes
TLDR
CEREBRUM-7T is the first deep-learning architecture applied directly to 7T data for segmentation: an optimised end-to-end CNN that segments a whole 7T T1w MRI brain volume at once, without partitioning it into 2D or 3D tiles.
CEREBRUM‐7T: Fast and Fully Volumetric Brain Segmentation of 7 Tesla MR Volumes
TLDR
CEREBRUM‐7T is presented, an optimised end‐to‐end convolutional neural network, which allows fully automatic segmentation of a whole 7T T1w MRI brain volume at once, without partitioning the volume, pre‐processing, nor aligning it to an atlas.
Invariance of neural representations cannot be validly inferred from neuroimaging decoding studies: Empirical and computational evidence
TLDR
In a functional MRI study with human participants of both sexes, it is shown that the cross-classification test produces false positives, in many cases leading to the conclusion that orientation is encoded invariantly from spatial position, and that spatial position is encoded invariantly from orientation, in primary visual cortex.
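The caveat raised by this citing paper can be illustrated with a toy simulation: a classifier trained in one condition can transfer above chance even when the second condition's code contains a large component absent from the first, so successful cross-decoding alone does not establish an invariant representation. This is a hedged sketch on synthetic data under assumed signal parameters, not a reproduction of the cited study's analyses.

```python
# Toy demonstration that above-chance cross-classification does not imply
# an invariant code: condition B's class signal only partially overlaps
# condition A's, yet a classifier trained on A still transfers to B.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_trials, n_voxels = 100, 150

shared = rng.normal(size=n_voxels)    # component common to both conditions
unique_b = rng.normal(size=n_voxels)  # component present only in condition B

def trials(pattern, n):
    """Trials = noise plus a class-dependent multivoxel pattern."""
    y = rng.integers(0, 2, n)
    X = rng.normal(size=(n, n_voxels)) + np.outer(2 * y - 1, pattern) * 0.3
    return X, y

X_a, y_a = trials(shared, n_trials)                  # condition A code
X_b, y_b = trials(shared + 2 * unique_b, n_trials)   # B's code differs

clf = LinearSVC().fit(X_a, y_a)
transfer_acc = clf.score(X_b, y_b)   # above chance despite non-invariance
print(f"cross-classification accuracy: {transfer_acc:.2f}")
```

Because the two condition-specific codes share a component, transfer succeeds even though the representations are demonstrably different, which is exactly the inferential trap the cited paper warns against.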

References

SHOWING 1-10 OF 67 REFERENCES
Decoding Articulatory Features from fMRI Responses in Dorsal Speech Regions
TLDR
The role of articulatory representations during passive listening is examined using carefully controlled stimuli (spoken syllables) in combination with multivariate fMRI decoding, revealing articulatory-specific brain responses to speech at multiple cortical levels, including auditory, sensorimotor, and motor regions, and suggesting the representation of sensorimotor information during passive speech perception.
Motor cortex maps articulatory features of speech sounds
TLDR
Sound-related somatotopic activation in precentral gyrus shows that, during speech perception, specific motor circuits are recruited that reflect phonetic distinctive features of the speech sounds encountered, thus providing direct neuroimaging support for specific links between the phonological mechanisms for speech perception and production.
Magnetic Brain Response Mirrors Extraction of Phonological Features from Spoken Vowels
TLDR
Results suggest that both N100m latency and source location as well as their interaction reflect properties of speech stimuli that correspond to abstract phonological features.
Distributed Neural Representations of Phonological Features during Speech Perception
TLDR
Functional magnetic resonance imaging and multivoxel pattern analysis are used to investigate the distributed patterns of activation that are associated with the categorical and perceptual similarity structure of 16 consonant exemplars in the English language used in Miller and Nicely's (1955) classic study of acoustic confusability.
Is the Sensorimotor Cortex Relevant for Speech Perception and Understanding? An Integrative Review
TLDR
It is concluded that frontoparietal cortices, including ventral motor and somatosensory areas, reflect phonological information during speech perception and exert a causal influence on language understanding.
The neuroanatomical and functional organization of speech perception
Hierarchical organization of speech perception in human auditory cortex
TLDR
A multi-stage hierarchical stream for speech sound processing extending ventrolaterally from the superior temporal plane to the superior temporal sulcus is suggested, in which neurons code for increasingly complex spectrotemporal features.
Phonetic Feature Encoding in Human Superior Temporal Gyrus
TLDR
High-density direct cortical surface recordings in humans while they listened to natural, continuous speech were used to reveal the STG representation of the entire English phonetic inventory, demonstrating the acoustic-phonetic representation of speech in human STG.