Left Auditory Cortex and Amygdala, but Right Insula Dominance for Human Laughing and Crying

@article{Sander2005LeftAC,
  title={Left Auditory Cortex and Amygdala, but Right Insula Dominance for Human Laughing and Crying},
  author={Kerstin Sander and Henning Scheich},
  journal={Journal of Cognitive Neuroscience},
  year={2005},
  volume={17},
  pages={1519--1531}
}
  • K. Sander, H. Scheich
  • Published 1 October 2005
  • Psychology, Biology
  • Journal of Cognitive Neuroscience
Evidence suggests that animals process their own species-specific communication sounds predominantly in the left hemisphere. In contrast, processing linguistic aspects of human speech involves the left hemisphere, whereas processing some prosodic aspects of speech, as well as other not yet well-defined attributes of human voices, predominantly involves the right hemisphere. This leaves open the question of hemispheric processing of universal (species-specific) human vocalizations that… 

Hearing emotional sounds: category representation in the human amygdala

Results showed that the amygdala responded more to human vocalizations than to animal vocalizations and sounds of inanimate objects in both negative and neutral valences, and more to animal sounds than to object sounds in the neutral condition.

An Auditory Region in the Primate Insular Cortex Responding Preferentially to Vocal Communication Sounds

These findings characterize the caudal insula as a selectively responding auditory region, possibly part of a processing stream involved in the representation of communication sounds, and uncover a basis for a supposed role of the insula in processing vocal communication sounds such as speech.

FMRI activations of amygdala, cingulate cortex, and auditory cortex by infant laughing and crying

The gender‐dependent correlates of neural activity in amygdala and ACC may reflect neural predispositions in women for responses to preverbal infant vocalizations, whereas the gender‐independent similarity of activation patterns in PCC and AC may reflect more sensory‐based and cognitive levels of neural processing.

Semiotic aspects of human nonverbal vocalizations: a functional imaging study

Using a decision task, hemodynamic responses to affective bursts, vegetative sounds, and vocal gestures were measured by means of functional magnetic resonance imaging and suggested a left-hemisphere temporo-parietal ‘auditory-to-meaning interface’ related to the mechanisms of speech processing.

Revealing the functions of supra-temporal and insular auditory responsive areas in humans

These findings suggest that even though the InsP has basic auditory response properties similar to those of the primary and non-primary cortex, it may not directly participate in the formation of auditory perception.

Functional But Not Structural Networks of the Human Laryngeal Motor Cortex Show Left Hemispheric Lateralization during Syllable But Not Breathing Production

Bilateral organization of functional LMC networks during controlled breathing supports its indispensable role in all types of laryngeal behaviors, and significant left-hemispheric lateralization of functional networks during simple but highly learned voice production suggests the readiness of the LMC network for production of a complex voluntary behavior, such as human speech.

Hearing others’ pain: neural activity related to empathy

The brain regions involved in hearing others’ pain are similar to those activated in the empathic processing of visual stimuli, a finding that emphasises the modulating role of interindividual differences in affective empathy.
...

References


Audition of laughing and crying leads to right amygdala activation in a low-noise fMRI setting.

Auditory perception of laughing and crying activates human amygdala regardless of attentional state.

Human temporal lobe activation by speech and nonspeech sounds.

Recording of blood oxygenation signals from the temporal lobes of normal volunteers using functional magnetic resonance imaging indicates functional subdivision of the human lateral temporal cortex and provides a preliminary framework for understanding the cortical processing of speech sounds.

Defining a left-lateralized response specific to intelligible speech using fMRI.

The results demonstrate that there are neural responses to intelligible speech along the length of the left lateral temporal neocortex, although the precise processing roles of the anterior and posterior regions cannot be determined from this study.

Left hemisphere dominance for processing vocalizations in adult, but not infant, rhesus monkeys: field experiments.

  • M. Hauser, K. Andersson
  • Psychology, Biology
    Proceedings of the National Academy of Sciences of the United States of America
  • 1994
Like humans, adult rhesus monkeys also evidence left hemisphere dominance for processing species-specific vocalizations; however, the emergence of such asymmetry may depend on both differential maturation of the two hemispheres and experience with the species-typical vocal repertoire.

Temporal lobe regions engaged during normal speech comprehension.

Implicit comprehension of simple narrative speech was contrasted with listening to reversed versions of the narratives, demonstrating that normal comprehension, free of task demands that do not form part of everyday discourse, engages regions distributed between the two temporal lobes, more widely on the left.

Left hemisphere advantage in the mouse brain for recognizing ultrasonic communication calls

In the house mouse, which has a much less elaborate forebrain than man or macaque monkey, the ultrasonic calls that are emitted by young mice to evoke maternal caring behaviour are preferentially recognized by the left hemisphere, suggesting that lateralization of this function evolved early in mammals.

Adaptation to speaker's voice in right anterior temporal lobe

Only one cortical region, located in the anterior part of the right superior temporal sulcus (STS), responded differently to the two conditions: activation relative to the silent baseline was significantly reduced when syllables were spoken by a single voice than when they were spoken by different voices.

Dynamic Brain Activation during Processing of Emotional Intonation: Influence of Acoustic Parameters, Emotional Valence, and Sex

Functional magnetic resonance imaging was used to disentangle brain activation associated with extraction of specific acoustic cues and detection of specific emotional states and might prove helpful in reconciling the controversial previous clinical and experimental data.

Neural lateralization of species-specific vocalizations by Japanese macaques (Macaca fuscata).

The results suggest that Japanese macaques engage left-hemisphere processors for the analysis of communicatively significant sounds that are analogous to the lateralized mechanisms used by humans listening to speech.
...