Corpus ID: 251979620

Bayesian Mixed Multidimensional Scaling for Auditory Processing

@inproceedings{Rebaudo2022BayesianMM,
  title={Bayesian Mixed Multidimensional Scaling for Auditory Processing},
  author={Giovanni Rebaudo and Fernando Llanos and Bharath Chandrasekaran and Abhra Sarkar},
  year={2022}
}
Speech sounds subtly differ on a multidimensional auditory-perceptual space. Distinguishing speech sound categories is a perceptually demanding task, with large-scale individual differences as well as inter-population (e.g., native versus non-native listeners) heterogeneity. The neural representational differences underlying the inter-individual and cross-language differences are not completely understood. These questions have often been examined using joint analyses that ignore the individual… 
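The method named in the title builds on multidimensional scaling (MDS), which recovers latent perceptual coordinates from pairwise dissimilarities. As a point of reference only, the sketch below implements generic classical (Torgerson) MDS with NumPy on a toy dissimilarity matrix among four hypothetical speech tokens; it is not the authors' Bayesian mixed model, which, per the abstract, additionally addresses individual- and population-level heterogeneity that this sketch ignores.

import numpy as np

def classical_mds(D, k=2):
    # Classical (Torgerson) MDS: embed n points in k dimensions from an
    # n x n symmetric dissimilarity matrix D via double centering and
    # eigendecomposition.
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n              # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                      # double-centered squared dissimilarities
    eigvals, eigvecs = np.linalg.eigh(B)             # eigenvalues in ascending order
    idx = np.argsort(eigvals)[::-1][:k]              # indices of the k largest eigenvalues
    scale = np.sqrt(np.clip(eigvals[idx], 0, None))  # guard against tiny negative eigenvalues
    return eigvecs[:, idx] * scale                   # n x k coordinate matrix

# Toy dissimilarities among four hypothetical speech tokens (illustration only)
D = np.array([[0.0, 1.0, 2.0, 2.2],
              [1.0, 0.0, 1.5, 2.0],
              [2.0, 1.5, 0.0, 0.8],
              [2.2, 2.0, 0.8, 0.0]])
print(classical_mds(D, k=2))

A Bayesian treatment would instead place priors on the latent coordinates and infer them by posterior sampling, which is what would allow the embedding to vary across individual listeners and listener populations.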

References

Showing 1-10 of 38 references

A distributed dynamic brain network mediates linguistic tone representation and categorization

Emerging Native-Similar Neural Representations Underlie Non-Native Speech Category Learning Success

The findings provide important insights into the experience-dependent representational neuroplasticity underlying successful speech learning in adulthood, and could be leveraged to design individualized, feedback-based training paradigms that maximize learning efficacy.

Separate Neural Processing of Timbre Dimensions in Auditory Sensory Memory

The results extend to timbre dimensions the separation of representations in auditory sensory memory that has already been reported for basic perceptual attributes of sound sources (pitch, loudness, duration, and location).

Acoustic correlates of timbre space dimensions: a confirmatory study using synthetic tones.

Listeners presented with carefully controlled synthetic tones use attack time, spectral centroid, and spectrum fine structure in dissimilarity-rating experiments; spectral flux appears to be a less salient timbre parameter, with its salience depending on how many other dimensions vary concurrently in the stimulus set.

Phonetic Feature Encoding in Human Superior Temporal Gyrus

High-density direct cortical surface recordings, obtained while participants listened to natural, continuous speech, reveal how the superior temporal gyrus (STG) represents the entire English phonetic inventory, demonstrating the acoustic-phonetic representation of speech in human STG.

Efficient Coding and Statistically Optimal Weighting of Covariance among Acoustic Attributes in Novel Sounds

Overall, the simple strength of the principal correlation is inadequate to predict listener performance; implications for reduced perceptual dimensionality in speech perception and plausible neural substrates are discussed.

Neuroplasticity in the processing of pitch dimensions: a multidimensional scaling analysis of the mismatch negativity.

The MMN can serve as an index of pitch features that are differentially weighted depending on a listener's experience with lexical tones and their acoustic correlates within a particular tone space.

Dynamic Encoding of Acoustic Features in Neural Responses to Continuous Speech

Electroencephalography responses to continuous speech are characterized by time-locking to phoneme instances (the phoneme-related potential); each phoneme instance in continuous speech is found to produce multiple distinguishable neural responses, occurring as early as 50 ms and as late as 400 ms after phoneme onset.

Tone perception in Far Eastern languages.