Distant melodies: statistical learning of nonadjacent dependencies in tone sequences.

Abstract

Human listeners can keep track of statistical regularities among temporally adjacent elements in both speech and musical streams. However, for speech streams, when statistical regularities occur among nonadjacent elements, only certain types of patterns are acquired. Here, using musical tone sequences, the authors investigate learning of nonadjacent dependencies. When the elements were all similar in pitch range and timbre, learners acquired moderately consistent regularities among adjacent tones but did not acquire highly consistent regularities among nonadjacent tones. However, when elements differed in pitch range or timbre, learners acquired statistical regularities among the similar, but temporally nonadjacent, elements. Finally, with a moderate grouping cue, both adjacent and nonadjacent statistics were learned, indicating that statistical learning is governed not only by temporal adjacency but also by Gestalt principles of similarity.
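The abstract does not spell out how the regularities are quantified, but in this literature "statistical regularities" are typically operationalized as transitional probabilities between elements. Below is a minimal sketch, assuming a simple first-order transitional-probability computation over a symbolic tone stream; the tone labels, the stream, and the lag parameter are illustrative and not taken from the paper.

```python
from collections import Counter

def transitional_probabilities(sequence, lag=1):
    """Estimate P(second | first) for element pairs separated by `lag` positions.

    lag=1 counts adjacent pairs; lag=2 skips one intervening element,
    approximating a nonadjacent (A _ B) dependency.
    """
    pair_counts = Counter(zip(sequence, sequence[lag:]))
    first_counts = Counter(sequence[:len(sequence) - lag])
    return {
        (first, second): count / first_counts[first]
        for (first, second), count in pair_counts.items()
    }

# Illustrative stream: 'A' predicts 'C' two positions later (nonadjacent),
# while the intervening tone varies freely.
stream = ['A', 'X', 'C', 'A', 'Y', 'C', 'A', 'Z', 'C', 'A', 'X', 'C']

print(transitional_probabilities(stream, lag=1))  # adjacent pairs are inconsistent
print(transitional_probabilities(stream, lag=2))  # P(C | A) at lag 2 is 1.0
```

Under this toy computation, the adjacent (lag 1) statistics are weak and variable, whereas the nonadjacent (lag 2) dependency is perfectly consistent, which is the kind of contrast the experiments manipulate.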


Cite this paper

@article{Creel2004DistantMS,
  title   = {Distant melodies: statistical learning of nonadjacent dependencies in tone sequences.},
  author  = {Sarah C. Creel and Elissa L. Newport and Richard N. Aslin},
  journal = {Journal of Experimental Psychology: Learning, Memory, and Cognition},
  year    = {2004},
  volume  = {30},
  number  = {5},
  pages   = {1119-1130}
}