Do the same mechanisms underlie processing of music and language? Recent investigations of this question have yielded inconsistent results. Likely factors contributing to discrepant findings are the use of small samples and failure to control for individual differences in cognitive ability. We investigated the relationship between music and speech prosody …
Due to extensive variability in the phonetic realizations of words, there may be few or no proximal spectro-temporal cues that identify a word's onset or even its presence. Dilley and Pitt (2010) showed that the rate of context speech, distal from a to-be-recognized word, can have a sizeable effect on whether or not a word is perceived. This investigation …
Non-native speech differs from native speech in multiple ways. Previous research has described segmental and suprasegmental differences between native and non-native speech in terms of group averages. For example, average speaking rate for non-natives is slower than for natives. However, it is unknown whether non-native speech is also more variable than …
During lexical access, listeners use both signal-based and knowledge-based cues, and information from the linguistic context can affect the perception of acoustic speech information. Recent findings suggest that the various cues used in lexical access are implemented with flexibility and may be affected by information from the larger speech context. We …
Humans unconsciously track a wide array of distributional characteristics in their sensory environment. Recent research in spoken-language processing has demonstrated that the speech rate surrounding a target region within an utterance influences which words, and how many words, listeners hear later in that utterance. On the basis of hypotheses that …
Recent findings [Dilley and Pitt, 2010, Psychological Science, 21, 1664-1670] have shown that manipulating context speech rate in English can cause entire syllables to disappear or appear perceptually. The current studies tested two rate-based explanations of this phenomenon while attempting to replicate and extend these findings to another language, Russian. In …
Prosodic structure is often perceived as exhibiting regularities in the patterning of tone sequences or stressed syllables. Recently, prosodic regularities in the distal (non-local) context have been shown to influence the perceived prosodic constituency of syllables. Three experiments tested the nature of distal prosodic patterns …
Neil Armstrong insisted that his quote upon landing on the moon was misheard, and that he had said "one small step for a man" instead of "one small step for man." What he said is unclear in part because function words like "a" can be reduced and spectrally indistinguishable from the preceding context. Therefore, their presence can be ambiguous, and they may …
The distal prosodic patterning established at the beginning of an utterance has been shown to influence downstream word segmentation and lexical access. In this study, we investigated whether distal prosody also affects word learning in a novel (artificial) language. Listeners were exposed to syllable sequences in which the embedded words were either …