A key mechanism in the organization of turns at talk in conversation is the ability to anticipate or PROJECT the moment of completion of a current speaker's turn. Some authors suggest that this is achieved via lexicosyntactic cues, while others argue that projection is based on intonational contours. We tested these hypotheses in an on-line experiment, …
The pronunciation of the same word may vary considerably as a consequence of its context. The Dutch word tuin (English, garden) may be pronounced tuim if followed by bank (English, bench), but not if followed by stoel (English, chair). In a series of four experiments, we examined how Dutch listeners cope with this context sensitivity in their native …
OBJECTIVE: Ample behavioral evidence suggests that distributional properties of the language environment influence the processing of speech. Yet, how these characteristics are reflected in neural processes remains largely unknown. The present ERP study investigates neurophysiological correlates of phonotactic probability: the distributional frequency of …
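For orientation only (this sketch is not part of the abstract above, and the study's own measure may differ), phonotactic probability is often operationalized as a sum of position-specific biphone frequencies estimated from a lexicon:

$$P_{\text{biphone}}(w) = \sum_{i=1}^{n-1} \frac{C\!\left(s_i s_{i+1} \text{ in position } i\right)}{C\!\left(\text{any biphone in position } i\right)},$$

where $s_1 \dots s_n$ are the segments of word $w$ and $C(\cdot)$ counts (often frequency-weighted) lexicon entries; high- and low-probability items would then differ in this summed value.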
Two experiments examined how Dutch listeners deal with the effects of connected-speech processes, specifically those arising from word-final /t/ reduction (e.g., whether Dutch [tas] is tas, bag, or a reduced-/t/ version of tast, touch). Eye movements of Dutch participants were tracked as they looked at arrays containing 4 printed words, each associated with …
The lexical and phonetic mapping of auditorily confusable L2 nonwords was examined by teaching L2 learners novel words and by later examining their word recognition using an eyetracking paradigm. During word learning, two groups of highly proficient Dutch learners of English learned 20 English nonwords of which 10 contained the English contrast /ɛ/-/æ/ (a …
Understanding foreign speech is difficult, in part because of unusual mappings between sounds and words. It is known that listeners in their native language can use lexical knowledge (about how words ought to sound) to learn how to interpret unusual speech-sounds. We therefore investigated whether subtitles, which provide lexical information, support …
Listeners tune in to talkers' vowels through extrinsic normalization. We asked here whether this process could be based on compensation for the long-term average spectrum (LTAS) of preceding sounds and whether the mechanisms responsible for normalization are indifferent to the nature of those sounds. If so, normalization should apply to nonspeech stimuli. …
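As a rough illustration (not taken from the abstract), the long-term average spectrum of a stretch of preceding sound can be written as the power spectrum averaged over its $N$ analysis frames:

$$\mathrm{LTAS}(f) = \frac{1}{N} \sum_{n=1}^{N} \left| X_n(f) \right|^{2},$$

where $X_n(f)$ is the short-time spectrum of frame $n$. Compensation for the LTAS would then amount to evaluating a target vowel's spectrum relative to this running average, regardless of whether the preceding material is speech or nonspeech.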
This study reports a shadowing experiment, in which one has to repeat a speech stimulus as fast as possible. We tested claims about a direct link between perception and production based on speech gestures, and obtained two types of counterevidence. First, shadowing is not slowed down by a gestural mismatch between stimulus and response. Second, phonetic …