Stephan C. Meylan

Changes in language processing and production accompanying aging have most commonly been interpreted as evidence for age-related cognitive decline. A recent proposal (Ramscar et al., 2014) challenges that interpretation, asserting instead that such changes emerge as a consequence of—and in order to support—processes of lifelong learning like continued …
Word frequencies in natural language follow a highly skewed Zipfian distribution, but the consequences of this distribution for language acquisition are only beginning to be understood. Typically, learning experiments that are meant to simulate language acquisition use uniform word frequency distributions. We examine the effects of Zipfian distributions …
Word frequencies in natural language follow a Zipfian distribution. However, artificial language experiments that are meant to simulate language acquisition generally use uniform word frequency distributions. In the present study we examine whether a Zipfian frequency distribution influences adult learners’ word segmentation performance. Using two …
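The Zipfian rank–frequency distribution contrasted with a uniform distribution in the two abstracts above can be sketched as follows. This is a toy illustration of the mathematical form (frequency proportional to 1/rank^s), not the experimental stimuli used in those studies:

```python
# Toy sketch of a Zipfian rank-frequency distribution: the word of
# rank r has relative frequency proportional to 1 / r**s. With s = 1,
# the most frequent word is twice as frequent as the rank-2 word,
# unlike a uniform distribution where all words are equally likely.
def zipfian_frequencies(n_words, s=1.0):
    """Return normalized relative frequencies for ranks 1..n_words."""
    weights = [1.0 / (r ** s) for r in range(1, n_words + 1)]
    total = sum(weights)
    return [w / total for w in weights]

freqs = zipfian_frequencies(10)
uniform = [1.0 / 10] * 10  # the uniform baseline typical of artificial languages
```

With `s = 1.0`, `freqs[0] / freqs[1]` is exactly 2, and the distribution is far more skewed than the uniform baseline.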
In the classic telephone game, the content of a spoken message evolves as it passes from player to player. Beyond its entertainment value, the telephone game may have considerable scientific utility: Here we investigate the nature of the linguistic knowledge people use to comprehend language by tracking the evolution of a set of visually presented sentences …
The inverse relationship between the length of a word and the frequency of its use, first identified by G.K. Zipf in 1935, is a classic empirical law that holds across a wide range of human languages. We demonstrate that length is one aspect of a much more general property of words: how distinctive they are with respect to other words in a language.
Vector-space models of semantics represent words as continuously valued vectors and measure similarity based on the distance or angle between those vectors. Such representations have become increasingly popular due to the recent development of methods that allow them to be efficiently estimated from very large amounts of data. However, the idea of relating …
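The angle-based similarity measure mentioned above is standardly computed as the cosine of the angle between two word vectors. The sketch below uses tiny made-up vectors, not learned embeddings from any particular model:

```python
# Cosine similarity: the cosine of the angle between two vectors,
# the standard similarity measure in vector-space models of semantics.
from math import sqrt

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical 3-dimensional "embeddings" for illustration only.
vec_cat = [0.9, 0.1, 0.3]
vec_dog = [0.8, 0.2, 0.35]
vec_car = [0.1, 0.9, 0.0]
```

On these toy vectors, `cosine_similarity(vec_cat, vec_dog)` exceeds `cosine_similarity(vec_cat, vec_car)`, capturing the intuition that semantically related words lie at smaller angles.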
Lexical dependencies abound in natural language: words tend to follow particular words or word categories. However, artificial language learning experiments exploring word segmentation have so far lacked such structure. In the present study, we explore whether simple inter-word dependencies influence the word segmentation performance of adult learners. We …
How do children begin to use language to say things they have never heard before? The origins of linguistic productivity have been a subject of heated debate: Whereas generativist accounts posit that children's early language reflects the presence of syntactic abstractions, constructivist approaches instead emphasize gradual generalization derived from …
Current computational models of word learning make use of correspondences between words and observed referents, but cannot yet—as human learners do—leverage information regarding the meaning of other words in the lexicon. Here we develop a Bayesian framework for word learning that learns a lexicon from multiword utterances. In a set of three …
The English definite and indefinite articles (also known as determiners) are a useful index of early morphosyntactic productivity in children’s speech, and give evidence about children’s representation of syntactic abstractions. Previous work (e.g. Pine & Lieven, 1997) used a measure of productivity that shows a strong sensitivity to sample size and does …
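An overlap-style productivity measure in the spirit of the work cited above can be sketched as the proportion of noun types that occur with both the definite and the indefinite article. The utterances below are invented examples, and the exact formula in the literature varies; this is an illustrative sketch, not the paper's own measure:

```python
# Sketch of a determiner-overlap productivity measure: of the noun types
# that occur with either article, what fraction occur with both "a" and
# "the"? Token pairs below are invented for illustration.
def article_overlap(pairs):
    """pairs: iterable of (article, noun) tokens, article in {'a', 'the'}."""
    with_a = {noun for art, noun in pairs if art == "a"}
    with_the = {noun for art, noun in pairs if art == "the"}
    either = with_a | with_the
    return len(with_a & with_the) / len(either) if either else 0.0

tokens = [("a", "ball"), ("the", "ball"), ("the", "dog"), ("a", "cup")]
# One noun type ("ball") appears with both articles out of three noun
# types total, giving an overlap of 1/3.
```

Note that a raw proportion like this grows mechanically with sample size (more tokens mean more chances for any noun to appear with both articles), which is the kind of sensitivity the abstract above alludes to.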