Marshall R. Mayberry

Evidence from numerous studies using the visual world paradigm has revealed both that spoken language can rapidly guide attention in a related visual scene and that scene information can immediately influence comprehension processes. These findings motivated the coordinated interplay account (Knoeferle & Crocker, 2006) of situated comprehension, which …
Empirical evidence demonstrating that sentence meaning is rapidly reconciled with the visual environment has been broadly construed as supporting the seamless interaction of visual and linguistic representations during situated comprehension. Based on recent behavioral and neuroscientific findings, however, we argue for the more deeply rooted coordination …
Subsymbolic systems have been successfully used to model several aspects of human language processing. Subsymbolic parsers are appealing because they allow combining syntactic, semantic, and thematic constraints in sentence interpretation and revising that interpretation as each word is read in. These parsers are also cognitively plausible: processing is …
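A minimal sketch of the incremental idea described above, assuming hypothetical dimensions and untrained random weights rather than the actual architecture from the paper: a simple recurrent network folds each word into a running context layer and re-derives the interpretation after every word.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: word vector, hidden (context) layer, output interpretation.
n_word, n_hidden, n_out = 16, 32, 8

# Random weights stand in for trained parameters.
W_in  = rng.normal(scale=0.1, size=(n_hidden, n_word))
W_rec = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
W_out = rng.normal(scale=0.1, size=(n_out, n_hidden))

def srn_step(word_vec, hidden):
    """Fold one word into the running context and re-derive the interpretation."""
    hidden = np.tanh(W_in @ word_vec + W_rec @ hidden)   # updated context layer
    interpretation = W_out @ hidden                        # current (revisable) interpretation
    return hidden, interpretation

# Process a sentence word by word; the interpretation is revised after each word.
hidden = np.zeros(n_hidden)
for word_vec in rng.normal(size=(5, n_word)):              # five placeholder word vectors
    hidden, interpretation = srn_step(word_vec, hidden)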
Liquid state machines have been engineered so that their dynamics hover near the “edge of chaos” [1], [2], where memory and representational capacity of the liquid were shown to be optimized. Previous work found the critical line between ordered and chaotic dynamics for threshold gates by using an analytic method similar to finding Lyapunov …
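A rough numerical sketch of the order/chaos distinction for threshold-gate networks, not the analytic method used in the paper: run two copies of a randomly wired network that differ in a single gate and measure whether the difference dies out (ordered regime) or spreads (chaotic regime). All sizes and wiring here are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical network: N threshold gates, each driven by K random inputs with Gaussian weights.
N, K, T = 200, 4, 50
inputs  = rng.integers(0, N, size=(N, K))
weights = rng.normal(size=(N, K))

def step(state):
    """One synchronous update: each gate outputs 1 if its weighted input sum is positive."""
    return (np.einsum('nk,nk->n', weights, state[inputs]) > 0).astype(int)

# Two copies differing in one gate; track how the perturbation evolves.
state_a = rng.integers(0, 2, size=N)
state_b = state_a.copy()
state_b[0] ^= 1                       # one-bit perturbation

for _ in range(T):
    state_a, state_b = step(state_a), step(state_b)

damage = np.mean(state_a != state_b)  # near 0: ordered; large: chaotic
print(f"Normalized Hamming distance after {T} steps: {damage:.3f}")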
Subsymbolic systems have been successfully used to model several aspects of human language processing. Such parsers are appealing because they allow revising the interpretation as words are incrementally processed. Yet, it has been very hard to scale them up to realistic language due to training time, limited memory, and the difficulty of representing …
A model for lexical disambiguation is presented that is based on combining the frequencies of past contexts of ambiguous words. The frequencies are encoded in the word representations and define the words' semantics. A Simple Recurrent Network (SRN) parser combines the context frequencies one word at a time, always producing the most likely interpretation of …
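A simplified, non-recurrent illustration of the frequency-combination idea, with invented sense labels, context features, and counts rather than the paper's trained representations: each sense of an ambiguous word stores how often it occurred with each context feature, and disambiguation picks the sense whose stored profile best matches the observed context.

import numpy as np

# Hypothetical context features and past-context frequencies for the ambiguous word "bank".
features = ["river", "water", "money", "loan"]
sense_context_freqs = {
    "bank/shore":       np.array([40, 35,  2,  1], dtype=float),
    "bank/institution": np.array([ 1,  2, 50, 45], dtype=float),
}

def disambiguate(context_counts):
    """Score each sense by the overlap between its normalized context-frequency
    profile and the observed context counts; return the best sense and all scores."""
    scores = {
        sense: float(freqs / freqs.sum() @ context_counts)
        for sense, freqs in sense_context_freqs.items()
    }
    return max(scores, key=scores.get), scores

# Current sentence mentions "money" and "loan" once each.
context = np.array([0, 0, 1, 1], dtype=float)
best, scores = disambiguate(context)
print(best, scores)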
Subsymbolic systems have been successfully used to model several aspects of human language processing. Yet, it has proven difficult to scale them up to realistic language. They have limited memory capacity, long training times, and difficulty representing the wealth of linguistic structure. In this paper, a new connectionist model, InSomNet, is presented …