Stephen McCullough

A [¹⁵O]water PET experiment was conducted to investigate the neural regions engaged in processing constructions unique to signed languages: classifier predicates in which the position of the hands in signing space schematically represents spatial relations among objects. Ten deaf native signers viewed line drawings depicting a spatial relation between two …
Positron emission tomography was used to investigate whether the motor-iconic basis of certain forms in American Sign Language (ASL) partially alters the neural systems engaged during lexical retrieval. Most ASL nouns denoting tools and ASL verbs referring to tool-based actions are produced with a handshape representing the human hand holding a tool and …
Rather than specifying spatial relations with a closed-class set of prepositions, American Sign Language (ASL) encodes spatial relations using space itself via classifier constructions. In these constructions, handshape morphemes specify object type, and the position of the hands in signing space schematically represents the spatial relation between …
Recognition of emotional facial expressions is universal among humans, but signed language users must also recognize certain non-affective facial expressions as linguistic markers. fMRI was used to investigate the neural systems underlying recognition of these functionally distinct expressions, comparing deaf ASL signers and hearing nonsigners. Within the …
Positron emission tomography was used to investigate whether signed languages exhibit the same neural organization for lexical retrieval within classical and non-classical language areas as has been described for spoken English. Ten deaf native American Sign Language (ASL) signers were shown pictures of unique entities (famous persons) and non-unique …
Bimodal bilinguals are hearing individuals who know both a signed and a spoken language. Effects of bimodal bilingualism on behavior and brain organization are reviewed, and an fMRI investigation of the recognition of facial expressions by ASL-English bilinguals is reported. The fMRI results reveal separate effects of sign language and spoken language …
Can linguistic semantics affect neural processing in feature-specific visual regions? Specifically, when we hear a sentence describing a situation that includes motion, do we engage neural processes that are part of the visual perception of motion? And what if the motion verb is used figuratively rather than literally? We used fMRI to investigate whether semantic …
We examined word-level reading circuits in skilled deaf readers whose primary language is American Sign Language, and hearing readers matched for reading ability (college level). During fMRI scanning, participants performed a semantic decision (concrete concept?), a phonological decision (two syllables?), and a false-font control task (string underlined?). …
Previous research indicates that motion-sensitive brain regions are engaged when comprehending motion semantics expressed by words or sentences. Using fMRI, we investigated whether such neural modulation can occur when the linguistic signal itself is visually dynamic and motion semantics is expressed by movements of the hands. Deaf and hearing users of …