Brianna L. Conrey

Previous research has identified a "synchrony window" of several hundred milliseconds over which auditory-visual (AV) asynchronies are not reliably perceived. Individual variability in the size of this AV synchrony window has been linked with variability in AV speech perception measures, but it was not clear whether AV speech perception measures are related …

Two experiments were conducted to examine the temporal limitations on the detection of asynchrony in auditory-visual (AV) signals. Each participant made asynchrony judgments about speech and nonspeech signals presented over an 800-ms range of AV onset asynchronies. Consistent with previous findings, all conditions revealed a wide window of several hundred …

This study examined how correlated, or filtered, noise affected efficiency for recognizing two types of signal patterns, Gabor patches and three-dimensional objects. In general, compared with the ideal observer, human observers were most efficient at performing tasks in low-pass noise, followed by white noise; they were least efficient in high-pass noise. …

The identification of the gender of an unfamiliar talker is an easy and automatic process for naïve adult listeners. Sociolinguistic research has consistently revealed gender differences in the production of linguistic variables. Research on the perception of dialect variation, however, has been limited almost exclusively to male talkers. In the present …

Native speakers of a language are often unable to consciously perceive, and have altered neural responses to, phonemic contrasts not present in their language. This study examined whether speakers of dialects of the same language with different phoneme inventories also show measurably different neural responses to contrasts not present in their dialect. …

This study investigated the "intersensory temporal synchrony window" [1] for audiovisual (AV) signals. A speeded asynchrony detection task was used to measure each participant's temporal synchrony window for speech and nonspeech signals over an 800-ms range of AV asynchronies. Across three sets of stimuli, the video-leading threshold for asynchrony …

Normal-hearing observers typically have some ability to "lipread," or understand visual-only speech without an accompanying auditory signal. However, talkers vary in how easy they are to lipread. Such variability could arise from differences in the visual information available in talkers' speech, human perceptual strategies that are better suited to some …

The ability to perceive and understand visual-only speech and the benefit experienced from having both auditory and visual signals available during speech perception tasks varies widely in the normal-hearing population. At the present time, little is known about the underlying neural mechanisms responsible for this variability or the possible relationships …

Measures of immediate memory span were obtained from 25 normal-hearing adults who listened to an 8-channel, frequency-shifted acoustic simulation of a cochlear implant. A short period of digit identification training was conducted before forward and backward digit spans were obtained under both processed and unprocessed conditions. As expected, forward and …