Joel MacAuslan

Artificial larynges provide a means of verbal communication for people who have lost their larynges or are otherwise unable to use them. Although they enable adequate communication, the resulting speech has an unnatural quality and is significantly less intelligible than normal speech. One of the major problems with the widely used Transcutaneous …
Harriet J. Fell, College of Computer Science, Northeastern University, Boston, Massachusetts 02115, USA; Tel: +1-617-373-2198; fell@ccs.neu.edu
Joel MacAuslan and Karen Chenausky, Speech Technology and Applied Research, Lexington, Massachusetts 02173, USA; Tel: +1-781-863-0310; starcorp@ix.netcom.com
Linda J. Ferrier, Department of Speech Language Pathology and Audiology …
According to the U.S. National Institutes of Health, approximately 500,000 Americans have Parkinson's disease (PD), with roughly another 50,000 receiving new diagnoses each year. 70%–90% of these people also have the hypokinetic dysarthria associated with PD. Deep brain stimulation (DBS) substantially relieves motor symptoms in advanced-stage patients for …
The visiBabble system processes infant vocalizations in real-time. It responds to the infant's syllable-like productions with brightly colored animations and records the acoustic-phonetic analysis. The system reinforces the production of syllabic utterances that are associated with later language and cognitive development. We report here on the development …
In electrolaryngeal speech, the excitation signal is provided by a buzzer held against the neck, usually operated at a constant frequency. While such Transcutaneous Artificial Larynges (TALs) provide a means of verbal communication for people who are unable to use their own larynges, the monotone F0 pattern results in poor speech quality. In …
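The monotone excitation described in this abstract can be illustrated with a short sketch. This is not the authors' implementation — just a minimal, hypothetical model of a constant-rate buzzer as an impulse train at a single F0, which is exactly why unprocessed TAL speech carries no pitch variation:

```python
import numpy as np

def pulse_train(f0_hz, duration_s, sample_rate=16000):
    """Constant-rate impulse train, like a TAL buzzer held at one frequency.

    Every pitch period is identical, so the resulting excitation (and the
    speech it drives) is monotone.
    """
    n = int(duration_s * sample_rate)
    signal = np.zeros(n)
    period = int(round(sample_rate / f0_hz))  # samples per pitch period
    signal[::period] = 1.0                    # one impulse per period
    return signal

# Half a second of buzz at a fixed 100 Hz: 50 identical pitch periods.
buzz = pulse_train(f0_hz=100, duration_s=0.5)
```

A natural-sounding source would instead vary the inter-pulse spacing over time to follow an intonation contour.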
Understanding the difference between emotions based on acoustic features is important for computer recognition and classification of emotions. We conducted a study of human perception of six emotions based on three perceptual dimensions and compared the human classification with machine classification based on many acoustic parameters. Results show that the …
We have developed software based on the Stevens landmark theory to extract features from utterances in and adjacent to voiced regions. We then apply two statistical methods, closest-match (CM) and principal components analysis (PCA), to these features to classify utterances according to their emotional content. Using a subset of samples from the Actual Stress …
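The PCA-plus-closest-match pipeline can be sketched roughly as follows. This is a toy illustration, not the authors' software: the feature vectors are synthetic stand-ins for acoustic landmark features, the two class labels are hypothetical, and closest-match is modeled here as a nearest-neighbor rule in the PCA-projected space:

```python
import numpy as np

def pca_fit(X, n_components=2):
    """Fit PCA by SVD on mean-centered feature vectors."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def pca_project(X, mean, components):
    """Project feature vectors onto the principal components."""
    return (X - mean) @ components.T

def closest_match(train_proj, train_labels, sample_proj):
    """Label each sample with the class of its nearest training utterance."""
    dists = np.linalg.norm(train_proj[None, :, :] - sample_proj[:, None, :], axis=2)
    return train_labels[np.argmin(dists, axis=1)]

# Toy data: two well-separated "emotion" clusters in a 5-dim feature space.
rng = np.random.default_rng(0)
calm = rng.normal(0.0, 0.3, size=(20, 5))
stressed = rng.normal(2.0, 0.3, size=(20, 5))
X = np.vstack([calm, stressed])
y = np.array([0] * 20 + [1] * 20)  # 0 = calm, 1 = stressed

mean, comps = pca_fit(X, n_components=2)
train_proj = pca_project(X, mean, comps)
new_samples = pca_project(rng.normal(2.0, 0.3, size=(3, 5)), mean, comps)
pred = closest_match(train_proj, y, new_samples)  # all drawn from the "stressed" cluster
```

Projecting onto a few principal components before matching reduces dimensionality while keeping the directions of greatest feature variance, which is the usual motivation for combining PCA with a simple matching rule.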
The visiBabble system processes infant vocalizations in real-time. It responds to the infant's syllable-like productions with brightly colored animations and auditory feedback. It saves an audio recording and its acoustic-phonetic analysis. The system reinforces the production of syllabic utterances that are associated with later language and cognitive …