James Carmichael

We describe an unusual ASR application: recognition of command words from severely dysarthric speakers, who have poor control of their articulators. The goal is to allow these clients to control assistive technology by voice. While this is a small-vocabulary, speaker-dependent, isolated-word application, the speech material is more variable than normal, and …
Computer-based speech training systems aim to provide the client with customised tools for improving articulation based on audiovisual stimuli and feedback. They require the integration of various components of speech technology, such as speech recognition and transcription tools, and a database management system which supports multiple on-the-fly …
This study reports on the development of an automated isolated-word intelligibility metric system designed to improve the scoring consistency and reliability of the Frenchay Dysarthria Assessment Test (FDA). The proposed intelligibility measurements are based on the probabilistic likelihood scores derived from the forced alignment of the dysarthric speech to …
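The abstract above describes deriving intelligibility measurements from forced-alignment likelihood scores. A minimal sketch of one way such scores could be mapped onto a bounded intelligibility scale is shown below; the function name, the z-score normalisation against unimpaired reference speakers, and the logistic squashing are all illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def intelligibility_score(frame_loglikes, reference_loglikes):
    """Map mean per-frame log-likelihoods from forced alignment onto a
    0-100 intelligibility scale (hypothetical sketch).

    frame_loglikes: per-frame log-likelihoods for the test utterance.
    reference_loglikes: pooled per-frame log-likelihoods from unimpaired
    reference speakers, used to normalise the test score.
    """
    score = np.mean(frame_loglikes)
    ref_mean = np.mean(reference_loglikes)
    ref_std = np.std(reference_loglikes) or 1.0  # guard against zero spread
    z = (score - ref_mean) / ref_std
    # Squash the z-score into [0, 100]: a speaker matching the reference
    # mean scores 50; higher likelihoods score closer to 100.
    return float(100.0 / (1.0 + np.exp(-z)))
```

A speaker whose alignment likelihoods match the reference mean lands at the midpoint of the scale, which gives the metric an interpretable anchor.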
Automatic speech recognition (ASR) can provide a rapid means of controlling electronic assistive technology (EAT). Off-the-shelf ASR systems function poorly for users with severe dysarthria because of the increased variability of their articulations compared to 'normal' speech. A two-pronged approach has been applied to this problem: 1. To develop a computerised training package which will …
This paper describes a new formulation of a polynomial sequence kernel based on dynamic time warping (DTW) for support vector machine (SVM) classification of isolated words given very sparse training data. The words are uttered by dysarthric speakers who suffer from debilitating neurological conditions that make the collection of speech samples a …
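The abstract above combines DTW with a polynomial kernel so that an SVM can compare variable-length utterances. A generic sketch of the idea is given below: DTW aligns two feature sequences of different lengths, and a kernel value is then computed from the resulting distance. The specific polynomial form shown (and the parameter names `gamma`, `degree`) are illustrative assumptions, not the paper's exact kernel.

```python
import numpy as np

def dtw_distance(x, y):
    """Classic DTW distance between two sequences of feature vectors
    (shapes n x d and m x d), using Euclidean frame-level cost."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(x[i - 1] - y[j - 1])
            # Step pattern: match, insertion, deletion.
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return D[n, m]

def dtw_poly_kernel(x, y, gamma=0.1, degree=2):
    """Hypothetical polynomial kernel on a DTW-derived similarity:
    identical sequences give the maximum value, dissimilar ones decay
    towards 1. Symmetric, so it can be precomputed as a Gram matrix
    and passed to an SVM with a precomputed kernel."""
    similarity = np.exp(-dtw_distance(x, y))
    return (1.0 + gamma * similarity) ** degree
```

In practice one would precompute the kernel matrix over all training utterances and pass it to an SVM implementation that accepts precomputed kernels; this is one standard way to use sequence kernels with very small training sets.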
This paper describes a multimedia multimodal information access subsystem (MIAS) for digital audiovisual documents, typically presented in streaming media format. The system is designed to provide both professional and general users with entry points into video documents that are relevant to their information needs. In this work, we focus on the information …
This study discusses the findings of an evaluation study on the performance of a multimedia multimodal information access subsystem (MIAS), incorporating automatic speech recognition (ASR) technology to automatically transcribe the speech content of video soundtracks. The study's results indicate that an information-rich but minimalist graphical interface …
This report discusses the implementation of a computerised application, the Computerised Frenchay Dysarthria Assessment Procedure (CFDA), which uses digital signal processing (DSP) techniques to objectively evaluate digitised speech recordings in order to detect any symptoms of dysarthria (a type of motor speech disorder). This investigation focuses …
This study reports on the performance of a computerised digital signal processing system, known as the Computerised Frenchay Dysarthria Assessment (CFDA), which is designed to diagnose two sub-types of dysarthria – a family of speech disorders characterised by loss of control over the organs which facilitate speech production. This investigation explores …