Multimodal user interfaces (MMUIs) allow users to control computers using speech and gesture, and have the potential to minimise users' experienced cognitive load, especially when performing complex tasks. In this paper, we describe our attempt to use a physiological measure, namely Galvanic Skin Response (GSR), to objectively evaluate users' stress and …
Extensive research over the past several years has been devoted to robustness in the presence of various types and degrees of environmental noise; however, this remains one of the main problems facing automatic speech recognition systems. This paper describes a new variable frame rate analysis technique, based upon searching a predefined lookahead interval …
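The abstract does not detail the lookahead search itself, but the general idea behind variable frame rate analysis — retaining frames only where the spectrum is changing, and dropping frames in steady regions — can be sketched roughly as follows. The `threshold` parameter and the cumulative-distance criterion are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

def variable_frame_rate_select(features, threshold=1.0):
    """Keep a frame once the accumulated spectral change since the last
    retained frame exceeds `threshold` (a generic VFR criterion; the
    paper's lookahead-interval search is not reproduced here)."""
    kept = [0]                     # always keep the first frame
    accum = 0.0
    for t in range(1, len(features)):
        # accumulate frame-to-frame feature distance
        accum += np.linalg.norm(features[t] - features[t - 1])
        if accum >= threshold:     # enough change: retain this frame
            kept.append(t)
            accum = 0.0
    return kept
```

Steady-state stretches of speech thus contribute few frames, while transient regions are sampled densely, which is the usual motivation for variable frame rate front-ends.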
This paper describes a novel noise-robust automatic speech recognition (ASR) front-end that employs a combination of Mel-filterbank output compensation and cumulative distribution mapping of cepstral coefficients with a truncated Gaussian distribution. Recognition experiments on the Aurora II connected digits database reveal that the proposed front-end …
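Cumulative distribution mapping (histogram equalisation) transforms each cepstral coefficient so that its empirical distribution matches a chosen target — here a truncated Gaussian. A minimal sketch of that general technique, assuming a zero-mean unit-variance target truncated at ±3 standard deviations (the paper's actual parameters are not given in the abstract):

```python
import numpy as np
from statistics import NormalDist

def cdf_map_truncated_gaussian(x, trunc=3.0):
    """Map a 1-D array of cepstral coefficients onto a standard Gaussian
    truncated at +/- `trunc`, via empirical-CDF matching. Illustrative
    sketch only; `trunc` is an assumed parameter."""
    n = len(x)
    ranks = np.argsort(np.argsort(x))     # rank 0..n-1 of each sample
    u = (ranks + 0.5) / n                 # empirical CDF in (0, 1)
    nd = NormalDist()
    # squeeze u into the CDF range covered by the truncated Gaussian,
    # so the inverse CDF never leaves [-trunc, +trunc]
    lo, hi = nd.cdf(-trunc), nd.cdf(trunc)
    return np.array([nd.inv_cdf(lo + ui * (hi - lo)) for ui in u])
```

Because the mapping is rank-based it is monotone: the ordering of coefficient values is preserved while their distribution is normalised, which is what makes the method robust to noise-induced distribution shifts.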
Copper toxicity contributes to neuronal death in Wilson's disease and has been speculatively linked to the pathogenesis of Alzheimer's and prion diseases. We examined copper-induced neuronal death with the goal of developing neuroprotective strategies. Copper catalyzed an increase in hydroxyl radical generation in solution, and the addition of 20 microM …
The distinctive cellular and mitochondrial dysfunctions that distinguish two human lung cancer cell lines (H460 and HCC1588) from two normal human lung cell lines (MRC5 and L132) have been studied under dielectric barrier discharge (DBD) plasma treatment. This cytotoxicity is exposure-time-dependent and is strongly mediated by the large amounts of H2O2 and NOx in the culture …
Multimodal interfaces are known to be useful in map-based applications and in complex, time-pressured tasks. Cognitive load variations in such tasks have been found to affect multimodal behaviour: for example, users become more multimodal and tend towards semantic complementarity as cognitive load increases. The richness of multimodal data means that …
Speech is a promising modality for the convenient measurement of cognitive load, and recent years have seen the development of several cognitive load classification systems. Many of these systems have utilised mel frequency cepstral coefficients (MFCCs) and prosodic features such as pitch and intensity to discriminate between different cognitive load levels. …
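The MFCC features mentioned above follow a standard pipeline: pre-emphasis, framing and windowing, a power spectrum, a mel-scale filterbank, a log, and a DCT to decorrelate. A minimal numpy sketch of that common pipeline (the frame sizes and filter counts are typical defaults, not the cited systems' settings):

```python
import numpy as np

def mfcc(signal, sr=16000, n_fft=512, n_mels=26, n_ceps=13,
         frame_len=0.025, frame_step=0.010):
    """Minimal MFCC extractor; returns (n_frames, n_ceps)."""
    # pre-emphasis boosts high frequencies
    sig = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    # split into overlapping Hamming-windowed frames
    flen, fstep = int(sr * frame_len), int(sr * frame_step)
    n_frames = 1 + max(0, (len(sig) - flen) // fstep)
    idx = np.arange(flen)[None, :] + fstep * np.arange(n_frames)[:, None]
    frames = sig[idx] * np.hamming(flen)
    # power spectrum of each frame
    pspec = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # triangular mel-scale filterbank
    def hz2mel(f): return 2595 * np.log10(1 + f / 700)
    def mel2hz(m): return 700 * (10 ** (m / 2595) - 1)
    mel_pts = np.linspace(hz2mel(0), hz2mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel2hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    logmel = np.log(pspec @ fbank.T + 1e-10)
    # DCT-II decorrelates log-mel energies into cepstral coefficients
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps),
                                  (2 * n + 1) / (2 * n_mels)))
    return logmel @ dct.T
```

Cognitive load classifiers then typically model sequences of such per-frame vectors, often augmented with the prosodic features (pitch, intensity) the abstract mentions.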