This paper introduces the new OLdenburg LOgatome speech corpus (OLLO) and outlines design considerations during its creation. OLLO is distinct from previous ASR corpora as it specifically targets (1) the fair comparison between human and machine speech recognition performance, and (2) the realistic representation of intrinsic variabilities in speech that …
Independent component analysis (ICA) has proven useful for modeling brain and electroencephalographic (EEG) data. Here, we present a new, generalized method to better capture the dynamics of brain signals than previous ICA algorithms. We regard EEG sources as eliciting spatio-temporal activity patterns, corresponding to, e.g., trajectories of activation …
Blind source separation is commonly based on maximizing measures related to independence of estimated sources, such as mutual statistical independence assuming non-Gaussian distributions, decorrelation at different time-lags assuming spectral differences, or decorrelation assuming source non-stationarity. Here, the use of an alternative model for source …
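The first criterion mentioned above (mutual statistical independence under non-Gaussian source distributions) is what standard instantaneous ICA maximizes. As an illustration only, and not the paper's own algorithm, the following sketch uses scikit-learn's FastICA to unmix two synthetic non-Gaussian sources from their linear mixtures:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
# Two non-Gaussian sources: a sine and a square wave
s = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]
A = np.array([[1.0, 0.5], [0.5, 1.0]])  # instantaneous mixing matrix
x = s @ A.T                             # observed mixtures

# Recover the sources by maximizing non-Gaussianity (up to sign/scale/order)
ica = FastICA(n_components=2, random_state=0)
s_est = ica.fit_transform(x)
```

Each recovered column should correlate strongly (in absolute value) with one true source; sign, scale, and ordering remain ambiguous, which is the usual ICA indeterminacy.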
Blind source separation represents a signal processing technique with a large potential for noise reduction. However, its application in modern digital hearing aids poses high demands with respect to computational efficiency and speed of adaptation towards the desired solution. In this paper, an algorithm is presented which fulfills these goals under the idealized …
Independent component analysis (ICA) of functional magnetic resonance imaging (fMRI) data is commonly carried out under the assumption that each source may be represented as a spatially fixed pattern of activation, which leads to the instantaneous mixing model. To allow modeling patterns of spatio-temporal dynamics, in particular, the flow of oxygenated …
Robust detection of speech embedded in real acoustic background noise is considered using an approach based on sub-band amplitude modulation spectral (AMS) features and trained discriminative classifiers. Performance is evaluated in particular for situations in which speech is embedded in acoustic backgrounds not presented during classifier training, and …
In this contribution we present a feature extraction method that relies on the modulation-spectral analysis of amplitude fluctuations within sub-bands of the acoustic spectrum by an STFT. The experimental results indicate that the optimal temporal filter extension for amplitude modulation analysis is around 310 ms. It is also demonstrated that the phase …
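The processing chain described here (sub-band filtering, envelope extraction, then an STFT of the envelope) can be sketched as follows. This is a minimal illustration with an assumed toy signal and filter design, not the paper's exact front end; the ~310 ms analysis window reflects the optimum the abstract reports:

```python
import numpy as np
from scipy.signal import stft, hilbert, butter, sosfiltfilt

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
# Toy signal: a 1 kHz carrier amplitude-modulated at 4 Hz (syllable-rate modulation)
x = (1 + 0.8 * np.cos(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 1000 * t)

# Band-pass one acoustic sub-band around the carrier (assumed 4th-order Butterworth)
sos = butter(4, [800, 1200], btype="bandpass", fs=fs, output="sos")
band = sosfiltfilt(sos, x)

# Amplitude envelope of the sub-band, then an STFT of the (mean-removed) envelope
env = np.abs(hilbert(band))
nwin = int(0.31 * fs)  # ~310 ms temporal analysis window
f_mod, t_mod, Z = stft(env - env.mean(), fs=fs, nperseg=nwin)
ams = np.abs(Z)  # amplitude modulation spectrogram of this sub-band

# The dominant modulation frequency should land near the 4 Hz imposed above
peak_mod_freq = f_mod[ams.mean(axis=1).argmax()]
```

Repeating this over a bank of acoustic sub-bands yields a two-dimensional acoustic-frequency × modulation-frequency representation per frame.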
A classification method is presented that detects the presence of speech embedded in a real acoustic background of non-speech sounds. Features used for classification are modulation components extracted by computation of the amplitude modulation spectrogram. Feature selection techniques and support vector classification are employed to identify modulation …
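The combination named in this abstract (feature selection followed by a support vector classifier) can be sketched with scikit-learn. The data below are a random stand-in for real modulation features, and the univariate F-test selector is an assumed choice, not necessarily the selection technique the paper used:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 400
# Stand-in for modulation features: 40 dimensions, only the first 5 informative
X = rng.normal(size=(n, 40))
y = (X[:, :5].sum(axis=1) > 0).astype(int)  # binary speech / non-speech label

# Select the k most discriminative components, then classify with an RBF-kernel SVM
clf = make_pipeline(SelectKBest(f_classif, k=5), SVC(kernel="rbf"))
acc = cross_val_score(clf, X, y, cv=5).mean()
```

Placing the selector inside the pipeline ensures it is refit on each training fold, so the cross-validated accuracy is not inflated by selecting features on the test data.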