Real-time recurrent learning (RTRL), commonly employed for training a fully connected recurrent neural network (RNN), has the drawback of a slow convergence rate. Because of this deficiency, a decision feedback recurrent neural equalizer (DFRNE) trained with RTRL requires long training sequences to achieve good performance. In this paper, extended Kalman …
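The RTRL recursion itself is compact to state, even though its cost and slow convergence are what motivate alternatives such as Kalman-filter-based training. Below is a minimal NumPy sketch of an online RTRL update for a small fully connected RNN with a linear readout; the network sizes, data, and learning rate are hypothetical placeholders and are not taken from the paper.

```python
# Minimal sketch of real-time recurrent learning (RTRL) for a small RNN.
# Sensitivities of the hidden state with respect to the recurrent weights
# are carried forward in time, so the gradient is available at every step.
# All dimensions and signals below are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 2, 5, 1
lr = 0.01

W = rng.normal(0, 0.1, (n_hid, n_hid))   # recurrent weights
U = rng.normal(0, 0.1, (n_hid, n_in))    # input weights
V = rng.normal(0, 0.1, (n_out, n_hid))   # readout weights

h = np.zeros(n_hid)
# Sensitivity tensor S[k, i, j] = d h[k] / d W[i, j], updated online.
S = np.zeros((n_hid, n_hid, n_hid))

for t in range(200):
    x = rng.normal(size=n_in)            # stand-in input sample
    d = np.array([np.sin(0.1 * t)])      # stand-in target signal

    a = W @ h + U @ x                    # pre-activation
    h_new = np.tanh(a)
    y = V @ h_new
    err = y - d

    # RTRL recursion: dA[k, i, j] = d a[k] / d W[i, j]
    dA = np.einsum('km,mij->kij', W, S)
    for i in range(n_hid):
        dA[i, i, :] += h                 # direct dependence of a[i] on W[i, :]
    S = (1.0 - h_new ** 2)[:, None, None] * dA

    # Instantaneous gradients and online updates.
    dL_dh = V.T @ err                    # d loss / d h_new
    grad_W = np.einsum('k,kij->ij', dL_dh, S)
    grad_V = np.outer(err, h_new)
    W -= lr * grad_W
    V -= lr * grad_V
    h = h_new
```

Carrying the full sensitivity tensor S forward is what makes RTRL a purely online method, but the plain gradient-descent update on top of it converges slowly, which is the deficiency that extended-Kalman-filter-style training is meant to address.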
This paper presents an approach to enhance speech feature estimation in the log spectral domain in noisy environments. A higher-order switching linear dynamic model (SLDM) is explored as a parametric model for the clean speech distribution; it enforces a state transition in the feature space and captures the smooth time evolution of speech conditioned …
A closed-loop or recurrent neural network was taught to generate output discharges that reproduce the prototypical activations in agonist and antagonist muscles producing the displacement of a limb about a single joint. When a generalized decrease in the excitability of the network's pre-output layer was introduced, the network made the displacement more …
This letter presents a new approach to enhance speech feature estimation in the log spectral domain in noisy environments. A mixture of linear dynamic models, with an architecture similar to the so-called mixture of experts (ME), is investigated to describe the clean speech feature distribution parametrically. Switching Kalman filters are adapted to the …
This paper presents an approach to enhance speech feature estimation in the log spectral domain in additive noise environments. A switching linear dynamic model (SLDM) is explored as a parametric model for the clean speech distribution, enforcing a state transition in the feature space and capturing the smooth time evolution of speech conditioned on …
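To make the SLDM idea concrete, here is a minimal sketch of switching Kalman filtering over a small bank of linear dynamic models, in the spirit of the abstracts above. It assumes, for simplicity, a linear observation model (noisy frame = clean frame + Gaussian noise) and a GPB(1)-style collapse to a single Gaussian per frame; the matrices, transition probabilities, and data are illustrative placeholders, and the papers' actual noise model and inference details may differ.

```python
# Sketch of a switching linear dynamic model (SLDM) tracked with switching
# Kalman filters. Each discrete state s owns one linear dynamic model
# x_t = A[s] x_{t-1} + b[s] + w_t for the clean log-spectral feature vector.
# Observation model here is a simplifying assumption: y_t = x_t + v_t.
import numpy as np

rng = np.random.default_rng(1)
dim, n_states = 4, 3                      # feature dimension, switching states

A = [np.eye(dim) * rng.uniform(0.8, 1.0) for _ in range(n_states)]
b = [rng.normal(0, 0.1, dim) for _ in range(n_states)]
Q = [np.eye(dim) * 0.05 for _ in range(n_states)]   # process noise per state
R = np.eye(dim) * 0.2                                # observation noise
Pi = np.full((n_states, n_states), 1.0 / n_states)   # state transition matrix

def gauss_loglik(y, mean, cov):
    diff = y - mean
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (diff @ np.linalg.solve(cov, diff) + logdet + dim * np.log(2 * np.pi))

# Filter a short sequence of (synthetic) noisy log-spectral frames.
x_est, P_est = np.zeros(dim), np.eye(dim)
w = np.full(n_states, 1.0 / n_states)     # posterior over switching states
for t in range(50):
    y = rng.normal(0, 0.5, dim)           # stand-in observed noisy frame

    means, covs, logliks = [], [], []
    for s in range(n_states):
        # Kalman predict under model s.
        x_pred = A[s] @ x_est + b[s]
        P_pred = A[s] @ P_est @ A[s].T + Q[s]
        # Kalman update with identity observation model.
        S_cov = P_pred + R
        K = P_pred @ np.linalg.inv(S_cov)
        means.append(x_pred + K @ (y - x_pred))
        covs.append((np.eye(dim) - K) @ P_pred)
        logliks.append(gauss_loglik(y, x_pred, S_cov))

    # Reweight switching states by predicted probability times likelihood.
    log_w = np.log(Pi.T @ w) + np.array(logliks)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()

    # Collapse the per-state posteriors to a single Gaussian (moment matching).
    x_est = sum(w[s] * means[s] for s in range(n_states))
    P_est = sum(w[s] * (covs[s] + np.outer(means[s] - x_est, means[s] - x_est))
                for s in range(n_states))
```

Collapsing the per-state posteriors after every frame keeps the filter bank tractable; exact inference over all switching histories would grow exponentially with the sequence length.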