Tet Hin Yeap

A closed-loop (recurrent) neural network was taught to generate output discharges that reproduce the prototypical activations of agonist and antagonist muscles producing the displacement of a limb about a single joint. By introducing a generalized decrease in the excitability of the network's pre-output layer, the network made the displacement more …
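The sketch below gives a rough feel for the manipulation the abstract describes: a small recurrent network drives agonist/antagonist outputs, and a single gain factor stands in for a generalized decrease in pre-output excitability. The architecture, weights, and parameter values are hypothetical illustrations, not the paper's trained network.

```python
# Minimal sketch (hypothetical, not the paper's network): a tiny recurrent
# network producing reciprocal agonist/antagonist bursts, with a gain factor
# modelling a generalized decrease in pre-output excitability.
import numpy as np

rng = np.random.default_rng(2)
T, H = 100, 16                          # time steps, pre-output units

W_in  = rng.normal(0, 0.5, (H, 1))      # drive -> pre-output layer
W_rec = rng.normal(0, 0.3, (H, H))      # recurrent pre-output connections
W_out = rng.normal(0, 0.5, (2, H))      # pre-output -> agonist/antagonist

def run(excitability=1.0):
    """Simulate the network; excitability < 1 scales pre-output activation."""
    h = np.zeros(H)
    drive = np.zeros((T, 1)); drive[10:30] = 1.0   # step command for a movement
    out = np.zeros((T, 2))
    for t in range(T):
        h = np.tanh(excitability * (W_in @ drive[t] + W_rec @ h))
        out[t] = np.maximum(W_out @ h, 0.0)        # agonist, antagonist activity
    return out

normal = run(1.0)
reduced = run(0.5)                      # generalized decrease in excitability
print(normal.max(axis=0), reduced.max(axis=0))
```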
Real-time recurrent learning (RTRL), commonly employed for training a fully connected recurrent neural network (RNN), suffers from a slow convergence rate. In light of this deficiency, a decision feedback recurrent neural equalizer (DFRNE) trained with RTRL requires long training sequences to achieve good performance. In this paper, extended Kalman …
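To make the extended-Kalman-style alternative concrete, here is a toy sketch of EKF weight updates for a single recurrent neuron acting as a channel equalizer. The channel, the neuron structure, and all parameter values are hypothetical, and the Jacobian is truncated to the local term (the full RTRL sensitivity recursion is omitted); this is not the paper's DFRNE.

```python
# Toy sketch: extended Kalman filter (EKF) weight updates for a one-neuron
# recurrent equalizer. Everything here is a hypothetical illustration.
import numpy as np

rng = np.random.default_rng(0)

def channel(symbols, h=(1.0, 0.5), noise_std=0.1):
    """Toy linear channel with memory plus additive noise."""
    clean = np.convolve(symbols, h)[: len(symbols)]
    return clean + noise_std * rng.standard_normal(len(symbols))

symbols = rng.choice([-1.0, 1.0], size=2000)   # BPSK training sequence
received = channel(symbols)

# Recurrent neuron: y_t = tanh(w0*x_t + w1*x_{t-1} + w2*y_{t-1})
w = np.zeros(3)                 # weights treated as the EKF "state"
P = np.eye(3) * 100.0           # weight covariance
Q = np.eye(3) * 1e-4            # process noise (keeps the filter adaptive)
R = 0.1                         # measurement noise variance

y_prev, x_prev = 0.0, 0.0
for x_t, d_t in zip(received, symbols):
    u = np.array([x_t, x_prev, y_prev])   # inputs to the neuron
    y_t = np.tanh(w @ u)
    H = (1.0 - y_t**2) * u                # local Jacobian dy/dw (truncated)
    S = H @ P @ H + R                     # innovation variance (scalar)
    K = P @ H / S                         # Kalman gain
    w = w + K * (d_t - y_t)               # move weights toward the target symbol
    P = P - np.outer(K, H @ P) + Q        # covariance update
    y_prev, x_prev = y_t, x_t

# Re-run the trained neuron over the sequence and count decision errors.
errors = 0
y_prev, x_prev = 0.0, 0.0
for x_t, d_t in zip(received, symbols):
    y_t = np.tanh(w @ np.array([x_t, x_prev, y_prev]))
    errors += int(np.sign(y_t) != d_t)
    y_prev, x_prev = y_t, x_t
print("symbol error rate:", errors / len(symbols))
```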
This paper presents an approach to enhance speech feature estimation in the log-spectral domain under additive noise environments. A switching linear dynamic model (SLDM) is explored as a parametric model of the clean speech distribution, enforcing a state transition in the feature space and capturing the smooth time evolution of speech conditioned on …
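The generative structure of such a model can be sketched briefly: a hidden discrete switching state selects the linear dynamics applied to the continuous feature vector at each frame. The dimensions, transition probabilities, and dynamics below are hypothetical toy values, and the inference/enhancement side is not shown.

```python
# Minimal sketch of a switching linear dynamic model (SLDM) over log-spectral
# features: x_t = A_s x_{t-1} + b_s + w_t, with discrete switching state s_t.
# All parameters are hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(1)

D = 4                                   # feature dimension (toy)
S = 2                                   # number of switching states
Pi = np.array([[0.95, 0.05],            # switching-state transition matrix
               [0.10, 0.90]])
A = np.stack([0.9 * np.eye(D),          # per-state linear dynamics
              0.6 * np.eye(D)])
b = np.stack([np.zeros(D), 0.5 * np.ones(D)])
Qc = 0.05 * np.eye(D)                   # process-noise covariance

def sample_sldm(T):
    """Draw a feature trajectory from the SLDM (generative direction only)."""
    s, x = 0, np.zeros(D)
    xs, ss = [], []
    for _ in range(T):
        s = rng.choice(S, p=Pi[s])                       # switch dynamics
        x = A[s] @ x + b[s] + rng.multivariate_normal(np.zeros(D), Qc)
        xs.append(x.copy()); ss.append(s)
    return np.array(xs), np.array(ss)

feats, states = sample_sldm(200)
print(feats.shape, np.bincount(states))
```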
We introduce the notions of production and saturation time for peer-to-peer real-time video-streaming networks. Because the video stream is divided into small blocks for transmission, production, a term adopted from economics, is defined as the number of users that have obtained video block m by time t. Saturation time …
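The production quantity as defined in the abstract is easy to compute from per-block arrival times; the sketch below does exactly that. The arrival-time data and function name are hypothetical stand-ins, not measurements or code from the paper.

```python
# Sketch: production(m, t) = number of peers that have obtained block m by time t.
from bisect import bisect_right

# arrival_times[m] = sorted times at which successive peers finished
# downloading block m (hypothetical values).
arrival_times = {
    0: [0.2, 0.5, 0.7, 1.1, 1.4],
    1: [0.9, 1.3, 1.6, 2.0],
}

def production(m, t):
    """Count users holding block m by time t."""
    return bisect_right(arrival_times[m], t)

print(production(0, 1.0))   # -> 3 peers have block 0 by t = 1.0
print(production(1, 1.0))   # -> 1 peer has block 1 by t = 1.0
```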