Yingxue Wang

Hardware implementations of spiking neurons can be extremely useful for a large variety of applications, ranging from high-speed modeling of large-scale neural systems to real-time behaving systems, to bidirectional brain-machine interfaces. The specific circuit solutions used to implement silicon neurons depend on the application requirements. In this …
In spiking neural networks, asynchronous spike events are processed in parallel by neurons. Emulations of such networks are traditionally computed by CPUs or realized using dedicated neuromorphic hardware. In many neuromorphic systems, the Address-Event Representation (AER) is used for spike communication. In this paper we present the acceleration of AER …
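The core idea of AER can be illustrated with a minimal sketch: each spike is transmitted as an (address, timestamp) pair, and per-neuron spike trains are serialized into a single time-ordered event stream. The `Event` type and `encode_spikes` helper below are hypothetical illustrations, not code from the paper.

```python
from collections import namedtuple

# In Address-Event Representation, a spike is just the address of the
# neuron that fired plus the time it fired; routing happens by address.
Event = namedtuple("Event", ["address", "timestamp"])

def encode_spikes(spike_times):
    """Flatten per-neuron spike times into one time-sorted AER stream.

    spike_times: dict mapping neuron address -> list of spike times.
    """
    events = [Event(addr, t)
              for addr, times in spike_times.items()
              for t in times]
    return sorted(events, key=lambda e: e.timestamp)

stream = encode_spikes({0: [1.0, 3.0], 1: [2.0]})
# addresses appear in firing order: 0, 1, 0
```

Because events are merged into one ordered stream, a receiver (or an accelerator processing the stream) only needs to demultiplex by address to reconstruct each neuron's spike train.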
In this paper, we present a conditional restricted Boltzmann machine (CRBM) based speech bandwidth extension (BWE) method. A CRBM is employed to obtain time information and model deep non-linear relationships between the spectral envelope features of low frequency (LF) and high frequency (HF). Two exclusive CRBMs are adopted to model the distribution of …
We describe a formalism for quantifying the performance of spike-based winner-take-all (WTA) VLSI chips. The WTA function non-linearly amplifies the output responses of pixels/neurons depending on the input magnitudes in a decision or selection task. In this work, we show a theoretical description of this winner-take-all computation which takes into …
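The nonlinear amplification underlying a WTA stage can be sketched with a softmax-style competition: responses are exponentiated and normalized so the largest input dominates, and the winner is the unit with the largest amplified response. The `gain` sharpening parameter here is a hypothetical illustration, not a quantity from the paper's formalism.

```python
import math

def winner_take_all(inputs, gain=5.0):
    """Softmax-style WTA: nonlinearly amplify responses, pick the winner.

    `gain` controls how strongly the largest input suppresses the rest
    (an illustrative parameter; higher gain -> harder selection).
    """
    m = max(inputs)  # subtract the max for numerical stability
    exps = [math.exp(gain * (x - m)) for x in inputs]
    total = sum(exps)
    responses = [e / total for e in exps]
    winner = responses.index(max(responses))
    return winner, responses

winner, responses = winner_take_all([0.2, 0.9, 0.5])
# winner == 1: the index of the largest input
```

In the limit of large `gain`, the amplified response of the winner approaches 1 and all others approach 0, which is the hard-selection behavior a WTA circuit implements.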
With the advent of new experimental evidence showing that dendrites play an active role in processing a neuron's inputs, we revisit the question of a suitable abstraction for the computing function of a neuron in processing spatiotemporal input patterns. Although the integrative role of a neuron in relation to the spatial clustering of synaptic inputs can …
This paper proposes a new speech bandwidth expansion method, which uses Deep Neural Networks (DNNs) to build high-order eigenspaces between the low frequency components and the high frequency components of the speech signal. A four-layer DNN is trained layer-by-layer from a cascade of Neural Networks (NNs) and two Gaussian-Bernoulli Restricted Boltzmann …
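At inference time, a DNN-based bandwidth-expansion mapping reduces to a stack of fully connected layers that transform a low-frequency (LF) feature vector into a predicted high-frequency (HF) feature vector. The sketch below shows that forward pass only; the layer sizes and weights are toy placeholders, not the paper's trained four-layer network.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(x, W, b, activation):
    """One fully connected layer: y = activation(W @ x + b)."""
    return [activation(sum(w * xi for w, xi in zip(row, x)) + bi)
            for row, bi in zip(W, b)]

def forward(lf_features, layers):
    """Map LF features to HF features through a stack of layers."""
    h = lf_features
    for W, b, act in layers:
        h = layer(h, W, b, act)
    return h

# Toy 2 -> 3 -> 2 network with illustrative (untrained) weights:
# a sigmoid hidden layer followed by a linear output layer.
toy_layers = [
    ([[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]], [0.0, 0.1, -0.1], sigmoid),
    ([[0.2, 0.7, -0.5], [0.6, -0.1, 0.3]], [0.0, 0.0], lambda v: v),
]
hf = forward([0.3, 0.9], toy_layers)  # predicted HF feature vector
```

In the layer-wise scheme the abstract describes, each layer's weights would be pretrained (e.g. with RBMs) before fine-tuning the whole stack; the forward computation itself is unchanged.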
Capturing the functionality of active dendritic processing into abstract mathematical models will help us to understand the role of complex biophysical neurons in neuronal computation and to build future useful neuromorphic analog Very Large Scale Integrated (aVLSI) neuronal devices. Previous work based on an aVLSI multi-compartmental neuron model …