Miquel L. Alomar

Hardware implementation of artificial neural networks (ANNs) allows exploiting the inherent parallelism of these systems. Nevertheless, such implementations require a large amount of resources in terms of area and power dissipation. Recently, Reservoir Computing (RC) has arisen as a strategic technique to design recurrent neural networks (RNNs) with simple learning…
This paper presents a new methodology for the hardware implementation of neural networks (NNs) based on probabilistic laws. The proposed encoding scheme circumvents the limitations of classical stochastic computing (based on unipolar or bipolar encoding) by extending the representation range to any real number, using the ratio of two bipolar-encoded pulsed…
Minimal hardware implementations able to cope with the processing of large amounts of data in reasonable times are highly desired in our information-driven society. In this work we review the application of stochastic computing to probabilistic pattern-recognition analysis of huge database sets. The proposed technique consists of the hardware…
Minimal hardware implementations of machine-learning techniques have been attracting increasing interest over the last decades. In particular, field-programmable gate array (FPGA) implementations of neural networks (NNs) are among the most appealing ones, given the match between system requirements and FPGA properties, namely, parallelism and adaptation.
The hardware implementation of neural network models makes it possible to efficiently exploit their inherent parallelism. Here, we focus on the Liquid State Machine (LSM) methodology to build recurrent Spiking Neural Networks (SNNs), which are particularly suited to processing time-dependent signals. We propose a low-cost hardware implementation of LSM networks based on the use of…
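The spiking neurons that make up an LSM reservoir are often modeled as leaky integrate-and-fire units. As a point of reference only (the abstract does not specify the neuron model used), a minimal discrete-time simulation of one such neuron looks like this; the parameter values are illustrative assumptions:

```python
def lif_neuron(input_current, v_thresh=1.0, tau=20.0, dt=1.0):
    # Leaky integrate-and-fire neuron: the membrane potential v leaks
    # toward rest with time constant tau while integrating the input;
    # when v crosses v_thresh, a spike is emitted and v is reset.
    v = 0.0
    spikes = []
    for i in input_current:
        v += dt * (-v / tau + i)
        if v >= v_thresh:
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes

# A constant drive above threshold yields a regular spike train.
train = lif_neuron([0.12] * 100)
print(sum(train))  # number of spikes in 100 time steps
```

In an LSM, many such neurons are connected with fixed random recurrent weights, and only a simple readout layer on top of the reservoir's spiking activity is trained, which is what makes the learning "simple" compared with training a full recurrent network.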
Efficient hardware implementations of neural networks are of high interest. Stochastic computing is an alternative to conventional digital logic that makes it possible to exploit the intrinsic parallelism of neural networks using few hardware resources. We present a new stochastic methodology that extends the capabilities of classical stochastic computing. In…
The hardware implementation of massive Recurrent Neural Networks to efficiently perform time-dependent signal processing is an active field of research. In this work we review the basic principles of stochastic logic and its application to the hardware implementation of Neural Networks. In particular, we focus on the implementation of the recently…
In this work we study which processes are related to chaotic and synchronized neural states, based on in-silico implementations of Stochastic Spiking Neural Networks (SSNNs). Chaotic neural ensembles are excellent transmission and convolution systems. At the same time, synchronized cells (which can be understood as ordered states of…
Virtual screening (VS) has become a key computational tool in early drug design, and screening performance is of high relevance due to the large volume of data that must be processed to identify molecules with the sought activity-related pattern. At the same time, hardware implementations of spiking neural networks (SNNs) arise as an emerging computing…