A Generic Building Block For Hopfield Neural Networks With On-chip Learning

  • Michael K. Gschwind, Valentina Salapura, Oliver Maischberger
  • 1996 IEEE International Symposium on Circuits and Systems (ISCAS 96)
We present an extendable digital architecture for the implementation of a Hopfield neural network using field-programmable gate arrays (FPGAs). Due to its bit-serial implementation, the actual digital circuitry is simple and highly regular, allowing efficient use of FPGA area. We exploit the reprogrammability of these devices to support on-chip learning.
This paper presents a discrete device for neural networks realized on field-programmable gate arrays (FPGAs). A basic element of the implemented neural network is a new type of neuron, called a Boolean neuron.
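The abstract above describes a Hopfield network with on-chip Hebbian learning. As a rough illustration of the dynamics such a block computes (here in plain software, not bit-serial hardware; the 6-neuron size and test pattern are illustrative choices, not from the paper):

```python
# Minimal sketch of a Hopfield network with Hebbian (outer-product) learning,
# illustrating the recall dynamics that a bit-serial FPGA block would compute.

def train(patterns):
    """Hebbian weights: w[i][j] = sum over patterns of p[i]*p[j], zero diagonal."""
    n = len(patterns[0])
    w = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, state, steps=10):
    """Synchronous sign-threshold updates until a fixed point is reached."""
    for _ in range(steps):
        new = [1 if sum(wij * sj for wij, sj in zip(row, state)) >= 0 else -1
               for row in w]
        if new == state:
            break
        state = new
    return state

stored = [1, -1, 1, -1, 1, -1]
w = train([stored])
noisy = [-1] + stored[1:]          # flip the first bit
print(recall(w, noisy))            # → [1, -1, 1, -1, 1, -1]
```

Each neuron's weighted sum is just multiply-accumulate followed by a threshold, which is why a bit-serial, highly regular hardware mapping is attractive.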
Design of a Stochastic Re-Configurable Artificial Neural Networks Using FPGA
The architecture combines stochastic computation techniques with a novel lookup-table-based design that fully exploits the look-up-table structure of many FPGAs; the basic operations of a simple ANN are mapped into a modular design, providing easy scalability of the system to different application constraints and requirements.
FPGA Implementation of Boolean Neural Networks using UML
This paper suggests a new approach for modeling Boolean neural networks on field-programmable gate arrays (FPGAs) using UML. The presented Boolean neural networks (BNN) allow a decrease of the …
The Structure of Boolean Neuron for the Optimal Mapping to FPGAs
In this paper, we present a new type of neuron, called the Boolean neuron, that may be mapped directly to configurable logic blocks (CLBs) of field-programmable gate arrays (FPGAs). The structure and …
FPNA: Concepts and Properties
This two-chapter study gathers the different results that have been published about the FPNA concept, as well as some unpublished ones, and proposes a general two-level definition of FPNAs, where the computational power of FPNA-based neural networks is characterized through the concept of underparameterized convolutions.
FPNA: Interaction Between FPGA and Neural Computation
  • B. Girau
  • Computer Science, Medicine
  • Int. J. Neural Syst.
  • 2000
FPGAs have led to the definition of the FPNA computation paradigm; this work shows how FPNAs contribute to current and future FPGA-based neural implementations by solving the general problems raised by the implementation of complex neural networks onto FPGAs.
A Survey of Neuromorphic Computing and Neural Networks in Hardware
An exhaustive review of the research conducted in neuromorphic computing since the inception of the term is provided to motivate further work by illuminating gaps in the field where new research is needed.
A Survey Comparing Specialized Hardware And Evolution In TPUs For Neural Networks
This survey paper is based on the evolution of TPUs from first-generation TPUs to edge TPUs and their architectures. This paper compares CPUs, GPUs, FPGAs and TPUs, their hardware architectures, …
From connectionist parallelism to a practice of bio-inspired digital distributed computing (Du parallélisme connexionniste à une pratique de calcul distribué numérique bio-inspiré)
The defense of connectionist parallelism proceeds through a study focused on the local mechanisms for managing the flow of information underlying these models; … does not seem reasonable.


GANGLION-a fast field-programmable gate array implementation of a connectionist classifier
GANGLION is a totally digital connectionist classifier; the authors take advantage of the reprogrammability of the devices to automatically generate new custom hardware for each application of the classifier.
A stochastic neural architecture that exploits dynamically reconfigurable FPGAs
An expandable digital architecture that provides an efficient real-time implementation platform for large neural networks, making heavy use of bit-serial stochastic computing techniques to carry out the large number of required parallel synaptic calculations.
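The bit-serial stochastic computing mentioned above encodes a value p in [0, 1] as a random bit stream with P(bit = 1) = p, so that multiplying two independent streams reduces to a single AND gate per bit. A software sketch of the idea (stream length and seed are illustrative choices, not from the paper):

```python
import random

def encode(p, length, rng):
    """Encode p in [0, 1] as a stochastic bit stream with P(bit = 1) = p."""
    return [1 if rng.random() < p else 0 for _ in range(length)]

def decode(bits):
    """Recover the encoded value as the fraction of 1-bits."""
    return sum(bits) / len(bits)

rng = random.Random(42)
a = encode(0.5, 10000, rng)
b = encode(0.8, 10000, rng)
product = [x & y for x, y in zip(a, b)]   # AND gate acts as a multiplier
print(round(decode(product), 2))          # close to 0.5 * 0.8 = 0.4
```

The appeal for FPGAs is that each synaptic multiply needs only one gate, at the cost of precision that grows slowly with stream length.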
An analog CMOS chip set for neural networks with arbitrary topologies
An analog CMOS chip set for implementations of artificial neural networks (ANNs) has been fabricated and tested; it contains an array of 4 neurons with well-defined hyperbolic tangent activation functions, implemented using parasitic lateral bipolar transistors.
A feedforward artificial neural network based on quantum effect vector-matrix multipliers
A small three-layer feedforward prototype network with five binary neurons and six tri-state synapses was built and used to perform all of the fundamental logic functions: XOR, AND, OR, and NOT.
Fast neural networks without multipliers
Some test cases are presented, concerning MLPs with hidden layers of different sizes, on pattern recognition problems, to demonstrate the validity and the generalization capability of the method and give some insight into the behavior of the learning algorithm.
Recursive neural networks for associative memory
  • Y. Kamp, M. Hasler
  • Computer Science
  • Wiley-interscience series in systems and optimization
  • 1990
The Deterministic Approach to Network Design: Principles, Problems and Approaches, and the Statistical Approach.
A Fast FPGA Implementation of a General Purpose Neuron
This work presents an expandable digital architecture which allows fast and space-efficient computation of the sum of weighted inputs, providing an efficient implementation base for large neural networks.
A field-programmable gate array implementation of a self-adapting and scalable connectionist network
  • Master's thesis
  • 1994
Xilinx. The Programmable Logic Data Book
  • 1993