Despite processing elements that are thousands of times faster than the neurons in the brain, modern computers still cannot match quite a few processing capabilities of the brain, many of which we even consider trivial (such as recognizing faces or voices, or following a conversation). A common principle for those capabilities lies in the use of…
A neural network specification language is presented that can be used for the high-level description of artificial and biology-oriented neural networks. The main objective of the language design is the support of the inherent parallelism of neural networks so that efficient simulation code for parallel computers and neurocomputer architectures can be…
This paper presents a new concept for a parallel neurocomputer architecture which is based on a configurable neuroprocessor design. The neuroprocessor adapts its internal parallelism dynamically to the required data precision for achieving an optimal utilization of the available hardware resources. This is realized by encoding a variable number of p…
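The idea of packing a variable number of low-precision values into one machine word can be illustrated with a SWAR (SIMD-within-a-register) sketch. The fixed 4×8-bit layout below is an assumption for illustration only; the paper's processor varies the lane count dynamically with the required precision:

```python
H = 0x80808080  # high bit of each 8-bit lane in a 32-bit word

def pack(vals):
    """Pack four 8-bit values into one 32-bit word, lane 0 in the low byte."""
    w = 0
    for i, v in enumerate(vals):
        w |= (v & 0xFF) << (8 * i)
    return w

def unpack(w):
    """Recover the four 8-bit lanes from a packed 32-bit word."""
    return [(w >> (8 * i)) & 0xFF for i in range(4)]

def swar_add8(a, b):
    """Add four 8-bit lanes in parallel (mod 256 per lane).
    Clearing the high bit of each lane before adding stops carries from
    spilling into the neighbouring lane; the XOR restores the high bits."""
    return (((a & ~H) + (b & ~H)) ^ ((a ^ b) & H)) & 0xFFFFFFFF
```

With narrower lanes, the same word width yields more parallel operations per cycle, which is the utilization argument the abstract makes.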
In this article the neural network specification language EpsiloNN is presented. From an abstract specification that is independent of the target computer architecture, a simulation source program for a workstation or a parallel computer can be generated. Neurocomputers requiring fixed-point data types and arithmetic are supported too. The language design is…
A new methodology for the generation of efficient parallel programs from high-level neural network specifications is presented. All possible mappings of the neural network onto the parallel processors are generated and evaluated by using a description of the parallel target architecture. Thus the optimal mapping can be determined at compile-time and…
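The compile-time search over candidate mappings can be sketched with a toy example. The block-distribution candidates and the cost model below (load imbalance plus boundary-crossing traffic for a chain-connected net) are illustrative assumptions, not the paper's actual architecture description:

```python
def block_mapping(n_neurons, n_procs, block):
    """Assign neurons to processors round-robin in blocks of `block`."""
    return [(i // block) % n_procs for i in range(n_neurons)]

def cost(mapping, t_comp=1.0, t_comm=0.1):
    """Toy cost model: a synchronous update is dominated by the most
    heavily loaded processor, plus traffic for every connection that
    crosses a processor boundary (assuming a chain-connected net)."""
    loads = {}
    for p in mapping:
        loads[p] = loads.get(p, 0) + 1
    crossings = sum(1 for a, b in zip(mapping, mapping[1:]) if a != b)
    return max(loads.values()) * t_comp + crossings * t_comm

def best_mapping(n_neurons, n_procs, candidate_blocks):
    """Evaluate every candidate mapping and keep the cheapest, mirroring
    the exhaustive compile-time search described in the abstract."""
    return min((block_mapping(n_neurons, n_procs, b) for b in candidate_blocks),
               key=cost)
```

Because the whole search runs before any simulation starts, the generated program pays no runtime cost for the mapping decision.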
Apriori is a prominent data mining algorithm for frequent itemset mining (FIM) that generally exhibits poor performance on general-purpose systems. This paper presents a novel hardware accelerator for Apriori, improving upon previous hardware acceleration efforts.
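For reference, the Apriori algorithm the accelerator targets can be sketched as a minimal level-wise software implementation (the paper's hardware design is not reproduced here):

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Frequent itemset mining: grow candidates level by level and prune
    any candidate with an infrequent subset (the Apriori property)."""
    transactions = [frozenset(t) for t in transactions]

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t)

    items = {i for t in transactions for i in t}
    current = {frozenset([i]) for i in items
               if support(frozenset([i])) >= min_support}
    frequent = set(current)
    k = 2
    while current:
        # join step: union pairs of frequent (k-1)-itemsets into k-itemsets
        candidates = {a | b for a in current for b in current if len(a | b) == k}
        # prune step: every (k-1)-subset must itself be frequent
        candidates = {c for c in candidates
                      if all(frozenset(s) in current
                             for s in combinations(c, k - 1))}
        current = {c for c in candidates if support(c) >= min_support}
        frequent |= current
        k += 1
    return frequent
```

The repeated support counting over all transactions is the bottleneck on general-purpose systems, which is what makes the algorithm attractive for hardware acceleration.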
The MMX and SSE extensions of current Intel Pentium processors offer a 4-way or 8-way SIMD parallelism to accelerate many vector or matrix applications. In this paper the performance of MMX and SSE for the implementation of neural networks is evaluated. It is shown that a speedup in the range from 1.3 to 9.8 for single neural operations and a total speedup…
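The kind of operation that benefits from SIMD here is the multiply-accumulate loop of a neuron layer update. As an analogy (NumPy's vectorized matrix product stands in for the MMX/SSE intrinsics; this is not the paper's implementation), compare a scalar reference with its data-parallel form:

```python
import numpy as np

def layer_scalar(w, x):
    """Scalar reference: one multiply-accumulate at a time, as a plain
    compiler would emit without SIMD instructions."""
    n_out, n_in = w.shape
    y = [0.0] * n_out
    for i in range(n_out):
        for j in range(n_in):
            y[i] += w[i][j] * x[j]
    return y

def layer_simd_style(w, x):
    """Vectorized form: the multiply-accumulates of each row proceed in
    data-parallel fashion, analogous to 4-way/8-way subword SIMD."""
    return w @ x
```

Because every output neuron repeats the same multiply-accumulate over its inputs, the workload maps naturally onto packed SIMD lanes, which is where the reported per-operation speedups come from.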