A parallel ASIC VLSI neurocomputer for a large number of neurons and billion connections per second speed

@inproceedings{Shimokawa1991APA,
  title={A parallel ASIC VLSI neurocomputer for a large number of neurons and billion connections per second speed},
  author={Y. Shimokawa and Y. Fuwa and N. Aramaki},
  booktitle={[Proceedings] 1991 IEEE International Joint Conference on Neural Networks},
  year={1991},
  pages={2162--2167 vol.3}
}
A programmable high-performance and high-speed neurocomputer for a large neural network is developed using an application-specific IC (ASIC) neurocomputing chip made with CMOS VLSI technology. The neurocomputer consists of one master node and multiple slave nodes connected by two data paths, a broadcast bus and a ring bus. The neurocomputer was built on one printed circuit board holding 50 VLSI chips and offers 1-2 billion connections/s. This computer uses SIMD (single-instruction…
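The master/slave organization described in the abstract can be illustrated with a minimal sketch. This is not the paper's actual instruction set or bus protocol, only an assumed SIMD-style update in the spirit of the broadcast-bus design: each slave owns a slice of the neurons and their incoming weight rows, the master broadcasts the full activation vector, and every slave executes the same multiply-accumulate step on its local data.

```python
def slave_step(weight_rows, activations):
    """One slave: multiply-accumulate its weight rows with the broadcast input."""
    return [sum(w * a for w, a in zip(row, activations)) for row in weight_rows]

def master_update(weights, activations, n_slaves):
    """Master: broadcast activations, gather each slave's partial output slice."""
    chunk = (len(weights) + n_slaves - 1) // n_slaves
    outputs = []
    for s in range(n_slaves):
        rows = weights[s * chunk:(s + 1) * chunk]      # this slave's neuron slice
        outputs.extend(slave_step(rows, activations))  # same instruction, local data
    return outputs

# 4 fully interconnected neurons split across 2 slaves (values are arbitrary):
W = [[0.0, 1.0, 0.0, 0.0],
     [1.0, 0.0, 0.0, 0.0],
     [0.0, 0.0, 0.0, 1.0],
     [0.0, 0.0, 1.0, 0.0]]
x = [1.0, 2.0, 3.0, 4.0]
print(master_update(W, x, n_slaves=2))  # [2.0, 1.0, 4.0, 3.0]
```

Because every slave runs the identical instruction stream on its own weight slice, adding slaves partitions the work without changing the result, which is the property the single-instruction design exploits.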
Recent VLSI neural networks in Japan
  • Y. Hirai
  • Computer Science
  • J. VLSI Signal Process.
  • 1993
TLDR
Recent research activities on the development of electronic neural networks in Japan are reviewed, and a fully interconnected PDM digital neural network system has been developed.
High Performance Multilayer Perceptron on a Custom Computing
Multilayer perceptrons (MLPs) are one of the most popular neural network models for solving pattern classification and image classification problems. Because of their ability to learn complex decision…
Performance of a bus-based parallel computer with integer-representation processors applied to artificial neural network and parallel AI domains
  • M. Yasunaga, Akio Yamada, T. Okahashi
  • Computer Science
  • 1998 Second International Conference. Knowledge-Based Intelligent Electronic Systems. Proceedings KES'98 (Cat. No.98EX111)
  • 1998
TLDR
This paper presents effective techniques for implementing backpropagation (BP) and memory-based reasoning (MBR) on MY-NEUPOWER; speed and scalability with these techniques are measured, and high performance with cost-effectiveness is shown in comparison with other computers.
Cost-efficient FPGA implementation of a biologically plausible dopamine neural network and its application
TLDR
A modified dopamine neuron model based on piecewise linearisation is presented for efficient realisation; it reduces the hardware overhead of the original dopamine model and improves the feasibility of the digital design, which is significant for large-scale network emulation of the dopamine system.
Field Programmable Gate Array (FPGA): A Tool For Improving Parallel Computations
TLDR
This research explores the underlying parallel architecture of the Field Programmable Gate Array (FPGA) and its design methodology, and discusses design tools for implementing FPGAs, e.g. System Generator from Xilinx and the Impulse C programming model.
Computer Vision Algorithms on Reconfigurable Logic Arrays
TLDR
The custom computing approach meets the computation and communication needs of computer vision algorithms by customizing the hardware architecture at the instruction level for each application, so that the optimal grain size needed for the problem at hand and the instruction granularity can be matched.
Optimization of multimedia applications on embedded multicore processors
TLDR
This research designs new parallel algorithms and mapping methodologies to exploit the parallelism naturally present in multimedia applications, specifically the H.264/AVC video decoder, and mainly targets symmetric shared-memory multiprocessors (SMPs) for embedded devices such as ARM Cortex-A9 multicore chips.
Atomic Switch Networks for Neuroarchitectonics: Past, Present, Future
TLDR
This work reports the fabrication of an atomic switch network (ASN) exhibiting critical dynamics, harnesses that criticality to perform benchmark signal classification and Boolean logic tasks, and observes evidence of biomimetic behavior that enables the ASN to attain cognitive capability within the context of artificial neural networks.
Application of genetic algorithms and maxplus system formalism in optimization of discrete system processes
  • J. Raszka, L. Jamroz
  • Computer Science
  • 2013 6th International Conference on Human System Interactions (HSI)
  • 2013
TLDR
This paper presents results of applying several methods to the optimization of cyclic processes in discrete systems and proposes a new methodology to handle a complex variety of variables.

References

An artificial neural network accelerator using general purpose 24 bit floating point digital signal processors
TLDR
An artificial neural network (ANN) accelerator named Neuro Turbo was implemented using four recently developed general-purpose 24-bit floating-point digital signal processors (MB86220), arranged as four ring-coupled DSPs with four dual-port memories.
A VLSI architecture for high-performance, low-cost, on-chip learning
  • D. Hammerstrom
  • Computer Science
  • 1990 IJCNN International Joint Conference on Neural Networks
  • 1990
TLDR
Using state-of-the-art technology and innovative architectural techniques, the author's architecture approaches the speed and cost of analog systems while retaining much of the flexibility of large, general-purpose parallel machines.
Neural network simulation at Warp speed: how we got 17 million connections per second
TLDR
Results indicate that linear systolic array machines can be efficient neural network simulators: the system is about eight times faster at simulating the NETtalk text-to-speech network than the fastest back-propagation simulator previously reported in the literature.
A wafer scale integration neural network utilizing completely digital circuits
A wafer scale integration (WSI) neural network utilizing completely digital circuits is reported. Three new technologies are used: (1) a time-sharing digital bus; (2) efficient utilization of weight…
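Several of the systems above quote throughput in "connections per second" (CPS), from 17 million for the Warp simulator to 1-2 billion for the ASIC board. A rough sketch of how that metric is commonly computed, under the assumption that one connection evaluation is one weight multiply-accumulate (the numbers below are hypothetical, not taken from any of the papers):

```python
def connections_per_second(n_inputs, n_outputs, passes_per_second):
    """CPS for a fully connected layer: one multiply-accumulate per weight,
    n_inputs * n_outputs weights per forward pass."""
    return n_inputs * n_outputs * passes_per_second

# e.g. a hypothetical 256 -> 256 layer evaluated 1000 times per second:
print(connections_per_second(256, 256, 1000))  # 65536000
```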