An electrically trainable artificial neural network (ETANN) with 10240 'floating gate' synapses

@article{Holler1989AnET,
  title={An electrically trainable artificial neural network (ETANN) with 10240 'floating gate' synapses},
  author={Mark A. Holler and Simon M. Tam and Hernan A. Castro and Ronald G. Benson},
  journal={International 1989 Joint Conference on Neural Networks},
  pages={191-196 vol.2},
}
  • Published 3 January 1990
The use of floating-gate nonvolatile memory technology for analog storage of connection strengths, or weights, has previously been proposed and demonstrated. Each synapse in the network multiplies a signed analog voltage by a stored weight and generates a differential current proportional to the product. Differential currents are summed on a pair of bit lines and passed through a sigmoid function, appearing at the neuron output as an analog voltage.
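The signal path described above can be sketched numerically. The following is a minimal idealized model, assuming perfectly linear synapses and a tanh sigmoid; the function name, gain parameter, and example weights are illustrative assumptions, not measured chip characteristics.

```python
import numpy as np

def etann_layer(x, w, gain=1.0):
    """Idealized sketch of one ETANN-style layer (assumptions:
    perfectly linear synapses, tanh as the sigmoid)."""
    # Each synapse produces a differential current proportional to
    # the product of its input voltage and its stored weight; the
    # positive and negative parts are summed on a pair of bit lines.
    i_plus = np.clip(w, 0.0, None) @ x    # current on the "+" bit line
    i_minus = -np.clip(w, None, 0.0) @ x  # current on the "-" bit line
    # The neuron turns the net differential current into an analog
    # output voltage via a sigmoid transfer function.
    return np.tanh(gain * (i_plus - i_minus))

x = np.array([0.5, -0.2, 0.8])            # signed input voltages
w = np.array([[1.0, -0.5, 0.3],           # stored analog weights
              [-0.7, 0.2, 0.9]])
y = etann_layer(x, w)                     # analog neuron outputs
```

Because the two bit-line currents subtract at the neuron, the net effect is an ordinary weighted sum: `i_plus - i_minus` equals `w @ x`.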


A programmable analog CMOS synapse for neural networks

A programmable analog synapse for use in both feedforward and feedback neural networks is presented; it consists of two complementary floating-gate MOSFETs that are programmable in both directions by Fowler-Nordheim tunneling.

Implementation and performance of an analog nonvolatile neural network

An integrated-circuit implementation of a fully parallel analog artificial neural network is presented; inevitable component-to-component variations due to the use of minimum-dimension elements are found not to be significant for operation in an adaptive environment.

LSI implementation of pulse-output neural network with programmable synapse

The measured I/O characteristics of the neurons and the synapses were as expected, and the network's operation was demonstrated by assigning synaptic weights so that it functions as an A/D converter.

Digital-Analog Hybrid Synapse Chips for Electronic Neural Networks

A 64-neuron hardware system incorporating four synapse chips has been fabricated to investigate the performance of feedback networks in optimization problem solving; the network's ability to obtain optimum or near-optimum solutions in real time has been demonstrated.

70 input, 20 nanosecond pattern classifier

  • P. Masa, K. Hoen, H. Wallinga
  • Computer Science
  • Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94)
  • 1994
A CMOS neural network integrated circuit designed for very high-speed vector classification is discussed, and the feasibility of a single-chip neural-network photon trigger for nuclear research is shown.

An analog VLSI neural network with on-chip perturbation learning

An analog very large scale integration (VLSI) neural network intended for cost-sensitive, battery-powered, high-volume applications is described, with on-chip controlled perturbation-based gradient descent allowing fast learning with very little external support.
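Perturbation-based learning of the kind summarized above can be sketched in a few lines: perturb one weight, observe the change in error, and step against the estimated gradient. This is a generic finite-difference sketch under assumed step sizes; the chip's actual update rule and circuitry are not specified here.

```python
import numpy as np

def perturbation_step(w, loss, delta=1e-3, lr=0.1):
    """One pass of weight-perturbation learning (a generic
    finite-difference sketch; delta and lr are assumed values)."""
    w = w.copy()
    for i in range(w.size):
        base = loss(w)                        # error before perturbing
        w.flat[i] += delta                    # perturb a single weight
        grad_est = (loss(w) - base) / delta   # finite-difference slope
        w.flat[i] -= delta                    # undo the perturbation
        w.flat[i] -= lr * grad_est            # descend the estimated gradient
    return w

# Toy problem: pull w toward a target vector t.
t = np.array([0.3, -0.7])
loss = lambda w: float(np.sum((w - t) ** 2))

w = np.zeros_like(t)
for _ in range(200):
    w = perturbation_step(w, loss)
```

Because the loss is measured through the network itself, no explicit backpropagation path is needed, which is what makes such schemes attractive for analog hardware with little external support.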

Single Transistor Learning Synapses

The design, fabrication, characterization, and modeling of an array of single transistor synapses that compute, learn, and provide non-volatile memory retention are described.

Artificial neural networks using MOS analog multipliers

A neural network implementation that uses MOSFET analog multipliers to construct weighted sums is described. The scheme permits asynchronous analog operation of Hopfield-style networks with fully …
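One standard way square-law MOS devices are made to multiply is the quarter-square identity; whether this particular chip uses that circuit is not stated in the summary above, so the following is only a sketch of the principle.

```python
def quarter_square_multiply(a, b):
    # A square-law device produces a current proportional to the
    # square of its input; combining (a+b)^2 and (a-b)^2 cancels
    # the squared terms and leaves the four-quadrant product a*b.
    return ((a + b) ** 2 - (a - b) ** 2) / 4.0

print(quarter_square_multiply(3.0, -2.0))  # → -6.0
```

The result is four-quadrant because the identity holds for any signs of the two inputs.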

A Reconfigurable Analog VLSI Neural Network Chip

1024 distributed-neuron synapses have been integrated in an active area of 6.1 mm × 3.3 mm using a 0.9 µm, double-metal, single-poly, n-well CMOS technology to provide programmability of interconnections.

Comparison of floating gate neural network memory cells in standard VLSI CMOS technology

This work provides a layout for an analog neural-network memory based on previously unexplored criteria and results, and finds that the best designs need not utilize field-enhancement techniques and can provide a wide range of possible programming voltages.



Computing with neural circuits: a model.

A new conceptual framework and a minimization principle together provide an understanding of computation in model neural circuits that represent an approximation to biological neurons in which a simplified set of important computational properties is retained.
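The minimization principle referred to above can be made concrete with a small sketch: with a symmetric, zero-diagonal weight matrix, each asynchronous unit update can only lower the network's energy. The matrix, thresholds, and update rule below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(w, s):
    # Hopfield-style energy E = -1/2 * s^T W s (thresholds taken as zero)
    return -0.5 * s @ w @ s

def hopfield_step(w, s):
    # Asynchronously update one randomly chosen unit; with symmetric W
    # and zero diagonal this never increases the energy.
    i = rng.integers(len(s))
    s = s.copy()
    s[i] = 1 if w[i] @ s >= 0 else -1
    return s

n = 8
a = rng.normal(size=(n, n))
w = (a + a.T) / 2          # symmetric weights (assumed by the model)
np.fill_diagonal(w, 0.0)   # no self-connections

s = rng.choice([-1, 1], size=n)
e_start = energy(w, s)
for _ in range(100):
    s = hopfield_step(w, s)
e_end = energy(w, s)       # e_end <= e_start: the network descends
```

The guarantee that each update is non-increasing in energy is what lets such circuits be read as minimizers, which is the link to the optimization-solving hardware cited above.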

Reliability performance of ETOX based flash memories

The reliability performance of a 64 K flash memory based on a single-transistor, floating-gate memory cell is considered. The reliability of these memories before program/erase cycling …

Memory Behavior in a Floating-Gate Avalanche-Injection MOS (famos) Structure

A novel charge-storage structure is described. The floating-gate avalanche-injection MOS (FAMOS) structure is shown to exhibit memory behavior in the form of long-term charge storage on the floating gate.

A four-quadrant NMOS analog multiplier

A four-quadrant NMOS analog multiplier, which achieves linearity better than 0.3 percent at 75 percent of full-scale swing, a bandwidth of DC to 1.5 MHz, and output noise 77 dB below full scale, is described.

Artificial Neural Network Implementation with Floating Gate MOS Devices

  • Hardware Implementation of Neuron Nets and Synapses, a workshop sponsored by NSF and ONR

Brain Emulation Circuit with Reduced Confusion

Artificial Neural Network Implementation with Floating Gate MOS

  • 1988

Advanced Research in m

  • 1987

A Neuromorphic VLSI Learning System

  • Proc. of the
  • 1987

Neural Network Synaptic Connections Using Floating Gate Non-Volatile Elements

  • Neural Networks for Computing, AIP Conference Proceedings
  • 1988