Relaxation Networks for Large Supervised Learning Problems

@inproceedings{Alspector1990RelaxationNF,
  title={Relaxation Networks for Large Supervised Learning Problems},
  author={Joshua Alspector and Robert B. Allen and Anthony Jayakumar and Torsten Zeppenfeld and Ron Meir},
  booktitle={NIPS},
  year={1990}
}
Feedback connections are required so that the teacher signal on the output neurons can modify weights during supervised learning. Relaxation methods are needed for learning static patterns with full-time feedback connections. Feedback network learning techniques have not achieved wide popularity because of the still greater computational efficiency of back-propagation. We show by simulation that relaxation networks of the kind we are implementing in VLSI are capable of learning large problems…
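
As a sketch of the technique the abstract refers to: relaxation learning in a network with full-time feedback connections is a two-phase, Boltzmann-style procedure. The network settles once with the teacher signal clamped on the output neurons and once free-running, and each weight moves toward the difference of the two co-activation statistics. Below is a minimal NumPy illustration of the deterministic (mean-field) variant of this idea; it is not the paper's VLSI implementation, and all names and parameters (relax, learning_step, lr, n_steps) are illustrative assumptions.

    import numpy as np

    def relax(W, b, s, clamp_idx, clamp_val, n_steps=50):
        # Settle a symmetric feedback network to a mean-field fixed point,
        # holding clamped units (inputs, plus the teacher signal in the
        # clamped phase) at their target values throughout relaxation.
        s = s.copy()
        for _ in range(n_steps):
            s[clamp_idx] = clamp_val
            s = np.tanh(W @ s + b)
        s[clamp_idx] = clamp_val
        return s

    def learning_step(W, b, x, t, in_idx, out_idx, lr=0.05):
        # Two-phase mean-field learning update. Clamped phase: inputs and
        # teacher outputs both fixed. Free phase: only inputs fixed. The
        # update dW_ij = lr * (s_i s_j |clamped - s_i s_j |free) is the
        # deterministic analogue of Boltzmann machine learning.
        n = W.shape[0]
        s_plus = relax(W, b, np.zeros(n),
                       np.concatenate([in_idx, out_idx]),
                       np.concatenate([x, t]))
        s_minus = relax(W, b, np.zeros(n), in_idx, x)
        dW = lr * (np.outer(s_plus, s_plus) - np.outer(s_minus, s_minus))
        np.fill_diagonal(dW, 0.0)  # no self-connections; dW stays symmetric
        return W + dW, b + lr * (s_plus - s_minus)

With tanh units, targets t in {-1, +1} play the role of the teacher signal on the output neurons; repeating learning_step over a training set drives the free-phase outputs toward the clamped ones.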

Citations

Publications citing this paper (14 total; a selection is listed below).

VLSI Implementations of Learning and Memory Systems: A Review

  • NIPS, 1990

An Adaptable Continuous Restricted Boltzmann Machine in VLSI for Fusing the Sensory Data of an Electronic Nose

  • IEEE Transactions on Neural Networks and Learning Systems, 2017

The Diffusion Network in Analog VLSI Exploiting Noise-Induced Stochastic Dynamics to Regenerate Various Continuous Paths

  • IEEE Transactions on Circuits and Systems I: Regular Papers, 2015

A Study of the Mean Field Approach to Knapsack Problems

  • Neural Networks, 1997

On-chip learning in neurocomputers

  • Proceedings of the 1996 Canadian Conference on Electrical and Computer Engineering, 1996

The impact of VLSI fabrication on neural learning

  • Proceedings of ISCAS '95, International Symposium on Circuits and Systems, 1995