Interplay: Dispersed Activation in Neural Networks

@article{Churchill2012InterplayDA,
  title={Interplay: Dispersed Activation in Neural Networks},
  author={Richard L. Churchill},
  journal={ArXiv},
  year={2012},
  volume={abs/1210.6082}
}
This paper presents multi-point stimulation of a Hebbian neural network and investigates the interplay between the stimulus waves as they travel through the neurons of the network. Equilibrium of the resulting memory, i.e., recall of the specific memory data, is reached at a faster rate than with single-point stimulation. The interplay of the intersecting stimuli appears to parallel the clarification process of recall in biological systems.
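
The recall mechanism sketched in the abstract can be illustrated with a short program. The snippet below is a minimal, hypothetical rendering, assuming a standard outer-product (Hebbian, Hopfield-style) weight matrix and a simple spreading-activation recall in which activity fans out from one or several stimulated neurons until the state stops changing; the network size, memory vectors, and the `train_hebbian`/`recall` helpers are illustrative assumptions, not the paper's exact B-matrix procedure.

```python
import numpy as np

# Illustrative sketch (not the paper's exact algorithm): a Hebbian
# outer-product weight matrix with recall driven by activity spreading
# outward from one or more stimulated neurons.

def train_hebbian(memories):
    """Outer-product Hebbian learning over bipolar (+1/-1) memory vectors."""
    w = memories.T @ memories / memories.shape[0]
    np.fill_diagonal(w, 0.0)          # no self-connections
    return w

def recall(w, cue, stim_sites):
    """Spread activation from the stimulated sites until the state is stable.

    cue        : bipolar vector, possibly noisy or partial
    stim_sites : indices of the neurons stimulated first (one or several)
    """
    n = len(cue)
    state = cue.copy()
    active = set(stim_sites)          # the activity front grows from the stimuli
    for _ in range(n):                # bounded number of update sweeps
        prev = state.copy()
        for i in sorted(active):
            state[i] = 1 if w[i] @ state >= 0 else -1
        # the front spreads to neighbours reached by non-zero connections
        active |= {j for i in list(active) for j in np.flatnonzero(w[i])}
        if np.array_equal(state, prev) and len(active) == n:
            return state              # equilibrium reached
    return state

# Usage: compare single-point vs. multi-point stimulation on a noisy cue.
rng = np.random.default_rng(0)
memories = rng.choice([-1, 1], size=(3, 32))
w = train_hebbian(memories)
noisy = memories[0].copy()
noisy[:6] *= -1                        # corrupt part of the stored memory
single = recall(w, noisy, stim_sites=[0])
multi = recall(w, noisy, stim_sites=[0, 10, 20])
print("single-point overlap:", int(single @ memories[0]))
print("multi-point  overlap:", int(multi @ memories[0]))
```

With several stimulation sites the activity fronts intersect earlier, which loosely mirrors the faster equilibration the abstract attributes to multi-point stimulation.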
