Learning Continuous Attractors in Recurrent Networks

@inproceedings{Seung1997LearningCA,
  title={Learning Continuous Attractors in Recurrent Networks},
  author={H. Sebastian Seung},
  booktitle={NIPS},
  year={1997}
}

One approach to invariant object recognition employs a recurrent neural network as an associative memory. In the standard depiction of the network's state space, memories of objects are stored as attractive fixed points of the dynamics. I argue for a modification of this picture: if an object has a continuous family of instantiations, it should be represented by a continuous attractor. This idea is illustrated with a network that learns to complete patterns. To perform the task of filling in…
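
To make the continuous-attractor picture concrete, below is a minimal hand-wired sketch, not the learned network from the paper: rate neurons on a ring with translation-invariant cosine connectivity support a bump of activity centered at any angle, so the stable states form a one-dimensional continuum rather than isolated fixed points. A partial cue is filled in to a complete bump that persists after the input is removed. The network size, coupling strength, relaxation routine, and cue are illustrative assumptions, not values taken from the paper.

# Minimal sketch of a continuous (ring) attractor doing pattern completion.
# Not the paper's learned model; all parameter values are assumed.

import numpy as np

N = 100                                            # neurons around the ring
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)

# Rotation-symmetric weights: a bump centered at any angle is (approximately)
# a fixed point, so the fixed points form a one-dimensional continuum.
J1 = 3.0                                           # cosine coupling strength (assumed)
W = (J1 / N) * np.cos(theta[:, None] - theta[None, :])

def relax(r, I, steps=800, dt=0.1):
    """Iterate the rate dynamics  dr/dt = -r + tanh(W r + I)  toward a fixed point."""
    for _ in range(steps):
        r = r + dt * (-r + np.tanh(W @ r + I))
    return r

# Partial cue: only neurons within 0.3 rad of theta = pi/2 receive input;
# the rest of the pattern is "missing" and must be filled in by the dynamics.
cue = np.where(np.abs(theta - np.pi / 2) < 0.3, 1.0, 0.0)

r = relax(np.zeros(N), cue)        # relax while the partial cue is clamped
r = relax(r, np.zeros(N))          # remove the cue; the completed bump persists

print("completed bump is centered near angle:", theta[np.argmax(r)])

Because the connectivity is invariant under rotations of the ring, the bump can sit at any angle with (nearly) equal energy, which is the discrete-network analogue of the continuous attractor the abstract describes; the paper itself obtains such a structure by training on the completion task rather than by hand-wiring it.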