Learning internal representations by error propagation
- D. Rumelhart, G. Hinton, R. Williams
Learning representations by back-propagating errors
- D. Rumelhart, G. Hinton, R. Williams
Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector; as a result of these weight adjustments, the internal "hidden" units come to represent important features of the task domain.
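A minimal sketch of the procedure that abstract describes, assuming a two-layer network of sigmoid units trained on squared error; the function name, learning rate, and network shape here are illustrative choices, not details from the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def backprop_step(x, target, W1, W2, lr=0.1):
    """One weight adjustment on squared error for a two-layer sigmoid net."""
    # Forward pass: input -> hidden -> output.
    h = sigmoid(W1 @ x)
    y = sigmoid(W2 @ h)

    # Measure of the difference between actual and desired output vectors.
    err = y - target

    # Backward pass: propagate the error through the sigmoid derivatives.
    delta_out = err * y * (1.0 - y)                  # dE/d(net) at the output
    delta_hid = (W2.T @ delta_out) * h * (1.0 - h)   # dE/d(net) at the hidden layer

    # Adjust each weight against its gradient so the error shrinks.
    W2 -= lr * np.outer(delta_out, h)
    W1 -= lr * np.outer(delta_hid, x)
    return 0.5 * float(err @ err)

# Usage: repeated steps drive the output toward the desired vector, and the
# hidden units' incoming weights come to encode features of the mapping.
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(3, 2))   # hidden x input
W2 = rng.normal(scale=0.5, size=(1, 3))   # output x hidden
x, t = np.array([0.0, 1.0]), np.array([1.0])
for _ in range(500):
    loss = backprop_step(x, t, W1, W2)
```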
An interactive activation model of context effects in letter perception: I. An account of basic findings.
- J. McClelland, D. Rumelhart
Parallel distributed processing: explorations in the microstructure of cognition, vol. 1: foundations
- D. Rumelhart, J. McClelland, PDP Research Group (eds.)
The fundamental principles, basic mechanisms, and formal analyses involved in the development of parallel distributed processing (PDP) systems are presented in individual chapters contributed by…
Forward Models: Supervised Learning with a Distal Teacher
- M. Jordan, D. Rumelhart
This article demonstrates that certain classical problems associated with the notion of the "teacher" in supervised learning can be solved by judicious use of learned internal models as components of the adaptive system.
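A linear toy sketch of that idea, assuming an unknown environment that turns "proximal" actions into "distal" outcomes: a forward model of the environment is fit first, and the distal error is then passed back through the frozen model to train the controller. All names and constants below are illustrative, not from the article:

```python
import numpy as np

rng = np.random.default_rng(0)
E = rng.normal(size=(2, 3))          # true environment map (unknown to the learner)

def environment(action):
    """Distal outcome of a proximal action; the learner can only sample it."""
    return E @ action

# Stage 1: fit a forward model M of the environment from random actions.
M = np.zeros((2, 3))
for _ in range(2000):
    a = rng.normal(size=3)
    e = M @ a - environment(a)       # prediction error in distal coordinates
    M -= 0.01 * np.outer(e, a)

# Stage 2: train a controller C with the learned model standing in as the
# "teacher": only a distal target exists, never a target action, so the
# distal error is propagated back through the frozen M.
C = np.zeros((3, 2))
for _ in range(2000):
    goal = rng.normal(size=2)        # desired distal outcome
    action = C @ goal
    e = M @ action - goal            # distal error predicted by the model
    C -= 0.01 * np.outer(M.T @ e, goal)   # gradient through M, not through E

goal = np.array([0.5, -1.0])
print(environment(C @ goal))         # should land close to the goal
```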
Parallel Distributed Processing: Explorations in the Microstructure of Cognition
In decades to come, perhaps 1986 will be remembered by academics as the year of publication of the pair of volumes reviewed here: they constitute the first large-scale public statement of an intellectual paradigm fully as revolutionary as the generative paradigm ever was.
Toward an interactive model of reading.
- D. Rumelhart
This chapter adapts a formalism developed in the context of parallel computation to the specification of a model for reading and shows that such a model can account in a convenient way for those aspects of reading that appear puzzling in the context of more linear, stage-oriented models.
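The interactive-activation dynamics behind this line of models can be sketched in a few lines; the constants below (resting level, decay, min/max bounds) are illustrative stand-ins, not the published parameter values:

```python
import numpy as np

MAX, MIN, REST, DECAY = 1.0, -0.2, 0.0, 0.1

def ia_update(a, W, ext, dt=0.1):
    """One interactive-activation step for units with activations a,
    connection weights W (positive = excitatory, negative = inhibitory),
    and bottom-up external input ext."""
    # Only units active above zero send output to their neighbours.
    net = W @ np.maximum(a, 0.0) + ext
    # Excitatory net input drives a unit toward MAX, inhibitory toward MIN,
    # while decay pulls every unit back toward its resting level.
    drive = np.where(net > 0.0, net * (MAX - a), net * (a - MIN))
    return np.clip(a + dt * (drive - DECAY * (a - REST)), MIN, MAX)

# Usage: two mutually inhibitory "word" units; the unit with stronger
# bottom-up support settles at a higher activation as the interaction runs.
W = np.array([[0.0, -0.2],
              [-0.2, 0.0]])
a = np.full(2, REST)
for _ in range(200):
    a = ia_update(a, W, ext=np.array([0.3, 0.2]))
print(a)
```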
Notes on a schema for stories
- D. Rumelhart