Methods for integrating memory into neural networks applied to condition monitoring

Abstract

A common criticism of neural network architectures is their susceptibility to “catastrophic interference”: the tendency to forget previously learned data when presented with new patterns. To avoid this, neural network architectures have been developed which specifically provide the network with a memory, either through the use of a context unit, which can store patterns for later recall, or through high levels of recurrence coupled with some form of backpropagation. We have evaluated two architectures which utilise these concepts, namely Hopfield and Elman networks respectively, and compared their performance with self-organising feature maps using time-smoothed moving-average data and with time-delay neural networks. Our results indicate clear improvements in performance for networks incorporating memory into their structure. However, the degree of improvement depends largely upon the architecture used and the provision of a context layer for the storage and recall of patterns.
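As a concrete illustration of the context-layer idea described above, the sketch below shows a minimal Elman-style forward pass in Python with NumPy: the previous hidden state is copied into a context layer and fed back as extra input at the next time step. The layer sizes, weight initialisation, and data here are hypothetical choices for illustration, not details taken from the paper.

```python
import numpy as np

# Minimal Elman-style recurrent layer: the hidden state from the previous
# time step is copied into a "context" layer and fed back alongside the
# current input, giving the network a simple memory of past patterns.
rng = np.random.default_rng(0)

n_in, n_hidden, n_out = 3, 8, 1          # illustrative layer sizes
W_in  = rng.normal(scale=0.1, size=(n_hidden, n_in))      # input  -> hidden
W_ctx = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # context -> hidden
W_out = rng.normal(scale=0.1, size=(n_out, n_hidden))     # hidden -> output

def forward(sequence):
    """Run one pass over a sequence; returns one output per time step."""
    context = np.zeros(n_hidden)          # context layer starts empty
    outputs = []
    for x in sequence:
        # Hidden activation depends on the current input AND the stored
        # context (previous hidden state) -- this is the network's memory.
        hidden = np.tanh(W_in @ x + W_ctx @ context)
        outputs.append(W_out @ hidden)
        context = hidden.copy()           # copy hidden state into context
    return np.asarray(outputs)

# Example: a short synthetic 3-feature condition-monitoring sequence.
seq = rng.normal(size=(5, n_in))
print(forward(seq).shape)                 # (5, 1): one output per time step
```

In a trained Elman network the weights would be learned with backpropagation; the point of the sketch is only the feedback path from the context layer, which distinguishes these architectures from feedforward networks such as time-delay networks or self-organising feature maps.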


Cite this paper

@inproceedings{Addison2002MethodsFI,
  title={Methods for integrating memory into neural networks applied to condition monitoring},
  author={J. F. Dale Addison and Stefan Wermter and Kenneth J. McGarry and John MacIntyre},
  year={2002}
}