A Regularizer Approach for RBF Networks Under the Concurrent Weight Failure Situation
Biological neural networks are spectacularly more energy efficient than currently available man-made, transistor-based information processing units. Moreover, biological systems do not fail catastrophically when subjected to physical damage; instead, their performance degrades in proportion to the damage sustained. Hardware neural networks therefore promise great advantages in information processing tasks that are inherently parallel or that must run in environments where the processing unit is susceptible to physical damage. This paper, intended for hardware neural network applications, analyses the performance degradation of several artificial neural network architectures subjected to ‘stuck-at-0’ and ‘stuck-at-1’ faults. The study aims to determine whether a fixed number of neurons should be placed in a single hidden layer or distributed across multiple hidden layers. Faults are injected into the input and hidden layer(s), and the analysis covers unoptimized and optimized, feedforward and recurrent networks trained on uncorrelated and correlated data sets. Networks with one, two, three, and four hidden layers are compared quantitatively. The main finding is that ‘stuck-at-0’ faults injected into the input layer cause the least performance degradation in networks with multiple hidden layers, whereas for ‘stuck-at-0’ faults affecting cells in the hidden layer(s), the architecture that sustains the least damage is that of a single hidden layer. When ‘stuck-at-1’ faults are applied to either the input or the hidden layers, the networks that offer the most resilience are those with multiple hidden layers. The study suggests that a hardware neural network architecture should be chosen according to the most likely type of damage the system will face, namely damage to the sensors or to the neural network itself.
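The fault model described above can be illustrated with a minimal sketch. The snippet below is a hypothetical NumPy implementation, not the paper's actual experimental code: it injects ‘stuck-at-0’ or ‘stuck-at-1’ faults into a layer's activations by forcing a random subset of units to a fixed value, and measures the resulting output degradation of a toy feedforward network. All weights, sizes, and fault rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def apply_stuck_at(activations, fault_rate, stuck_value, rng):
    """Force a random subset of units to a fixed value (a 'stuck-at' fault).

    stuck_value=0.0 models a 'stuck-at-0' fault; stuck_value=1.0 models
    'stuck-at-1'. fault_rate is the per-unit probability of being faulty.
    """
    mask = rng.random(activations.shape) < fault_rate
    faulty = activations.copy()
    faulty[mask] = stuck_value
    return faulty

def forward(x, w1, w2, fault_rate=0.0, stuck_value=0.0, rng=rng):
    """Toy single-hidden-layer feedforward pass with optional hidden faults."""
    hidden = np.tanh(x @ w1)
    hidden = apply_stuck_at(hidden, fault_rate, stuck_value, rng)
    return hidden @ w2

# Illustrative network: 4 inputs, 8 hidden units, 2 outputs.
w1 = rng.standard_normal((4, 8))
w2 = rng.standard_normal((8, 2))
x = rng.standard_normal((16, 4))

clean = forward(x, w1, w2)
faulty = forward(x, w1, w2, fault_rate=0.3, stuck_value=1.0)

# Mean squared deviation from the fault-free output as a degradation measure.
degradation = np.mean((clean - faulty) ** 2)
```

The same `apply_stuck_at` call can be placed on the input vector instead of the hidden activations to model sensor damage, which is the architectural comparison the abstract describes.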