Understanding internal representations and generalization properties in backpropagation networks


Summary form only given. Evidence is presented supporting the hypothesis that the behavior and properties of backpropagation (BP) networks with binary input/output values can be interpreted and predicted using propositional logic. It is first shown that any n-to-m mapping of binary values can be fully described by expressions of Boolean operators. When converted to conjunctive normal form, these expressions form the basis for designing near-minimally connected networks capable of computing any arbitrary mapping function. The authors then explore the proposition that, if such interpretations of BP network internal representations are correct, they should predict how networks generalize after being trained on a partial set of input/response patterns. Experimental data supporting this hypothesis are presented. Finally, the significance of these preliminary findings is discussed.
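The CNF-to-network construction the abstract alludes to can be illustrated with a minimal sketch. The example below is an assumption about the general idea, not the authors' exact design: each CNF clause becomes one hidden threshold unit (computing an OR over literals), and the output unit ANDs the clause units. XOR, whose CNF is (x1 OR x2) AND (NOT x1 OR NOT x2), serves as the mapping.

```python
# Hypothetical sketch of a CNF-derived threshold network for XOR;
# weights/thresholds are illustrative, not taken from the paper.
import numpy as np

def step(z):
    # Heaviside threshold activation: fires (1) when net input exceeds 0
    return (z > 0).astype(int)

def cnf_network(x):
    """Compute XOR via its CNF (x1 OR x2) AND (NOT x1 OR NOT x2).
    One hidden unit per clause; the output unit ANDs the clauses."""
    x = np.asarray(x, dtype=float)
    # Hidden layer: one threshold unit per CNF clause
    W_h = np.array([[ 1.0,  1.0],    # clause 1: x1 OR x2
                    [-1.0, -1.0]])   # clause 2: NOT x1 OR NOT x2
    b_h = np.array([-0.5, 1.5])      # biases set so each unit fires iff its clause is true
    h = step(W_h @ x + b_h)
    # Output layer: AND of the two clause units (fires only when both are 1)
    return int(step(np.array([1.0, 1.0]) @ h - 1.5))

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", cnf_network(x))
```

Because each clause maps to exactly one hidden unit, the network uses only as many hidden units as the CNF has clauses, which is the sense in which such constructions can be near-minimally connected.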

Cite this paper

@inproceedings{Kamangar1989UnderstandingIR,
  title     = {Understanding internal representations and generalization properties in backpropagation networks},
  author    = {Farhad A. Kamangar and Ladawna Leeth},
  booktitle = {International 1989 Joint Conference on Neural Networks},
  year      = {1989},
  pages     = {626 vol.2-}
}