Recovering a Feed-Forward Net From Its Output

@inproceedings{Fefferman1993RecoveringAF,
  title={Recovering a Feed-Forward Net From Its Output},
  author={Charles Fefferman and Scott Markel},
  booktitle={NIPS},
  year={1993}
}
We study feed-forward nets with arbitrarily many layers, using the standard sigmoid, tanh x. Aside from technicalities, our theorems are: 1. Complete knowledge of the output of a neural net for arbitrary inputs uniquely specifies the architecture, weights and thresholds; and 2. There are only finitely many critical points on the error surface for a generic training problem. Neural nets were originally introduced as highly simplified models of the nervous system. Today they are widely used in…
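The "technicalities" in Theorem 1 include the obvious symmetries of tanh nets: permuting hidden units, or flipping the sign of a hidden unit's incoming weights, threshold, and outgoing weight, leaves the input-output map unchanged, so "uniquely specifies" can only hold modulo these operations. A minimal sketch (the one-hidden-layer net `net` and its parameters are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical one-hidden-layer tanh net: f(x) = sum_j c_j * tanh(w_j * x + t_j).
def net(x, w, t, c):
    return np.tanh(np.outer(x, w) + t) @ c

rng = np.random.default_rng(0)
w, t, c = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)
x = np.linspace(-2.0, 2.0, 50)

# Permuting the hidden units does not change the input-output map.
p = [2, 0, 1]
assert np.allclose(net(x, w, t, c), net(x, w[p], t[p], c[p]))

# Since tanh is odd, negating (w_j, t_j, c_j) for any unit j also
# leaves the map unchanged: c_j*tanh(w_j*x + t_j) = (-c_j)*tanh(-w_j*x - t_j).
w2, t2, c2 = w.copy(), t.copy(), c.copy()
w2[0], t2[0], c2[0] = -w2[0], -t2[0], -c2[0]
assert np.allclose(net(x, w, t, c), net(x, w2, t2, c2))
```

The theorem's content is that, for the standard sigmoid, these are essentially the only ambiguities: beyond them, the output over all inputs pins down the architecture, weights, and thresholds.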


