Training with Noise is Equivalent to Tikhonov Regularization

Abstract

It is well known that the addition of noise to the input data of a neural network during training can, in some circumstances, lead to significant improvements in generalization performance. Previous work has shown that such training with noise is equivalent to a form of regularization in which an extra term is added to the error function. However, the regularization term, which involves second derivatives of the error function, is not bounded below, and so can lead to difficulties if used directly in a learning algorithm based on error minimization. In this paper we show that, for the purposes of network training, the regularization term can be reduced to a positive definite form which involves only first derivatives of the network mapping. For a sum-of-squares error function, the regularization term belongs to the class of generalized Tikhonov regularizers. Direct minimization of the regularized error function provides a practical alternative to training with noise.
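To make the claimed equivalence concrete: for a sum-of-squares error E and additive zero-mean input noise of variance eta^2, the small-noise result described above can be written (notation mine, not copied from the paper; the paper should be consulted for the precise conditions) as

```latex
\tilde{E} \;=\; E \;+\; \eta^{2}\,\Omega ,
\qquad
\Omega \;=\; \frac{1}{2} \int \sum_{k}
  \left\| \frac{\partial y_{k}}{\partial \mathbf{x}} \right\|^{2}
  p(\mathbf{x})\,\mathrm{d}\mathbf{x} ,
```

so the penalty Omega depends only on first derivatives of the network mapping y_k(x) and is non-negative by construction. In practice the integral over the input density is replaced by a sum over the training patterns.

The following is a minimal sketch, not the paper's code, contrasting the two training objectives on a toy regression network in JAX; all names (mlp, sse, sigma2, the toy data) are illustrative assumptions:

```python
import jax
import jax.numpy as jnp

def mlp(params, x):
    # One-hidden-layer network mapping a single input vector x to y.
    W1, b1, W2, b2 = params
    h = jnp.tanh(W1 @ x + b1)
    return W2 @ h + b2

def sse(params, xs, ts):
    # Plain sum-of-squares error over a batch of training patterns.
    preds = jax.vmap(lambda x: mlp(params, x))(xs)
    return 0.5 * jnp.sum((preds - ts) ** 2)

def noisy_sse(params, xs, ts, key, sigma2):
    # Objective 1: the ordinary error, evaluated on noise-corrupted inputs.
    noise = jnp.sqrt(sigma2) * jax.random.normal(key, xs.shape)
    return sse(params, xs + noise, ts)

def tikhonov_sse(params, xs, ts, sigma2):
    # Objective 2: the ordinary error plus the first-derivative penalty
    # (sigma2 / 2) * sum_n ||dy/dx||^2, i.e. the squared Frobenius norm
    # of the network's input Jacobian at each training pattern.
    per_pattern_jac = jax.vmap(jax.jacobian(lambda x: mlp(params, x)))(xs)
    penalty = 0.5 * sigma2 * jnp.sum(per_pattern_jac ** 2)
    return sse(params, xs, ts) + penalty

# Hypothetical usage on a toy problem.
key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
params = (0.5 * jax.random.normal(k1, (8, 2)), jnp.zeros(8),
          0.5 * jax.random.normal(k2, (1, 8)), jnp.zeros(1))
xs = jax.random.normal(k3, (32, 2))
ts = jnp.sin(xs[:, :1])
print(noisy_sse(params, xs, ts, k3, sigma2=0.01))
print(tikhonov_sse(params, xs, ts, sigma2=0.01))
```

For small sigma2, the gradient of noisy_sse averaged over many noise draws should agree with the gradient of tikhonov_sse to leading order; the deterministic regularized objective can then be minimized directly, which is the practical alternative to noise injection that the abstract describes.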

References

A. N. Tikhonov and V. Y. Arsenin. Solutions of Ill-posed Problems. V. H. Winston and Sons, 1977.

