Training with Noise is Equivalent to Tikhonov Regularization

Abstract

It is well known that the addition of noise to the input data of a neural network during training can, in some circumstances, lead to significant improvements in generalization performance. Previous work has shown that such training with noise is equivalent to a form of regularization in which an extra term is added to the error function. However, the regularization term, which involves second derivatives of the error function, is not bounded below, and so can lead to difficulties if used directly in a learning algorithm based on error minimization. In this paper we show that, for the purposes of network training, the regularization term can be reduced to a positive definite form which involves only first derivatives of the network mapping. For a sum-of-squares error function, the regularization term belongs to the class of generalized Tikhonov regularizers. Direct minimization of the regularized error function provides a practical alternative to training with noise.

1 Regularization

A feed-forward neural network can be regarded as a parametrized non-linear mapping from a d-dimensional input vector x = (x_1, ..., x_d) into a c-dimensional output vector y = (y_1, ..., y_c). Supervised training of the network involves minimization, with respect to the network parameters, of an error function defined in terms of a set of input vectors x and corresponding desired (or target) output vectors t. A common choice of error function is the sum-of-squares error of the form

E = (1/2) ∫∫ ‖y(x) − t‖² p(x, t) dx dt        (1)
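The reduced regularizer described in the abstract is a penalty on the squared first derivatives (the input Jacobian) of the network mapping, weighted by the variance of the input noise; up to notation, the regularized objective is roughly E + (σ²/2) · E[ Σ_k Σ_i (∂y_k/∂x_i)² ]. The sketch below illustrates this correspondence numerically on a finite sample. It is not code from the paper: the MLP architecture, the initialization, the noise variance, and the names mlp, sum_of_squares, tikhonov_regularized, and noisy_training_loss are illustrative assumptions. It is written in JAX so that the input Jacobian can be obtained directly with jax.jacfwd.

import jax
import jax.numpy as jnp

def mlp(params, x):
    # A small tanh network mapping a d-dimensional input to a c-dimensional output.
    W1, b1, W2, b2 = params
    return W2 @ jnp.tanh(W1 @ x + b1) + b2

def sum_of_squares(params, xs, ts):
    # Finite-sample version of the sum-of-squares error in Eq. (1).
    preds = jax.vmap(lambda x: mlp(params, x))(xs)
    return 0.5 * jnp.mean(jnp.sum((preds - ts) ** 2, axis=-1))

def tikhonov_regularized(params, xs, ts, sigma2):
    # Sum-of-squares error plus a first-derivative (Jacobian) penalty,
    # weighted by the input-noise variance sigma2.
    def jac_penalty(x):
        J = jax.jacfwd(lambda u: mlp(params, u))(x)  # Jacobian dy/dx, shape (c, d)
        return jnp.sum(J ** 2)                       # squared Frobenius norm
    penalty = jnp.mean(jax.vmap(jac_penalty)(xs))
    return sum_of_squares(params, xs, ts) + 0.5 * sigma2 * penalty

def noisy_training_loss(params, xs, ts, sigma2, key, n_samples=1000):
    # Monte Carlo estimate of the expected error when zero-mean Gaussian
    # noise of variance sigma2 is added to the inputs during training.
    eps = jnp.sqrt(sigma2) * jax.random.normal(key, (n_samples,) + xs.shape)
    losses = jax.vmap(lambda e: sum_of_squares(params, xs + e, ts))(eps)
    return jnp.mean(losses)

# For small sigma2 the two objectives should be close; they differ by the
# higher-order and second-derivative terms discussed in the paper.
key = jax.random.PRNGKey(0)
k1, k2, k3, k4 = jax.random.split(key, 4)
d, h, c = 3, 8, 2
params = (0.1 * jax.random.normal(k1, (h, d)), jnp.zeros(h),
          0.1 * jax.random.normal(k2, (c, h)), jnp.zeros(c))
xs = jax.random.normal(k3, (64, d))
ts = jnp.sin(xs[:, :c])
print(tikhonov_regularized(params, xs, ts, 1e-2))
print(noisy_training_loss(params, xs, ts, 1e-2, k4))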

Cite this paper

@inproceedings{Bishop1994TrainingWN,
  title  = {Training with Noise is Equivalent to Tikhonov Regularization},
  author = {Christopher M. Bishop},
  year   = {1994}
}