
Early stopping

In machine learning, early stopping is a form of regularization used to avoid overfitting when training a learner with an iterative method, such as…
Wikipedia
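The definition above describes early stopping as halting iterative training before overfitting sets in. A minimal sketch of the common patience-based variant (function and parameter names are hypothetical, not taken from any paper listed below):

```python
def train_with_early_stopping(losses, patience=3):
    """Given a sequence of per-epoch validation losses, return
    (stop_epoch, best_epoch). Training stops once the validation loss
    has failed to improve on the best value for `patience` epochs."""
    best = float("inf")
    best_epoch = 0
    for epoch, loss in enumerate(losses):
        if loss < best:
            best = loss
            best_epoch = epoch
        elif epoch - best_epoch >= patience:
            return epoch, best_epoch  # stop: no improvement for `patience` epochs
    return len(losses) - 1, best_epoch  # training budget exhausted
```

In practice the weights from `best_epoch` are restored, so the validation minimum, not the final iterate, is what the procedure returns.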

Papers overview

Semantic Scholar uses AI to extract papers important to this topic.
Highly Cited
2019
The effort devoted to hand-crafting neural network image classifiers has motivated the use of architecture search to discover…
Highly Cited
2012
  • L. Prechelt
  • Neural Networks: Tricks of the Trade
  • 2012
  • Corpus ID: 14049040
Validation can be used to detect when overfitting starts during supervised training of a neural network; training is then stopped…
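Prechelt's chapter formalizes concrete stopping criteria rather than stopping at the first uptick in validation error. The sketch below is a hedged reconstruction of its generalization-loss idea, comparing the current validation error against the best seen so far, and is not a verbatim transcription of the chapter's formulas:

```python
def generalization_loss(val_errors):
    """GL(t) = 100 * (E_va(t) / E_opt(t) - 1), where E_opt(t) is the
    lowest validation error observed up to epoch t. GL measures, in
    percent, how far validation error has risen above its minimum."""
    e_opt = float("inf")
    gl = []
    for e in val_errors:
        e_opt = min(e_opt, e)
        gl.append(100.0 * (e / e_opt - 1.0))
    return gl

def stop_epoch(val_errors, alpha=5.0):
    """First epoch whose generalization loss exceeds alpha, else None.
    Larger alpha trades longer training for a better chance of finding
    a later, deeper validation minimum."""
    for t, g in enumerate(generalization_loss(val_errors)):
        if g > alpha:
            return t
    return None
```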
Highly Cited
2011
The goal of non-parametric regression is to estimate an unknown function f∗ based on n i.i.d. observations of the form…
  • figure 1
  • figure 2
  • figure 3
  • figure 4
Highly Cited
2010
Model selection strategies for machine learning algorithms typically involve the numerical optimisation of an appropriate model…
Highly Cited
2008
Sampling is a popular way of scaling up machine learning algorithms to large datasets. The question often is how many samples are…
Highly Cited
2006
Many of the classification algorithms developed in the machine learning literature, including the support vector machine and…
Highly Cited
2005
Boosting is one of the most significant advances in machine learning for classification and regression. In its original and…
Highly Cited
2005
We introduce a new iterative regularization procedure for inverse problems based on the use of Bregman distances, with particular…
Highly Cited
2000
The conventional wisdom is that backprop nets with excess hidden units generalize poorly. We show that nets with excess capacity…
Highly Cited
1982
An iterative method is given for solving Ax = b and min ‖Ax − b‖₂, where the matrix A is large and sparse. The method is based…
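The 1982 paper describes an iterative solver for large sparse least-squares problems. To illustrate why this topic page lists such solvers under early stopping, the sketch below uses the much simpler Landweber iteration (not the paper's algorithm): truncating the gradient iterations early acts as a regularizer for ill-conditioned systems, since the low-frequency components of the solution converge first.

```python
def landweber(A, b, tau, iters):
    """Landweber iteration x_{k+1} = x_k + tau * A^T (b - A x_k) for
    min ||Ax - b||_2. Pure-Python sketch for small dense A; stopping at
    a finite `iters` is what provides the regularization."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = b - A x
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        # gradient step x += tau * A^T r
        for j in range(n):
            x[j] += tau * sum(A[i][j] * r[i] for i in range(m))
    return x
```

The step size tau must satisfy tau < 2 / ||A||² for the iteration to converge; in the ill-posed setting one deliberately stops well before convergence.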