Early stopping
In machine learning, early stopping is a form of regularization used to avoid overfitting when training a learner with an iterative method, such as…
Source: Wikipedia
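To make the definition concrete, here is a minimal sketch of patience-based early stopping, the most common variant; all four callables (`train_one_epoch`, `validation_loss`, `snapshot`, `restore`) are hypothetical placeholders, not taken from any paper listed below.

```python
def train_with_early_stopping(train_one_epoch, validation_loss,
                              snapshot, restore,
                              max_epochs=1000, patience=10):
    """All four arguments are hypothetical placeholder callables:
    train_one_epoch() runs one pass over the training data,
    validation_loss() returns the error on held-out data,
    snapshot()/restore(state) save and reload model weights."""
    best_loss = float("inf")
    best_state = None
    bad_epochs = 0
    for epoch in range(max_epochs):
        train_one_epoch()
        val_loss = validation_loss()
        if val_loss < best_loss:
            # New best validation loss: remember it and reset the counter.
            best_loss, best_state, bad_epochs = val_loss, snapshot(), 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break  # no improvement for `patience` epochs: stop training
    restore(best_state)  # roll back to the best model seen
    return best_loss
```

The `patience` parameter trades off tolerance of noisy validation curves against wasted epochs; restoring the snapshot ensures the returned model is the one with the lowest observed validation loss, not the last one trained.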
Related topics
13 relations (first four shown):
AdaBoost
Artificial neural network
Boosting (machine learning)
Cross-validation (statistics)
Broader (1): Machine learning
Papers overview
Semantic Scholar uses AI to extract papers important to this topic.
Highly Cited · 2019
Regularized Evolution for Image Classifier Architecture Search
E. Real, A. Aggarwal, Y. Huang, Quoc V. Le. AAAI, 2019. Corpus ID: 3640974
The effort devoted to hand-crafting neural network image classifiers has motivated the use of architecture search to discover…
Highly Cited · 2012
Early Stopping - But When?
L. Prechelt. Neural Networks: Tricks of the Trade, 2012. Corpus ID: 14049040
Validation can be used to detect when overfitting starts during supervised training of a neural network; training is then stopped…
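The snippet above is cut off, but this paper is best known for formalizing concrete stopping criteria. A hedged sketch of one such criterion, generalization loss (the notation and threshold are an illustration in the paper's spirit, not a quotation of its algorithm):

```python
# Sketch of a "generalization loss" stopping criterion in the spirit of
# Prechelt's paper: stop once the current validation error has risen more
# than `alpha` percent above the best validation error seen so far.
# Assumption-laden illustration; the paper's exact criteria differ in detail.

def generalization_loss(val_errors):
    """GL(t) = 100 * (E_va(t) / min_{t' <= t} E_va(t') - 1)."""
    best = min(val_errors)
    return 100.0 * (val_errors[-1] / best - 1.0)

def should_stop(val_errors, alpha=5.0):
    return generalization_loss(val_errors) > alpha
```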
Highly Cited · 2011
Early stopping for non-parametric regression: An optimal data-dependent stopping rule
G. Raskutti, M. Wainwright, B. Yu. 49th Annual Allerton Conference on Communication…, 2011. Corpus ID: 7740214
The goal of non-parametric regression is to estimate an unknown function f* based on n i.i.d. observations of the form…
Highly Cited · 2010
On Over-fitting in Model Selection and Subsequent Selection Bias in Performance Evaluation
G. Cawley, N. Talbot. J. Mach. Learn. Res., 2010. Corpus ID: 1858029
Model selection strategies for machine learning algorithms typically involve the numerical optimisation of an appropriate model…
Highly Cited · 2008
Empirical Bernstein stopping
V. Mnih, Csaba Szepesvari, J. Audibert. ICML '08, 2008. Corpus ID: 215753448
Sampling is a popular way of scaling up machine learning algorithms to large datasets. The question often is how many samples are…
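The snippet stops before the rule itself; as a rough illustration of the empirical-Bernstein flavor of stopping the title refers to (the bound's form and constants below follow the commonly cited empirical Bernstein inequality and are an assumption here, not a quotation of the paper's EBStop algorithm):

```python
import math

# Keep sampling until an empirical-Bernstein-style confidence interval
# around the running mean is narrower than `epsilon`. `value_range` is
# the range of the bounded random variable being sampled.

def eb_halfwidth(samples, value_range, delta):
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n  # empirical variance
    log_term = math.log(3.0 / delta)
    return (math.sqrt(2.0 * var * log_term / n)
            + 3.0 * value_range * log_term / n)

def sample_until_confident(draw, value_range, epsilon=0.01, delta=0.05):
    # Note: a rigorous sequential test would split delta across the
    # successive checks; this sketch applies the fixed-n bound naively.
    samples = [draw(), draw()]
    while eb_halfwidth(samples, value_range, delta) > epsilon:
        samples.append(draw())
    return sum(samples) / len(samples)
```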
Highly Cited · 2006
Convexity, Classification, and Risk Bounds
P. Bartlett, Michael I. Jordan, J. McAuliffe. 2006. Corpus ID: 2833811
Many of the classification algorithms developed in the machine learning literature, including the support vector machine and…
Highly Cited · 2005
Boosting with early stopping: Convergence and consistency
T. Zhang, B. Yu. 2005. Corpus ID: 13158356
Boosting is one of the most significant advances in machine learning for classification and regression. In its original and…
Highly Cited · 2005
An Iterative Regularization Method for Total Variation-Based Image Restoration
S. Osher, M. Burger, D. Goldfarb, J. Xu, Wotao Yin. Multiscale Model. Simul., 2005. Corpus ID: 618185
We introduce a new iterative regularization procedure for inverse problems based on the use of Bregman distances, with particular…
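For reference, the Bregman distance the abstract alludes to is standardly defined for a convex functional J and a subgradient p ∈ ∂J(v) as (textbook definition, not quoted from the paper):

```latex
D_J^{p}(u, v) \;=\; J(u) - J(v) - \langle p,\, u - v \rangle,
\qquad p \in \partial J(v).
```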
Highly Cited · 2000
Overfitting in Neural Nets: Backpropagation, Conjugate Gradient, and Early Stopping
R. Caruana, S. Lawrence, C. Lee Giles. NIPS, 2000. Corpus ID: 7365231
The conventional wisdom is that backprop nets with excess hidden units generalize poorly. We show that nets with excess capacity…
Highly Cited · 1982
LSQR: An Algorithm for Sparse Linear Equations and Sparse Least Squares
C. Paige, M. Saunders. TOMS, 1982. Corpus ID: 21774
An iterative method is given for solving Ax ≈ b and min ‖Ax − b‖₂, where the matrix A is large and sparse. The method is based…
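SciPy's scipy.sparse.linalg.lsqr is based on this algorithm; a small usage sketch follows (the problem data are synthetic, and the iter_lim cap is chosen arbitrarily to show how truncating the iteration ties LSQR back to the early-stopping theme, since stopping an iterative solver early acts as a form of regularization for ill-posed problems):

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsqr

# Solve min ||Ax - b||_2 for a large, sparse A, capping the iterations.
rng = np.random.default_rng(0)
A = sparse_random(1000, 200, density=0.01, random_state=0)
b = rng.standard_normal(1000)

x, istop, itn, r1norm = lsqr(A, b, iter_lim=50)[:4]
print(f"stop code {istop} after {itn} iterations, residual {r1norm:.4f}")
```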