Semantic Scholar uses AI to extract papers important to this topic.

Highly Cited · 2017

Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training…

Highly Cited · 2005

A common assumption in supervised learning is that the training and test input points follow the same probability distribution…

Highly Cited · 2004

In order to compare learning algorithms, experimental results reported in the machine learning literature often use statistical…

Highly Cited · 2004

We prove generalization error bounds for predicting entries in a partially observed matrix by fitting the observed entries with a…

Highly Cited · 2004

Bagging (Breiman, 1994a) is a technique that tries to improve a learning algorithm's performance by using bootstrap replicates of…
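The bagging entry above describes training base models on bootstrap replicates of the training set and aggregating their outputs. A minimal sketch of that idea, assuming a classification setting with majority-vote aggregation; the base learner, data, and all function names here are illustrative, not from the cited paper:

```python
import random
from collections import Counter

def train_stump(sample):
    # Trivial base learner: split at the mean input, predict the
    # majority label observed on each side of the split.
    t = sum(x for x, _ in sample) / len(sample)
    def majority(ys):
        return Counter(ys).most_common(1)[0][0] if ys else 0
    hi = majority([y for x, y in sample if x > t])
    lo = majority([y for x, y in sample if x <= t])
    return lambda x: hi if x > t else lo

def bag(data, n_models, train, seed=0):
    # Bagging: fit each base model on a bootstrap replicate, i.e. a
    # sample of len(data) points drawn with replacement.
    rng = random.Random(seed)
    n = len(data)
    return [train([rng.choice(data) for _ in range(n)])
            for _ in range(n_models)]

def predict(models, x):
    # Aggregate the ensemble by majority vote.
    return Counter(m(x) for m in models).most_common(1)[0][0]

# Toy 1-D dataset: (input, label) pairs.
data = [(0.1, 0), (0.2, 0), (0.3, 0), (0.7, 1), (0.8, 1), (0.9, 1)]
models = bag(data, 25, train_stump)
```

Because each replicate resamples with replacement, individual stumps vary, and the vote averages out much of that variance, which is the performance gain the snippet alludes to.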

Highly Cited · 2003

Bayesian approaches to learning and estimation have played a significant role in the Statistics literature over many years. While…

Highly Cited · 2002

We define notions of stability for learning algorithms and show how to use these notions to derive generalization error bounds…

Highly Cited · 2002

We prove new probabilistic upper bounds on generalization error of complex classifiers that are combinations of simple…

Highly Cited · 1996

It has been empirically shown that a better estimate with less generalization error can be obtained by averaging outputs of…

Highly Cited · 1992

This paper introduces stacked generalization, a scheme for minimizing the generalization error rate of one or more generalizers…
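The 1992 entry describes stacked generalization: a level-1 model is trained on out-of-fold predictions of level-0 models, so the combiner learns from honest errors rather than memorized fits. A minimal sketch under assumed simplifications — two toy base learners and a select-the-best combiner stand in for Wolpert's more general scheme, and every name here is hypothetical:

```python
import random

def train_threshold(sample):
    # Base learner A: predict 1 iff x exceeds the sample's mean input.
    t = sum(x for x, _ in sample) / len(sample)
    return lambda x: 1 if x > t else 0

def train_1nn(sample):
    # Base learner B: label of the nearest training point.
    pts = list(sample)
    return lambda x: min(pts, key=lambda p: abs(p[0] - x))[1]

def select_best(rows):
    # Level-1 generalizer: from (base_predictions, true_label) rows,
    # keep the single base model with the best out-of-fold accuracy.
    n_models = len(rows[0][0])
    accs = [sum(p[j] == y for p, y in rows) for j in range(n_models)]
    return lambda preds, j=accs.index(max(accs)): preds[j]

def stack(data, base_learners, meta_learner, k=3, seed=0):
    # Build the level-1 training set: for each point, predictions from
    # base models trained on folds that exclude that point.
    rng = random.Random(seed)
    idx = list(range(len(data)))
    rng.shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    rows = []
    for fold in folds:
        train_part = [data[i] for i in idx if i not in fold]
        models = [learn(train_part) for learn in base_learners]
        rows.extend(([m(data[i][0]) for m in models], data[i][1])
                    for i in fold)
    combiner = meta_learner(rows)
    # Refit the base models on all of the data for deployment.
    final = [learn(data) for learn in base_learners]
    return lambda x: combiner([m(x) for m in final])

# Toy 1-D dataset: (input, label) pairs.
data = [(0.1, 0), (0.2, 0), (0.3, 0), (0.7, 1), (0.8, 1), (0.9, 1)]
stacked = stack(data, [train_threshold, train_1nn], select_best)
```

The cross-validation-style folds are the key point: if the combiner instead saw base predictions on the base models' own training data, it would systematically underestimate their errors.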