
- Yoav Freund, Robert E. Schapire
- EuroCOLT
- 1995

In the first part of the paper we consider the problem of dynamically apportioning resources among a set of options in a worst-case on-line framework. The model we study can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting. We show that the multiplicative weight-update rule of…
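The multiplicative weight-update rule the abstract refers to can be sketched in a few lines (a minimal illustration, not the paper's exact Hedge algorithm; the loss sequence and learning rate `eta` below are hypothetical):

```python
import math

def hedge(loss_rounds, eta=0.5):
    """Multiplicative weight-update sketch: keep one weight per option and
    shrink each weight exponentially in that option's observed loss."""
    w = [1.0] * len(loss_rounds[0])
    for losses in loss_rounds:
        w = [wi * math.exp(-eta * l) for wi, l in zip(w, losses)]
    s = sum(w)
    return [wi / s for wi in w]     # final resource allocation

# hypothetical losses: option 0 always loses, option 1 never does
weights = hedge([[1, 0]] * 10)
```

After ten rounds nearly all of the allocation has shifted to the option that never incurred a loss.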

- Yoav Freund, Robert E. Schapire
- ICML
- 1996

In an earlier paper, we introduced a new "boosting" algorithm called AdaBoost which, theoretically, can be used to significantly reduce the error of any learning algorithm that consistently generates classifiers whose performance is a little better than random guessing. We also introduced the related notion of a "pseudo-loss" which is a method for…
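The reweighting idea behind AdaBoost can be sketched as follows (an illustrative implementation, not the authors' original pseudocode; the 1-D data and threshold stumps are hypothetical):

```python
import math

def adaboost(X, y, stumps, T=10):
    """AdaBoost sketch: keep a weight per training example, repeatedly pick
    the weak learner with lowest weighted error, and multiplicatively boost
    the weights of the examples it gets wrong."""
    n = len(X)
    D = [1.0 / n] * n
    ensemble = []                                      # (alpha, stump) pairs
    for _ in range(T):
        errs = [sum(d for d, x, yi in zip(D, X, y) if h(x) != yi)
                for h in stumps]
        eps, h = min(zip(errs, stumps), key=lambda t: t[0])
        if eps >= 0.5:                 # no stump beats random guessing: stop
            break
        alpha = 0.5 * math.log((1 - eps) / max(eps, 1e-12))
        ensemble.append((alpha, h))
        D = [d * math.exp(-alpha * yi * h(x)) for d, x, yi in zip(D, X, y)]
        Z = sum(D)
        D = [d / Z for d in D]                         # renormalize
    return lambda x: 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1

# hypothetical 1-D data (label +1 above 1.5) and threshold stumps
X, y = [0, 1, 2, 3], [-1, -1, 1, 1]
stumps = [lambda x, t=t: 1 if x > t else -1 for t in (0.5, 1.5, 2.5)]
predict = adaboost(X, y, stumps)
```

The final classifier is a weighted majority vote over the selected stumps.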

- Yoav Freund, Raj D. Iyer, Robert E. Schapire, Yoram Singer
- Journal of Machine Learning Research
- 1998

We study the problem of learning to accurately rank a set of objects by combining a given collection of ranking or preference functions. This problem of combining preferences arises in several applications, such as that of combining the results of different search engines, or the "collaborative-filtering" problem of ranking movies for a user based on the…
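A much simpler stand-in for combining preference functions is a Borda-style point count (this is not the paper's boosting-based method, just a minimal illustration of the problem setting; the item names are hypothetical):

```python
def combine_rankings(rankings):
    """Borda-style combiner: each input ranking awards an item points by
    position (top gets the most); sum the points and re-rank."""
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for pos, item in enumerate(ranking):
            scores[item] = scores.get(item, 0) + (n - pos)
    return sorted(scores, key=scores.get, reverse=True)

# two hypothetical search engines that both rank "c" last
combined = combine_rankings([["a", "b", "c"], ["b", "a", "c"]])
```

The combined ranking respects the preferences the input rankings agree on, here that "c" belongs at the bottom.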

- Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, Robert E. Schapire
- SIAM J. Comput.
- 2002

In the multiarmed bandit problem, a gambler must decide which arm of K non-identical slot machines to play in a sequence of trials so as to maximize his reward. This classical problem has received much attention because of the simple model it provides of the trade-off between exploration (trying out each arm to find the best one) and exploitation (playing…
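The exploration/exploitation trade-off described here can be sketched with an Exp3-style strategy (a simplified sketch under assumed Bernoulli rewards, not the paper's exact algorithm or analysis):

```python
import math, random

def exp3(pull, K, T, gamma=0.1):
    """Exp3-style bandit play (sketch): mix exponentially-weighted arm choice
    with uniform exploration; `pull(i)` is an assumed environment returning
    a reward in [0, 1] for arm i."""
    w = [1.0] * K
    total_reward = 0.0
    for _ in range(T):
        s = sum(w)
        # exploitation via the weights, plus gamma/K exploration on every arm
        p = [(1 - gamma) * wi / s + gamma / K for wi in w]
        i = random.choices(range(K), weights=p)[0]
        x = pull(i)
        total_reward += x
        # importance-weighted reward estimate keeps the update unbiased
        w[i] *= math.exp(gamma * (x / p[i]) / K)
    return total_reward

random.seed(0)
# hypothetical slot machines: arm 1 pays off 90% of the time, arm 0 only 10%
reward = exp3(lambda i: float(random.random() < (0.9 if i == 1 else 0.1)),
              K=2, T=2000)
```

Because the better arm's weight grows multiplicatively, play concentrates on it while the gamma/K floor keeps every arm occasionally explored.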

- Robert E. Schapire, Yoav Freund, Peter Barlett, Wee Sun Lee
- ICML
- 1997

One of the surprising recurring phenomena observed in experiments with boosting is that the test error of the generated classifier usually does not increase as its size becomes very large, and often is observed to decrease even after the training error reaches zero. In this paper, we show that this phenomenon is related to the distribution of margins of the…
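The margin of an example under a weighted majority vote is its normalized vote for the correct label; a small sketch for binary ±1 classifiers (the two-member ensemble below is hypothetical):

```python
def margins(ensemble, X, y):
    """Normalized margin of a weighted ±1 majority vote on each example:
    positive iff the example is classified correctly, and larger when the
    vote is more confident.  `ensemble` is a list of (alpha, h) pairs."""
    Z = sum(a for a, _ in ensemble)
    return [yi * sum(a * h(x) for a, h in ensemble) / Z
            for x, yi in zip(X, y)]

# hypothetical two-member ensemble voting on a single example
ens = [(1.0, lambda x: 1), (0.5, lambda x: -1)]
m = margins(ens, [0.0], [1])        # (1.0 - 0.5) / 1.5 = 1/3
```

Continued boosting rounds tend to push this margin distribution to the right even after the training error is zero, which is the paper's explanation for the phenomenon.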

Boosting is a general method for improving the accuracy of any given learning algorithm. This short overview paper introduces the boosting algorithm AdaBoost, and explains the underlying theory of boosting, including an explanation of why boosting often does not suffer from overfitting as well as boosting's relationship to support-vector machines. Some…

- Yoav Freund
- COLT
- 1990

- Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, Robert E. Schapire
- Electronic Colloquium on Computational Complexity
- 1995

In the multi-armed bandit problem, a gambler must decide which arm of K non-identical slot machines to play in a sequence of trials so as to maximize his reward. This classical problem has received much attention because of the simple model it provides of the trade-off between exploration (trying out each arm to find the best one) and exploitation (playing…

- Yoav Freund, Llew Mason
- ICML
- 1999

The application of boosting procedures to decision tree algorithms has been shown to produce very accurate classifiers. These classifiers are in the form of a majority vote over a number of decision trees. Unfortunately, these classifiers are often large, complex and difficult to interpret. This paper describes a new type of classification rule, the alternating…
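An alternating decision tree classifies by summing prediction values along every reachable path and taking the sign; a minimal sketch (the flat `(precondition, condition, value_if_true, value_if_false)` encoding and the example tree are assumptions for illustration):

```python
def adtree_score(x, root_value, rules):
    """Alternating-decision-tree sketch: the score is one base prediction
    value plus a contribution from every decision node whose precondition
    holds; classify by the sign of the score."""
    score = root_value
    for pre, cond, v_true, v_false in rules:
        if pre(x):                          # only reachable nodes contribute
            score += v_true if cond(x) else v_false
    return score

# hypothetical tree: one root-level decision, one nested under "x > 2"
rules = [
    (lambda x: True,  lambda x: x > 2, 0.4, -0.7),
    (lambda x: x > 2, lambda x: x > 5, 0.3, -0.2),
]
low = adtree_score(1, 0.5, rules)    # 0.5 - 0.7
high = adtree_score(6, 0.5, rules)   # 0.5 + 0.4 + 0.3
```

Unlike a voted forest of full trees, the rule stays a single small additive structure, which is what makes it easier to interpret.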

- Yoav Freund, H. Sebastian Seung, Eli Shamir, Naftali Tishby
- Machine Learning
- 1997

We analyze the “query by committee” algorithm, a method for filtering informative queries from a random stream of inputs. We show that if the two-member committee algorithm achieves information gain with positive lower bound, then the prediction error decreases exponentially with the number of queries. We show that, in particular, this exponential decrease…
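The filtering idea can be sketched for the toy case of 1-D threshold functions: query a label only when a two-member committee drawn from the consistent version space disagrees (the stream, oracle, and concept below are illustrative assumptions, not the paper's general setting):

```python
import random

def qbc_threshold(stream, oracle):
    """Query-by-committee sketch for 1-D thresholds: keep the interval
    (lo, hi) of thresholds consistent with all answers so far, draw a
    two-member committee from it, and query only where the members disagree."""
    lo, hi = 0.0, 1.0
    queries = 0
    for x in stream:
        t1, t2 = random.uniform(lo, hi), random.uniform(lo, hi)
        if (x > t1) != (x > t2):      # committee disagrees: x is informative
            queries += 1
            if oracle(x):             # true threshold lies below x
                hi = min(hi, x)
            else:                     # true threshold lies at or above x
                lo = max(lo, x)
    return (lo + hi) / 2, queries

random.seed(1)
# hypothetical setup: true concept is "x > 0.3", 500 random unlabeled inputs
estimate, queries = qbc_threshold((random.random() for _ in range(500)),
                                  lambda x: x > 0.3)
```

Most stream inputs fall outside the committee's disagreement region and are filtered out, so far fewer than 500 labels are requested while the estimate still converges on the true threshold.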