
- Yoav Freund, Robert E. Schapire
- EuroCOLT
- 1995

In the first part of the paper we consider the problem of dynamically apportioning resources among a set of options in a worst-case on-line framework. The model we study can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting. We show that the multiplicative weight-update…
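The multiplicative weight-update idea can be sketched as follows; the function name, the `eta` parameter, and the loss-matrix interface are illustrative assumptions, not taken from the paper:

```python
import math

def hedge(loss_matrix, eta=0.5):
    """Multiplicative-weights (Hedge-style) allocation over N options.

    loss_matrix: list of rounds; each round is a list of per-option
    losses in [0, 1].  Returns the final normalized weight vector.
    (Illustrative sketch, not the paper's exact algorithm.)
    """
    n = len(loss_matrix[0])
    weights = [1.0] * n
    for losses in loss_matrix:
        # Shrink each option's weight multiplicatively by its observed loss.
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
    total = sum(weights)
    return [w / total for w in weights]
```

Over repeated rounds the allocation concentrates on the options with the smallest cumulative loss, which is the behavior the worst-case regret bounds quantify.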

- Yoav Freund, Robert E. Schapire
- ICML
- 1996

In an earlier paper, we introduced a new “boosting” algorithm called AdaBoost which, theoretically, can be used to significantly reduce the error of any learning algorithm that consistently generates classifiers whose performance is a little better than random guessing. We also introduced the related notion of a “pseudo-loss” which is a method for forcing a…
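The core AdaBoost loop can be sketched as follows; the `weak_learn` interface and the early-exit handling are illustrative assumptions, not details from the paper:

```python
import math

def adaboost(X, y, weak_learn, rounds=10):
    """Minimal AdaBoost sketch for labels in {-1, +1}.

    `weak_learn(X, y, w)` must return a hypothesis h with h(x) in {-1, +1};
    this interface is an assumption for illustration.
    """
    n = len(X)
    w = [1.0 / n] * n                     # uniform example weights
    ensemble = []                         # list of (alpha, hypothesis) pairs
    for _ in range(rounds):
        h = weak_learn(X, y, w)
        err = sum(wi for wi, xi, yi in zip(w, X, y) if h(xi) != yi)
        if err == 0:                      # perfect weak hypothesis: keep it and stop
            ensemble.append((1.0, h))
            break
        if err >= 0.5:                    # no better than random guessing: stop
            break
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, h))
        # Re-weight: misclassified examples gain weight, the rest lose it.
        w = [wi * math.exp(-alpha * yi * h(xi)) for wi, xi, yi in zip(w, X, y)]
        z = sum(w)
        w = [wi / z for wi in w]

    def classify(x):
        return 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1

    return classify
```

Each round the weak learner is forced to focus on the examples the current ensemble gets wrong, which is what drives the error reduction the abstract describes.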

- Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, Robert E. Schapire
- SIAM J. Comput.
- 2002

In the multiarmed bandit problem, a gambler must decide which arm of K nonidentical slot machines to play in a sequence of trials so as to maximize his reward. This classical problem has received much attention because of the simple model it provides of the trade-off between exploration (trying out each arm to find the best one) and exploitation (playing…
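The exploration/exploitation trade-off is handled in this line of work by mixing exponential weights with uniform exploration, in the spirit of the Exp3 algorithm. A minimal sketch, assuming an illustrative `reward_fn(t, arm)` interface with rewards in [0, 1]:

```python
import math
import random

def exp3(reward_fn, K, T, gamma=0.1, seed=0):
    """Exp3-style sketch for the adversarial K-armed bandit.

    reward_fn(t, arm) -> reward in [0, 1]; only the pulled arm's reward
    is observed.  The interface and parameter names are assumptions.
    """
    rng = random.Random(seed)
    weights = [1.0] * K
    total_reward = 0.0
    for t in range(T):
        wsum = sum(weights)
        # Mix the exponential weights with uniform exploration.
        probs = [(1 - gamma) * w / wsum + gamma / K for w in weights]
        arm = rng.choices(range(K), weights=probs)[0]
        r = reward_fn(t, arm)
        total_reward += r
        # Importance-weighted reward estimate for the pulled arm only.
        est = r / probs[arm]
        weights[arm] *= math.exp(gamma * est / K)
    return total_reward, weights
```

The importance weighting compensates for only seeing the pulled arm's reward, so each arm's weight tracks an unbiased estimate of its cumulative reward.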

- Robert E. Schapire, Yoav Freund, Peter Bartlett, Wee Sun Lee
- ICML
- 1997

One of the surprising recurring phenomena observed in experiments with boosting is that the test error of the generated classifier usually does not increase as its size becomes very large, and often is observed to decrease even after the training error reaches zero. In this paper, we show that this phenomenon is related to the distribution of margins of the…

Boosting is a general method for improving the accuracy of any given learning algorithm. This short overview paper introduces the boosting algorithm AdaBoost, and explains the underlying theory of boosting, including an explanation of why boosting often does not suffer from overfitting as well as boosting’s relationship to support-vector machines. Some…

- Yoav Freund
- COLT
- 1990

- Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, Robert E. Schapire
- Electronic Colloquium on Computational Complexity
- 1995

In the multi-armed bandit problem, a gambler must decide which arm of K non-identical slot machines to play in a sequence of trials so as to maximize his reward. This classical problem has received much attention because of the simple model it provides of the trade-off between exploration (trying out each arm to find the best one) and exploitation (playing…

- Yoav Freund, H. Sebastian Seung, Eli Shamir, Naftali Tishby
- Machine Learning
- 1997

We analyze the “query by committee” algorithm, a method for filtering informative queries from a random stream of inputs. We show that if the two-member committee algorithm achieves information gain with positive lower bound, then the prediction error decreases exponentially with the number of queries. We show that, in particular, this exponential decrease…
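The two-member committee filter can be illustrated in a toy setting of 1-D threshold functions, where the version space of consistent hypotheses is an interval. This is a simplified sketch of the idea, not the paper's general framework; the function name and interface are assumptions:

```python
import random

def qbc_threshold(stream, true_threshold, seed=0):
    """Query-by-committee sketch for 1-D threshold functions on [0, 1].

    Labels are +1 for x >= threshold and -1 otherwise, so the version
    space of consistent thresholds is an interval [lo, hi].  A two-member
    committee is drawn from that interval, and a point's label is queried
    only when the two members disagree.  (Toy illustration.)
    """
    rng = random.Random(seed)
    lo, hi = 0.0, 1.0                     # current version space
    queries = 0
    for x in stream:
        t1 = rng.uniform(lo, hi)          # draw a two-member committee
        t2 = rng.uniform(lo, hi)
        if (x >= t1) != (x >= t2):        # committee disagrees: query the label
            queries += 1
            if x >= true_threshold:       # label +1: threshold is at most x
                hi = min(hi, x)
            else:                         # label -1: threshold is above x
                lo = max(lo, x)
    return (lo + hi) / 2, queries
```

Points on which the committee agrees are discarded unlabeled, so the labeling effort concentrates on the informative points inside the shrinking version space.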

- Yoav Freund, Llew Mason
- ICML
- 1999

The application of boosting procedures to decision tree algorithms has been shown to produce very accurate classifiers. These classifiers are in the form of a majority vote over a number of decision trees. Unfortunately, these classifiers are often large, complex and difficult to interpret. This paper describes a new type of classification rule, the alternating…

We analyze algorithms that predict a binary value by combining the predictions of several prediction strategies, called *experts*. Our analysis is for worst-case situations, i.e., we make no assumptions about the way the sequence of bits to be predicted is generated. We measure the performance of the algorithm by the difference between the…
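A standard way to combine expert predictions in this worst-case setting is a weighted-majority vote with multiplicative penalties. A minimal sketch, with the function name, `beta`, and input format as illustrative assumptions:

```python
def weighted_majority(expert_preds, outcomes, beta=0.5):
    """Weighted-majority sketch for combining binary expert predictions.

    expert_preds: per-round lists of expert predictions (0 or 1).
    outcomes: the true bit for each round.
    Returns (number of master mistakes, final expert weights).
    (Illustrative sketch, not the paper's exact algorithm.)
    """
    n = len(expert_preds[0])
    weights = [1.0] * n
    mistakes = 0
    for preds, y in zip(expert_preds, outcomes):
        vote_one = sum(w for w, p in zip(weights, preds) if p == 1)
        vote_zero = sum(w for w, p in zip(weights, preds) if p == 0)
        guess = 1 if vote_one >= vote_zero else 0
        if guess != y:
            mistakes += 1
        # Penalize every expert that predicted incorrectly.
        weights = [w * beta if p != y else w for w, p in zip(weights, preds)]
    return mistakes, weights
```

Because each wrong expert is penalized multiplicatively, the master's mistake count stays within a constant factor of the best expert's, with no assumptions on how the bit sequence is generated.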