
- Yoav Freund, Robert E. Schapire
- EuroCOLT
- 1995

In the first part of the paper we consider the problem of dynamically apportioning resources among a set of options in a worst-case on-line framework. The model we study can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting. We show that the multiplicative weight-update rule of… (More)
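The multiplicative weight-update idea in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the learning rate `eta` and the loss sequence are invented for the example.

```python
import math

def hedge(losses, eta=0.5):
    """Multiplicative weight update over a sequence of loss vectors.

    losses: list of per-round loss vectors, one entry per option, in [0, 1].
    Returns the final normalized weight vector over options.
    """
    n = len(losses[0])
    weights = [1.0] * n
    for round_losses in losses:
        # Shrink each option's weight exponentially in its observed loss.
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, round_losses)]
    total = sum(weights)
    return [w / total for w in weights]

# Option 0 incurs far less loss than option 1, so its weight should dominate.
w = hedge([[0.0, 1.0], [0.1, 0.9], [0.0, 1.0]])
```

The normalized weights can then be read as a distribution for apportioning resources in the next round.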

- Yoav Freund, Robert E. Schapire
- ICML
- 1996

In an earlier paper, we introduced a new "boosting" algorithm called AdaBoost which, theoretically, can be used to significantly reduce the error of any learning algorithm that consistently generates classifiers whose performance is a little better than random guessing. We also introduced the related notion of a "pseudo-loss" which is a method for… (More)
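The reweighting step at the heart of AdaBoost can be sketched as follows. This is a minimal sketch of one boosting round for binary labels in {-1, +1}; the toy predictions and labels are invented for the example.

```python
import math

def adaboost_round(dist, preds, labels):
    """One AdaBoost round: given the current distribution over examples,
    a weak hypothesis's predictions, and the true labels (both in {-1, +1}),
    return the hypothesis weight alpha and the reweighted distribution."""
    # Weighted error of the weak hypothesis (assumed strictly between 0 and 1/2).
    eps = sum(d for d, p, y in zip(dist, preds, labels) if p != y)
    alpha = 0.5 * math.log((1 - eps) / eps)
    # Increase weight on mistakes, decrease it on correct predictions.
    new = [d * math.exp(-alpha * p * y) for d, p, y in zip(dist, preds, labels)]
    z = sum(new)
    return alpha, [d / z for d in new]

labels = [+1, +1, -1, -1]
preds  = [+1, -1, -1, -1]   # the weak learner errs only on example 1
dist0  = [0.25] * 4
alpha, dist1 = adaboost_round(dist0, preds, labels)
```

After the update, the one misclassified example carries half the total weight, forcing the next weak learner to focus on it.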

- Yoav Freund, Raj D. Iyer, Robert E. Schapire, Yoram Singer
- Journal of Machine Learning Research
- 1998

We study the problem of learning to accurately rank a set of objects by combining a given collection of ranking or preference functions. This problem of combining preferences arises in several applications, such as that of combining the results of different search engines, or the "collaborative-filtering" problem of ranking movies for a user based on the… (More)
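The combining-preferences setting can be illustrated with a crude stand-in: merging several rankings by average position. This is only a toy baseline, not the paper's learned combination; the item names are invented.

```python
def combine_rankings(rankings):
    """Combine several rankings of the same items by average position.
    A simple stand-in for learned preference combination: items that tend
    to appear early across the input rankings come first in the output."""
    items = rankings[0]
    avg = {it: sum(r.index(it) for r in rankings) / len(rankings) for it in items}
    return sorted(items, key=lambda it: avg[it])

# Two "search engines" disagree on a and b but agree c is worst.
order = combine_rankings([['a', 'b', 'c'], ['b', 'a', 'c']])
```

A learned combiner would instead weight each input ranking by how well it predicts held-out pairwise preferences.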

- Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, Robert E. Schapire
- SIAM J. Comput.
- 2002

In the multiarmed bandit problem, a gambler must decide which arm of K non-identical slot machines to play in a sequence of trials so as to maximize his reward. This classical problem has received much attention because of the simple model it provides of the trade-off between exploration (trying out each arm to find the best one) and exploitation (playing… (More)
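The exploration/exploitation trade-off described above can be illustrated with a simple epsilon-greedy strategy. Note this is a hedged stand-in for illustration only: the paper's own algorithms (the Exp3 family) use multiplicative weights and make no stochastic assumptions, and all parameters here (arm means, `eps`, round count, seed) are invented.

```python
import random

def epsilon_greedy(true_means, rounds=5000, eps=0.1, seed=0):
    """Toy epsilon-greedy bandit on Bernoulli arms: explore a random arm
    with probability eps, otherwise exploit the arm with the best
    empirical mean reward so far. Returns the play count per arm."""
    rng = random.Random(seed)
    k = len(true_means)
    counts = [0] * k
    sums = [0.0] * k
    for _ in range(rounds):
        if rng.random() < eps or 0 in counts:
            arm = rng.randrange(k)                  # explore
        else:
            means = [s / c for s, c in zip(sums, counts)]
            arm = means.index(max(means))           # exploit
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
    return counts

# Arm 1 has the highest payoff probability, so it should be played most.
counts = epsilon_greedy([0.2, 0.8, 0.5])
```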

- Robert E. Schapire, Yoram Singer
- Machine Learning
- 2000

This work focuses on algorithms which learn from examples to perform multiclass text and speech categorization tasks. Our approach is based on a new and improved family of boosting algorithms. We describe in detail an implementation, called BoosTexter, of the new boosting algorithms for text categorization tasks. We present results comparing the performance… (More)

- Jane Elith, Catherine H. Graham, +24 authors Niklaus E. Zimmermann
- 2006

Novel methods improve prediction of species' distributions from occurrence data. Ecography 29: 129–151. Prediction of species' distributions is central to diverse applications in ecology, evolution and conservation science. There is increasing electronic access to vast sets of occurrence records in museums and herbaria, yet little effective guidance on… (More)

- Robert E. Schapire, Yoav Freund, Peter Bartlett, Wee Sun Lee
- ICML
- 1997

One of the surprising recurring phenomena observed in experiments with boosting is that the test error of the generated classifier usually does not increase as its size becomes very large, and often is observed to decrease even after the training error reaches zero. In this paper, we show that this phenomenon is related to the distribution of margins of the… (More)
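The margin of a single example under a voted classifier — the quantity whose distribution the paper analyzes — can be computed directly. A minimal sketch: the hypothesis weights and votes below are invented for the example.

```python
def margin(alphas, preds, y):
    """Normalized margin of a voted classifier on one example:
    the weighted vote for the true label minus the weighted vote
    against it, divided by the total weight. Lies in [-1, +1];
    positive means the example is classified correctly, and larger
    values mean a more confident correct vote."""
    vote = sum(a * p * y for a, p in zip(alphas, preds))
    return vote / sum(alphas)

# Three weak hypotheses vote on an example whose true label is +1.
m = margin([0.5, 0.3, 0.2], [+1, +1, -1], +1)
```

Boosting tends to push these margins up even after training error hits zero, which is the paper's explanation for the absence of overfitting.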

- Erin L. Allwein, Robert E. Schapire, Yoram Singer
- Journal of Machine Learning Research
- 2000

We present a unifying framework for studying the solution of multiclass categorization problems by reducing them to multiple binary problems that are then solved using a margin-based binary learning algorithm. The proposed framework unifies some of the most popular approaches in which each class is compared against all others, or in which all pairs of… (More)
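The simplest instance of this framework — comparing each class against all others (one-vs-all) — can be sketched as follows. The centroid-based binary "learner" below is a made-up placeholder standing in for any margin-based binary algorithm; the data and class names are invented.

```python
def one_vs_all_train(xs, ys, classes, train_binary):
    """Reduce multiclass learning to one binary problem per class
    (class c vs. the rest), using any confidence-rated binary learner."""
    return {c: train_binary(xs, [1 if y == c else -1 for y in ys])
            for c in classes}

def one_vs_all_predict(models, x):
    # Predict the class whose binary model is most confident on x.
    return max(models, key=lambda c: models[c](x))

def centroid_scorer(xs, ys):
    """Placeholder binary learner: score x by (negative) distance to the
    mean of the positive examples. Any real binary learner could be used."""
    pos = [x for x, y in zip(xs, ys) if y == 1]
    center = sum(pos) / len(pos)
    return lambda x: -abs(x - center)

xs = [0.0, 0.1, 1.0, 1.1, 2.0, 2.1]
ys = ['a', 'a', 'b', 'b', 'c', 'c']
models = one_vs_all_train(xs, ys, ['a', 'b', 'c'], centroid_scorer)
```

The paper's framework generalizes this via coding matrices, covering all-pairs and error-correcting output codes with a single margin-based analysis.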

We study the problem of modeling species geographic distributions, a critical problem in conservation biology. We propose the use of maximum-entropy techniques for this problem, specifically, sequential-update algorithms that can handle a very large number of features. We describe experiments comparing maxent with a standard distribution-modeling tool,… (More)

Boosting is a general method for improving the accuracy of any given learning algorithm. This short overview paper introduces the boosting algorithm AdaBoost, and explains the underlying theory of boosting, including an explanation of why boosting often does not suffer from overfitting as well as boosting's relationship to support-vector machines. Some… (More)