
Discussion of the Paper "Additive Logistic Regression: A Statistical View of Boosting"

@inproceedings{Friedman2000DiscussionOT,
  title={Discussion of the Paper ``Additive Logistic Regression: A Statistical View of Boosting''},
  author={Jerome H. Friedman and Trevor J. Hastie and Robert Tibshirani and Yoav Freund and Robert E. Schapire},
  year={2000}
}
The main and important contribution of this paper is in establishing a connection between boosting, a newcomer to the statistics scene, and additive models. One of the main properties of boosting that has made it interesting to statisticians and others is its relative (but not complete) immunity to overfitting. As pointed out by the authors, the current paper does not address this issue. Leo Breiman [1] tried to explain this behaviour in terms of bias and variance. In our paper with Bartlett and…
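The connection the paper develops can be made concrete: AdaBoost builds an additive model F(x) = sum_m alpha_m f_m(x) by stagewise minimization of the exponential loss E[exp(-y F(x))]. The following is a minimal, illustrative Python sketch of that stagewise fitting, not the authors' code; the use of scikit-learn decision stumps and labels in {-1, +1} are assumptions of the sketch.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def adaboost(X, y, n_rounds=50):
        """Discrete AdaBoost with stumps, viewed as stagewise additive
        minimization of the exponential loss E[exp(-y F(x))].
        y is assumed to be a numpy array with labels in {-1, +1}."""
        n = len(y)
        w = np.full(n, 1.0 / n)          # example weights
        F = np.zeros(n)                  # additive model F(x) = sum_m alpha_m f_m(x)
        learners, alphas = [], []
        for _ in range(n_rounds):
            stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
            pred = stump.predict(X)
            err = np.sum(w * (pred != y)) / np.sum(w)
            if err == 0 or err >= 0.5:   # stop if the base learner is perfect or useless
                break
            alpha = 0.5 * np.log((1 - err) / err)
            F += alpha * pred
            w = np.exp(-y * F)           # reweighting is the gradient of the exponential loss
            w /= w.sum()
            learners.append(stump)
            alphas.append(alpha)
        return learners, alphas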
Boosting as a Regularized Path to a Maximum Margin Classifier
Builds on recent work by Efron et al. to show that boosting approximately (and in some cases exactly) minimizes its loss criterion under an l1 constraint on the coefficient vector, and that as the constraint is relaxed the solution converges (in the separable case) to an "l1-optimal" separating hyperplane.
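As a rough illustration of the result summarized above (the notation here is ours, not the paper's): with base learners h_j and coefficient vector beta, the constrained problem is

\[
\hat\beta(c) \;=\; \arg\min_{\beta}\; \sum_{i=1}^{n} L\!\Big(y_i,\; \sum_{j} \beta_j h_j(x_i)\Big)
\quad \text{subject to} \quad \|\beta\|_1 \le c,
\]

and, in the separable case, the normalized solution \(\hat\beta(c)/\|\hat\beta(c)\|_1\) converges as \(c \to \infty\) to the direction that maximizes the minimal l1-margin \(\min_i y_i \sum_j \beta_j h_j(x_i)\).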
Different Paradigms for Choosing Sequential Reweighting Algorithms
  • G. Blanchard
  • Mathematics, Computer Science
  • Neural Computation
  • 2004
Derives a very simple family of iterative reweighting algorithms that can be understood as different trade-offs between the two paradigms, and argues that this allows suitable adaptivity to different classification problems, particularly in the presence of noise or excessive complexity of the base classifiers.
Improving Boosting by Exploiting Former Assumptions
Proposes a new hybrid approach with modifications to the AdaBoost algorithm, and demonstrates that the performance of boosting can be improved by exploiting assumptions generated in former iterations to correct the weights of the examples.
Response to Mease and Wyner, Evidence Contrary to the Statistical View of Boosting, JMLR 9:131-156, 2008
For such a simple algorithm, it is fascinating and remarkable what a rich diversity of interpretations, views, perspectives and explanations have emerged of AdaBoost. Originally, AdaBoost was…
The Fast Convergence of Boosting
This manuscript considers the convergence rate of boosting under a large class of losses, including the exponential and logistic losses, where the best previous rate of convergence was O(exp(1/ε²)); the principal technical hurdle throughout this work is the potential unattainability of the infimal empirical risk.
Supervised projection approach for boosting classifiers
Proposes a new boosting approach to constructing ensembles of classifiers, based on using the distribution given by the boosting weighting scheme to construct a non-linear supervised projection of the original variables, instead of using the instance weights to train the next classifier.
Boosting with the L2-Loss: Regression and Classification
This paper investigates a computationally simple variant of boosting, L2Boost, which is constructed from a functional gradient descent algorithm with the L2-loss function. As other boosting…
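For intuition, here is a minimal sketch of the L2Boost idea (not the paper's code): with squared-error loss the negative functional gradient at the current fit is simply the residual, so each boosting step fits the base learner to the residuals. The shallow regression trees and the shrinkage factor below are assumptions of the sketch.

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def l2_boost(X, y, n_steps=100, shrinkage=0.1):
        """L2Boosting sketch: functional gradient descent on the squared-error loss.
        Each step fits a small regression tree to the current residuals."""
        F = np.zeros(len(y))             # current fit on the training data
        learners = []
        for _ in range(n_steps):
            residual = y - F             # negative gradient of (1/2)(y - F)^2
            tree = DecisionTreeRegressor(max_depth=2).fit(X, residual)
            F += shrinkage * tree.predict(X)
            learners.append(tree)
        return learners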
Boosting and Support Vector Machines as Optimal Separators
It is shown that boosting approximately (and in some cases exactly) minimizes its loss criterion under an L1 constraint, and that as the constraint diminishes, or equivalently as the boosting iterations proceed, the solution converges in the separable case to an "L1-optimal" separating hyperplane.
Further results on the margin explanation of boosting: new algorithm and experiments
Develops an efficient algorithm that, given a boosting classifier, learns a new voting classifier which usually has a smaller Emargin bound, and finds that the new classifier often has smaller test errors, which agrees with what the Emargin theory predicts.
A Refined Margin Analysis for Boosting Algorithms via Equilibrium Margin
Makes a refined analysis of the margin theory, proving a bound in terms of a new margin measure called the Equilibrium margin (Emargin), which is uniformly sharper than Breiman's minimum margin bound.

References

Boosting the margin: A new explanation for the effectiveness of voting methods
It is shown that techniques used in the analysis of Vapnik's support vector classifiers and of neural networks with small weights can be applied to voting methods to relate the margin distribution to the test error.
The Alternating Decision Tree Learning Algorithm
Introduces a new type of classification rule, the alternating decision tree, which is a generalization of decision trees, voted decision trees, and voted decision stumps, and generates rules that are usually smaller in size and thus easier to interpret.
An Adaptive Version of the Boost by Majority Algorithm
  • Y. Freund
  • Mathematics, Computer Science
  • COLT '99
  • 1999
The paper describes two methods for finding approximate solutions to the differential equations: one that results in a provably polynomial time algorithm, and one based on the Newton-Raphson minimization procedure, which is much more efficient in practice but is not known to be polynomial.
Arcing classifiers
  • L. Breiman
  • The Annals of Statistics
  • 1998
Boosting the margin: A new explanation for the effectiveness of voting methods
  • Machine Learning: Proceedings of the Fourteenth International Conference
  • 1997