Corpus ID: 14145523

FloatBoost Learning for Classification

@inproceedings{Li2002FloatBoostLF,
  title={FloatBoost Learning for Classification},
  author={S. Li and ZhenQiu Zhang and Harry Shum and HongJiang Zhang},
  booktitle={NIPS},
  year={2002}
}
AdaBoost [3] minimizes an upper bound on the training error that is an exponential function of the margin on the training set [14]. However, the ultimate goal in pattern classification applications is minimum error rate. Moreover, AdaBoost needs an effective procedure for learning weak classifiers, which is itself difficult, especially for high-dimensional data. In this paper, we present a novel procedure, called FloatBoost, for learning a better boosted classifier. FloatBoost uses a…
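The backward-pruning idea that distinguishes FloatBoost from plain AdaBoost can be illustrated with a toy sketch. This is a simplification under stated assumptions, not the authors' implementation: 1-D decision stumps, sample weights are not recomputed after a removal, and all function names are illustrative.

```python
import math

def stump_predict(stump, x):
    # stump = (threshold, sign): predict sign if x > threshold, else -sign
    thr, sign = stump
    return sign if x > thr else -sign

def weighted_error(stump, data, w):
    return sum(wi for (x, y), wi in zip(data, w) if stump_predict(stump, x) != y)

def train_stump(data, w):
    # Brute-force the best threshold (midpoints) and sign under weights w.
    xs = sorted(x for x, _ in data)
    thresholds = [(a + b) / 2 for a, b in zip(xs, xs[1:])]
    return min(((t, s) for t in thresholds for s in (1, -1)),
               key=lambda st: weighted_error(st, data, w))

def ensemble_error(ensemble, data):
    wrong = 0
    for x, y in data:
        score = sum(a * stump_predict(st, x) for a, st in ensemble)
        wrong += (1 if score > 0 else -1) != y
    return wrong / len(data)

def floatboost(data, rounds=5):
    n = len(data)
    w = [1.0 / n] * n
    ensemble, best = [], {}
    for _ in range(rounds):
        # Forward step: one ordinary AdaBoost round.
        st = train_stump(data, w)
        err = max(weighted_error(st, data, w), 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, st))
        w = [wi * math.exp(-alpha * y * stump_predict(st, x))
             for (x, y), wi in zip(data, w)]
        z = sum(w)
        w = [wi / z for wi in w]
        best[len(ensemble)] = min(best.get(len(ensemble), 1.0),
                                  ensemble_error(ensemble, data))
        # Conditional backward (floating) step: drop a weak learner
        # whenever doing so beats the best error seen at the smaller size.
        while len(ensemble) > 1:
            errs = [ensemble_error(ensemble[:i] + ensemble[i + 1:], data)
                    for i in range(len(ensemble))]
            i = min(range(len(errs)), key=errs.__getitem__)
            if errs[i] < best.get(len(ensemble) - 1, 1.0):
                del ensemble[i]
                best[len(ensemble)] = errs[i]
            else:
                break
    return ensemble
```

On separable toy data the forward steps behave exactly like AdaBoost, and the backward step only fires when deleting a weak learner actually lowers the training error, which is the floating-search intuition.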
FloatBoost learning and statistical face detection
  • S. Li, ZhenQiu Zhang
  • Computer Science, Medicine
  • IEEE Transactions on Pattern Analysis and Machine Intelligence
  • 2004
TLDR
Applied to face detection, the FloatBoost learning method, together with a proposed detector pyramid architecture, leads to the first real-time multiview face detection system reported.
Gated classifiers: Boosting under high intra-class variation
TLDR
This paper addresses the problem of using boosting to classify a target class with significant intra-class variation against a large background class, and suggests a family of derived weak classifiers, termed gated classifiers, that suppress such combinations of weak classifiers.
Adatree 2: Boosting to build decision trees or Improving Adatree with soft splitting rules
We extend the framework of Adaboost so that it builds a smoothed decision tree rather than a neural network. The proposed method, “Adatree 2”, is derived from the assumption of a probabilistic…
Linear Asymmetric Classifier for cascade detectors
TLDR
Experimental results on face detection show that LAC can improve the detection performance in comparison to standard methods, and it is shown that Fisher Discriminant Analysis on the features selected by AdaBoost yields better performance than AdaBoost itself.
Fast Asymmetric Learning for Cascade Face Detection
TLDR
A linear asymmetric classifier (LAC) is presented, a classifier that explicitly handles the asymmetric learning goal as a well-defined constrained optimization problem and is demonstrated experimentally that LAC results in an improved ensemble classifier performance.
Face Detection and Recognizing Object Category in Boosting Framework Using Genetic Algorithms
In this paper we represent the images as a collection of patches, each of which belongs to a latent theme that is shared across images as well as categories (1). Various face detection techniques have…
Multi-class object recognition using boosted linear discriminant analysis combined with masking covariance matrix method
  • M. Tanigawa
  • Computer Science
  • Fourth IEEE International Conference on Computer Vision Systems (ICVS'06)
  • 2006
TLDR
A new algorithm, boosted linear discriminant analysis (bLDA), for classification of a non-linear pattern distribution, and a masking covariance matrix method (MCM) to find optimal local features, instead of a traditional exhaustive search over a huge number of candidate features, for robust and real-time object recognition.
Comprehensive Evolution and Evaluation of Boosting
TLDR
This paper contains a comprehensive evolution of Boosting and an evaluation of boosting on various criteria (parameters) compared with Bagging.
A pattern classification approach for boosting with genetic algorithms
  • I. Yalabik, T. Fatos
  • Computer Science
  • 2007 22nd international symposium on computer and information sciences
  • 2007
TLDR
A novel boosting technique that targets partial problems of AdaBoost, a well-known boosting algorithm, is proposed, and empirical results show that classification with boosted evolutionary computing outperforms classical AdaBoost in equivalent experimental environments.
Floatcascade learning for fast imbalanced web mining
TLDR
A novel asymmetric cascade learning method called FloatCascade is proposed to improve accuracy in fast imbalanced web mining; it selects fewer yet more effective features at each stage of the cascade classifier.

References

Showing 1-10 of 23 references
Statistical Learning of Multi-view Face Detection
TLDR
FloatBoost incorporates the idea of Floating Search into AdaBoost to solve the non-monotonicity problem encountered in the sequential search of AdaBoost and leads to the first real-time multi-view face detection system in the world.
Fast and Robust Classification using Asymmetric AdaBoost and a Detector Cascade
TLDR
A new variant of AdaBoost is proposed as a mechanism for training the simple classifiers used in the cascade in domains where the distribution of positive and negative examples is highly skewed (e.g. face detection or database retrieval).
Arcing Classifiers
Recent work has shown that combining multiple versions of unstable classifiers such as trees or neural nets results in reduced test set error. One of the more effective is bagging (Breiman [1996a])…
Training support vector machines: an application to face detection
  • E. Osuna, R. Freund, F. Girosi
  • Mathematics, Computer Science
  • Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition
  • 1997
TLDR
A decomposition algorithm that guarantees global optimality, and can be used to train SVM's over very large data sets is presented, and the feasibility of the approach on a face detection problem that involves a data set of 50,000 data points is demonstrated.
Improved Boosting Algorithms Using Confidence-rated Predictions
We describe several improvements to Freund and Schapire's AdaBoost boosting algorithm, particularly in a setting in which hypotheses may assign confidences to each of their predictions. We give a…
Special Invited Paper-Additive logistic regression: A statistical view of boosting
Boosting is one of the most important recent developments in classification methodology. Boosting works by sequentially applying a classification algorithm to reweighted versions of the training data…
Rapid object detection using a boosted cascade of simple features
  • Paul A. Viola, Michael J. Jones
  • Computer Science
  • Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. CVPR 2001
  • 2001
TLDR
A machine learning approach for visual object detection which is capable of processing images extremely rapidly and achieving high detection rates and the introduction of a new image representation called the "integral image" which allows the features used by the detector to be computed very quickly.
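The "integral image" (summed-area table) mentioned here can be sketched in a few lines; any axis-aligned rectangle sum then costs four table lookups. This is a generic illustration of the data structure, not the paper's code, and the function names are placeholders.

```python
def integral_image(img):
    """ii[y][x] = sum of img[0..y][0..x] (summed-area table)."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y][x] = row + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0..y1][x0..x1] in O(1): four lookups, inclusion-exclusion."""
    total = ii[y1][x1]
    if y0 > 0:
        total -= ii[y0 - 1][x1]
    if x0 > 0:
        total -= ii[y1][x0 - 1]
    if y0 > 0 and x0 > 0:
        total += ii[y0 - 1][x0 - 1]
    return total
```

Haar-like features are then differences of such rectangle sums, which is what makes per-window feature evaluation cheap enough for real-time detection.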
Boosting the margin: A new explanation for the effectiveness of voting methods
TLDR
It is shown that techniques used in the analysis of Vapnik's support vector classifiers and of neural networks with small weights can be applied to voting methods to relate the margin distribution to the test error.
A decision-theoretic generalization of on-line learning and an application to boosting
TLDR
The model studied can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting, and the multiplicative weight-update Littlestone-Warmuth rule can be adapted to this model, yielding bounds that are slightly weaker in some cases but applicable to a considerably more general class of learning problems.
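The multiplicative weight-update (Hedge) rule referred to here can be sketched as follows: each round, every expert's weight is scaled down exponentially in its loss. A minimal illustration, where `eta` and the function name are assumptions, not the paper's notation.

```python
import math

def hedge(losses_per_round, eta=0.5):
    """Hedge / multiplicative weights: w_i <- w_i * exp(-eta * loss_i),
    then renormalize. Returns the final distribution over experts."""
    n = len(losses_per_round[0])
    w = [1.0 / n] * n
    for losses in losses_per_round:
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, losses)]
        s = sum(w)
        w = [wi / s for wi in w]
    return w
```

With two experts where the first never errs, the weight mass concentrates on it within a few rounds; AdaBoost arises from running this kind of scheme over training examples rather than experts.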
Robust Real-Time Face Detection
TLDR
A new image representation called the “Integral Image” is introduced which allows the features used by the detector to be computed very quickly and a method for combining classifiers in a “cascade” which allows background regions of the image to be quickly discarded while spending more computation on promising face-like regions.
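The cascade combination described here reduces to a short control-flow sketch; the stage scoring functions and thresholds below are placeholders, not the paper's trained stages.

```python
def cascade_classify(window, stages):
    """stages: list of (score_fn, threshold) pairs, cheapest first.
    A window is rejected the moment any stage scores below its
    threshold, so easy background regions exit after the first stages."""
    for score_fn, threshold in stages:
        if score_fn(window) < threshold:
            return False   # discarded early: little computation spent
    return True            # survived all stages: promising face-like region
```

For example, with toy stages `[(mean brightness, 0.5), (max brightness, 0.9)]`, a dark background window fails the first, cheap test and never reaches the later, more expensive ones; that asymmetry is what makes the cascade fast.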