Rotation Forest: A New Classifier Ensemble Method

@article{RodrguezDiez2006RotationFA,
  title={Rotation Forest: A New Classifier Ensemble Method},
  author={Juan Jos{\'e} Rodr{\'i}guez Diez and Ludmila I. Kuncheva and Carlos J. Alonso},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2006},
  volume={28},
  pages={1619--1630}
}
We propose a method for generating classifier ensembles based on feature extraction. To create the training data for a base classifier, the feature set is randomly split into K subsets (K is a parameter of the algorithm) and principal component analysis (PCA) is applied to each subset. All principal components are retained in order to preserve the variability information in the data. Thus, K axis rotations take place to form the new features for a base classifier. The idea of the rotation…
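To make the rotation construction concrete, here is a minimal Python sketch of it, assuming scikit-learn's PCA and a decision tree as the base learner. The paper's additional randomization (drawing a bootstrap sample and a subset of classes before each PCA) is omitted for brevity, and all function and parameter names below are illustrative, not the authors' notation.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.tree import DecisionTreeClassifier

    def build_rotation(X, n_subsets, rng):
        # Split the feature indices into n_subsets random groups, run PCA on
        # each group keeping ALL components, and place each loading matrix on
        # the diagonal of a block-diagonal "rotation" matrix.
        n_features = X.shape[1]
        groups = np.array_split(rng.permutation(n_features), n_subsets)
        rotation = np.zeros((n_features, n_features))
        for group in groups:
            pca = PCA(n_components=len(group)).fit(X[:, group])
            rotation[np.ix_(group, group)] = pca.components_.T
        return rotation

    def fit_rotation_ensemble(X, y, n_trees=10, n_subsets=3, seed=0):
        rng = np.random.default_rng(seed)
        ensemble = []
        for _ in range(n_trees):
            R = build_rotation(X, n_subsets, rng)
            # Note: X @ R skips PCA's mean-centering; the constant shift per
            # feature does not change axis-aligned decision-tree splits.
            tree = DecisionTreeClassifier(random_state=seed).fit(X @ R, y)
            ensemble.append((R, tree))
        return ensemble

    def predict(ensemble, X):
        # Majority vote over the rotated base classifiers
        # (assumes nonnegative integer class labels).
        votes = np.stack([tree.predict(X @ R) for R, tree in ensemble])
        return np.apply_along_axis(
            lambda col: np.bincount(col.astype(int)).argmax(), 0, votes)

Because each rotation matrix is invertible and all components are kept, every base tree still sees the full information in the data, only in a rotated coordinate system; ensemble diversity comes from the random feature splits.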
A Novel Approach on Ensemble Classifiers with Fast Rotation Forest Algorithm
TLDR
A novel approach, Fast Rotation Forest, is introduced to improve the accuracy rate while simultaneously encouraging individual accuracy and specificity within the classifier ensemble.
An Experimental Study on Rotation Forest Ensembles
TLDR
A lesion study on Rotation Forest is carried out to find out which of the parameters and the randomization heuristics are responsible for the good performance of the method.
RotaSVM: A New Ensemble Classifier
TLDR
The effectiveness of RotaSVM is demonstrated quantitatively by comparing it with other widely used ensemble-based classifiers such as Bagging, AdaBoost, MultiBoost, and Rotation Forest on 10 real-life data sets, and a statistical test is conducted to establish the superiority of the results.
A novel method for constructing ensemble classifiers
TLDR
A novel ensemble classifier generation method is proposed that integrates the ideas of bootstrap aggregation and Principal Component Analysis (PCA); it performs better than, or as well as, several other ensemble methods on benchmark data sets publicly available from the UCI repository. A rough sketch of this combination follows below.
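The cited paper's exact construction is not reproduced on this page, but one plausible reading of combining bagging with PCA is to fit a PCA rotation on each bootstrap replicate and train the base tree on the rotated data. The sketch below is only that reading, again using scikit-learn, with hypothetical names:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.tree import DecisionTreeClassifier

    def fit_bagged_pca_trees(X, y, n_estimators=10, seed=0):
        rng = np.random.default_rng(seed)
        models = []
        for _ in range(n_estimators):
            idx = rng.integers(0, len(X), size=len(X))  # bootstrap resample
            pca = PCA().fit(X[idx])                     # rotation per replicate
            tree = DecisionTreeClassifier(random_state=seed)
            tree.fit(pca.transform(X[idx]), y[idx])
            models.append((pca, tree))
        return models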
A novel feature subspace selection method in random forests for high dimensional data
  • Yisen Wang, Shutao Xia
  • Computer Science
    2016 International Joint Conference on Neural Networks (IJCNN)
  • 2016
TLDR
Experimental results demonstrate that the proposed PCA-SS based Random Forests algorithm, named PSRF, significantly improves the performance of random forests on high-dimensional data compared with state-of-the-art random forest algorithms.
Selective Ensemble Based on Transformation of Classifiers Used SPCA
TLDR
A new ensemble method is proposed that selects classifiers for the ensemble via transformation of the individual classifiers based on diversity and accuracy; it obtains better performance than other methods, and kappa-error diagrams illustrate that the proposed method enhances diversity compared with other methods.
Canonical Forest
TLDR
Canonical Forest performed significantly better in accuracy than other ensemble methods on most data sets, and an investigation of the bias and variance decomposition attributes its success to variance reduction.
Building forests of local trees
Cancer classification using Rotation Forest
...
...

References

SHOWING 1-10 OF 55 REFERENCES
Random Forests
TLDR
Internal estimates monitor error, strength, and correlation; these are used to show the response to increasing the number of features used in the forest, and are also applicable to regression.
Arcing Classifiers
TLDR
Two arcing algorithms are explored, they are compared to each other and to bagging, and the definitions of bias and variance for a classifier as components of the test set error are introduced.
A Comparison of Ensemble Creation Techniques
TLDR
Bagging and six other randomization-based ensemble tree methods are evaluated and it is found that none of them is consistently more accurate than standard bagging when tested for statistical significance.
Reducing Multiclass to Binary: A Unifying Approach for Margin Classifiers
TLDR
A general method for combining the classifiers generated on the binary problems is proposed, and a general empirical multiclass loss bound is proved given the empirical loss of the individual binary learning algorithms.
Arcing classifier (with discussion and a rejoinder by the author)
TLDR
Two arcing algorithms are explored, compared to each other and to bagging, and the definitions of bias and variance for a classifier as components of the test set error are introduced.
Multiple Classifier Systems: Second International Workshop, MCS 2001, Cambridge, UK, July 2-4, 2001: Proceedings
TLDR
This book discusses Boosting, Bagging, and Consensus-Based Classification of Multisource Remote Sensing Data, as well as a Self-Organising Approach to Multiple Classifier Fusion.
Measures of Diversity in Classifier Ensembles and Their Relationship with the Ensemble Accuracy
TLDR
Although there are proven connections between diversity and accuracy in some special cases, the results raise some doubts about the usefulness of diversity measures in building classifier ensembles in real-life pattern recognition problems.
Combining Feature Subsets in Feature Selection
TLDR
This paper studies the efficiency of combining classifiers applied on top of feature selection/extraction, and analyzes the conditions under which combining classifiers on multiple feature subsets is more beneficial than exploiting a single selected feature set.
Diversity in multiple classifier systems
Boosting with Averaged Weight Vectors
  • N. Oza
  • Computer Science
    Multiple Classifier Systems
  • 2003
TLDR
This work presents an algorithm that attempts to come as close as possible to the goal of making the next base model's errors uncorrelated with those of the previous model in an efficient manner.
...
...