Building an Ensemble of Classifiers via Randomized Models of Ensemble Members

@inproceedings{Trajdos2021BuildingAE,
  title={Building an Ensemble of Classifiers via Randomized Models of Ensemble Members},
  author={Pawel Trajdos and Marek Kurzynski},
  booktitle={CORES/IP\&C/ACS},
  year={2021}
}
Many dynamic ensemble selection (DES) methods are known in the literature. A method previously developed by the authors consists in building a randomized classifier which is treated as a model of the base classifier. The model is equivalent to the base classifier in a certain probabilistic sense. Next, the probability of correct classification of the randomized classifier is taken as the competence of the evaluated classifier. In this paper, a novel randomized model of the base classifier is developed…
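
As a rough, self-contained illustration of the dynamic selection setting (not the authors' randomized model itself), the Python sketch below scores each member of a bootstrap-trained pool by its accuracy over a k-nearest-neighbour validation region and lets the locally most competent member classify the query; the local-accuracy score is only an assumed stand-in for the model-based competence described in the abstract.

# Minimal DES sketch: local accuracy on a k-NN validation region is used
# as a placeholder competence measure (an assumption, not the paper's model).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# Pool of weak, diverse base classifiers (bootstrap-trained shallow trees).
rng = np.random.default_rng(0)
pool = []
for _ in range(10):
    idx = rng.integers(0, len(X_train), len(X_train))
    pool.append(DecisionTreeClassifier(max_depth=3).fit(X_train[idx], y_train[idx]))

# Validation predictions, used to score competence in the local region.
val_preds = np.array([clf.predict(X_val) for clf in pool])   # shape (n_clf, n_val)
nn = NearestNeighbors(n_neighbors=7).fit(X_val)

def des_predict(x):
    """Select the locally most competent classifier and use its prediction."""
    _, neigh = nn.kneighbors(x.reshape(1, -1))
    local_acc = (val_preds[:, neigh[0]] == y_val[neigh[0]]).mean(axis=1)
    best = int(np.argmax(local_acc))                          # competence proxy
    return pool[best].predict(x.reshape(1, -1))[0]

y_pred = np.array([des_predict(x) for x in X_test])
print("DES accuracy:", (y_pred == y_test).mean())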

References

Showing 1-10 of 20 references
A probabilistic model of classifier competence for dynamic ensemble selection
The results obtained indicate that the full vector of class supports should be used for evaluating the classifier competence, as this potentially improves the performance of MCSs.
A parameter randomization approach for constructing classifier ensembles
A novel randomization-based approach to classifier ensemble construction that samples the parameters of the base classifiers from a pre-defined distribution; the parameter distribution is approximated analytically for three well-known classifiers, and the resulting ensembles are shown empirically to be very similar to Bagging.
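
The sampling idea can be mimicked empirically: the sketch below (an illustrative assumption, not the paper's analytic derivation) estimates a Gaussian over logistic-regression parameters from bootstrap refits and draws ensemble members from it.

# Parameter randomization, roughly: fit on bootstrap samples to estimate an
# empirical Gaussian over the parameters, then sample ensemble members from it.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=400, n_features=5, random_state=1)
rng = np.random.default_rng(1)

# Collect parameter vectors (weights + intercept) from bootstrap refits.
params = []
for _ in range(50):
    idx = rng.integers(0, len(X), len(X))
    lr = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    params.append(np.concatenate([lr.coef_.ravel(), lr.intercept_]))
params = np.array(params)
mean, cov = params.mean(axis=0), np.cov(params, rowvar=False)

# Sample linear ensemble members from the estimated parameter distribution.
def sample_member():
    theta = rng.multivariate_normal(mean, cov)
    w, b = theta[:-1], theta[-1]
    return lambda Z: (Z @ w + b > 0).astype(int)

ensemble = [sample_member() for _ in range(25)]
votes = np.mean([clf(X) for clf in ensemble], axis=0)
print("Ensemble majority-vote accuracy:", ((votes > 0.5).astype(int) == y).mean())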
A measure of competence based on random classification for dynamic ensemble selection
A measure of competence based on random classification (MCR) for classifier ensembles is presented; the two MCR-based systems developed typically had the highest classification accuracies regardless of the ensemble type used (homogeneous or heterogeneous).
Dynamic classifier selection: Recent advances and perspectives
An updated taxonomy of dynamic selection techniques is proposed based on the main characteristics found in a dynamic selection system, together with an extensive experimental analysis considering a total of 18 state-of-the-art dynamic selection techniques as well as static ensemble combination and single classification models.
Dynamic selection of classifiers - A comprehensive review
This comprehensive study observed that, for some classification problems, the performance contribution of the dynamic selection approach is statistically significant when compared to that of a single classifier, and found evidence of a relation between the observed performance contribution and the complexity of the classification problem.
Randomized Reference Classifier with Gaussian Distribution and Soft Confusion Matrix Applied to the Improving Weak Classifiers
The results showed that the proposed approach is comparable to the RRC model built using the beta distribution, and that for some base classifiers the truncated-normal-based SCM algorithm turned out to be better at discovering objects coming from minority classes.
Statistical Comparisons of Classifiers over Multiple Data Sets
  J. Demšar, J. Mach. Learn. Res., 2006
A set of simple yet safe and robust non-parametric tests for statistical comparisons of classifiers is recommended: the Wilcoxon signed-ranks test for the comparison of two classifiers, and the Friedman test with corresponding post-hoc tests for comparisons of more classifiers over multiple data sets.
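
Both recommended tests are available in SciPy; the snippet below applies them to made-up per-data-set accuracy scores (illustrative numbers only).

# Wilcoxon signed-rank test for two classifiers, Friedman test for several
# classifiers compared over the same data sets (post-hoc tests not shown).
import numpy as np
from scipy.stats import wilcoxon, friedmanchisquare

# Rows = data sets, columns = classifiers (made-up accuracies).
acc = np.array([
    [0.81, 0.79, 0.84],
    [0.92, 0.90, 0.91],
    [0.67, 0.70, 0.72],
    [0.88, 0.85, 0.89],
    [0.75, 0.74, 0.78],
    [0.83, 0.80, 0.86],
])

# Two classifiers: paired Wilcoxon signed-rank test on per-data-set scores.
stat, p = wilcoxon(acc[:, 0], acc[:, 1])
print("Wilcoxon p-value:", p)

# More than two classifiers: Friedman test over all columns.
stat, p = friedmanchisquare(acc[:, 0], acc[:, 1], acc[:, 2])
print("Friedman p-value:", p)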
From dynamic classifier selection to dynamic ensemble selection
This work proposes four new dynamic selection schemes which explore properties of the oracle concept, and suggests that the proposed schemes, using the majority voting rule for combining classifiers, perform better than the static selection method.
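
The oracle referred to here is the usual best-case bound in which a sample counts as correct if any pool member classifies it correctly; below is a minimal sketch of that bound next to a static majority vote, assuming a pool of bootstrap-trained trees.

# Oracle upper bound vs. static majority voting for a classifier pool.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=2)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=2)

rng = np.random.default_rng(2)
pool = []
for _ in range(10):
    idx = rng.integers(0, len(X_tr), len(X_tr))
    pool.append(DecisionTreeClassifier(max_depth=2).fit(X_tr[idx], y_tr[idx]))

preds = np.array([clf.predict(X_te) for clf in pool])     # shape (n_clf, n_test)
oracle_acc = (preds == y_te).any(axis=0).mean()           # best possible selection
majority_acc = ((preds.mean(axis=0) > 0.5).astype(int) == y_te).mean()
print("Oracle upper bound:", oracle_acc, " Static majority vote:", majority_acc)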
A systematic analysis of performance measures for classification tasks
This paper presents a systematic analysis of twenty-four performance measures used across the complete spectrum of machine learning classification tasks, i.e., binary, multi-class, multi-labelled, and hierarchical, producing a measure-invariance taxonomy with respect to all relevant label distribution changes in a classification problem.
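
For reference, a handful of commonly reported measures can be computed directly with scikit-learn (labels below are made up for illustration).

# A few standard classification measures on a small multi-class example.
from sklearn.metrics import accuracy_score, balanced_accuracy_score, f1_score, cohen_kappa_score

y_true = [0, 1, 2, 2, 1, 0, 2, 1, 0, 2]
y_pred = [0, 2, 2, 2, 1, 0, 1, 1, 0, 2]

print("Accuracy:         ", accuracy_score(y_true, y_pred))
print("Balanced accuracy:", balanced_accuracy_score(y_true, y_pred))
print("Macro F1:         ", f1_score(y_true, y_pred, average="macro"))
print("Cohen's kappa:    ", cohen_kappa_score(y_true, y_pred))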
Support Vector Classification with Nominal Attributes
A new algorithm for dealing with nominal attributes in Support Vector Classification, obtained by modifying the most popular approach; it overcomes the shortcoming of that approach, namely the assumption that any two different attribute values have the same degree of dissimilarity.