# Dynamic Ensemble Selection with Probabilistic Classifier Chains

```bibtex
@inproceedings{Narassiguin2017DynamicES,
  title={Dynamic Ensemble Selection with Probabilistic Classifier Chains},
  author={Anil Narassiguin and Haytham Elghazel and Alex Aussem},
  booktitle={ECML/PKDD},
  year={2017}
}
```
• Published in ECML/PKDD 18 September 2017
• Computer Science
Dynamic ensemble selection (DES) is the problem of finding, given an input $\mathbf{x}$, a subset of models among the ensemble that achieves the best possible prediction accuracy. Recent studies have reformulated the DES problem as a multi-label classification problem, and promising performance gains have been reported. However, these approaches may converge to an incorrect, and hence suboptimal, solution as they do not directly optimize the true, but non-standard, loss function. In this…
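The multi-label reformulation mentioned above can be sketched concretely: build a meta-label matrix whose bit (n, i) records whether base classifier i is correct on held-out point n, train a multi-label meta-learner on it, and at prediction time let only the members predicted competent vote. The sketch below is illustrative rather than the authors' method: it uses a bagged pool of decision trees and a simple binary-relevance k-NN meta-learner as a stand-in for the probabilistic classifier chain; all dataset and parameter choices are arbitrary.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(0)
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_meta, X_te, y_meta, y_te = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# Bagged pool of weak base classifiers.
pool = []
for i in range(7):
    idx = rng.randint(0, len(X_tr), len(X_tr))
    pool.append(DecisionTreeClassifier(max_depth=3, random_state=i).fit(X_tr[idx], y_tr[idx]))

# Meta-labels: bit i of row n says "classifier i is correct on meta point n".
meta_Y = np.column_stack([m.predict(X_meta) == y_meta for m in pool]).astype(int)

# Binary-relevance meta-learner: one k-NN per meta-label (a simple stand-in
# for the probabilistic classifier chain used in the paper).
meta = [KNeighborsClassifier(n_neighbors=7).fit(X_meta, meta_Y[:, i])
        for i in range(len(pool))]

def des_predict(x):
    """Let only the members predicted competent for x vote."""
    x = x.reshape(1, -1)
    competent = np.array([m.predict(x)[0] for m in meta], dtype=bool)
    if not competent.any():            # fall back to the full ensemble
        competent[:] = True
    votes = [pool[i].predict(x)[0] for i in np.where(competent)[0]]
    return int(np.bincount(votes).argmax())

preds = np.array([des_predict(x) for x in X_te])
print("DES test accuracy:", (preds == y_te).mean())
```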

### Classifier Chains: A Review and Perspectives

• Computer Science
J. Artif. Intell. Res.
• 2021
The goal of this work is to provide a review of classifier chains, a survey of the techniques and extensions provided in the literature, as well as perspectives for this approach in the domain of multi-label classification in the future.

### A new correlation-based approach for ensemble selection in random forests

• Computer Science
Int. J. Intell. Comput. Cybern.
• 2021
The results showed that the proposed pruning method CES (correlation-based Ensemble Selection) selects a small ensemble in a smaller amount of time while improving classification rates compared to the state-of-the-art methods.

### Streaming Time Series Forecasting using Multi-Target Regression with Dynamic Ensemble Selection

• Computer Science
2020 IEEE International Conference on Big Data (Big Data)
• 2020
This work is the first to use Incremental MTR for learning the behavior of each component in an ensemble of forecasters on data streams, and it is shown that explicitly considering models' dependencies improves overall performance.

### Metalearning in Ensemble Methods

• Computer Science
Metalearning
• 2022
This chapter discusses approaches that exploit metalearning methods in ensemble learning; some seek an ensemble-based solution for the whole dataset, while others do so for individual instances.

### Leveraging Metalearning for Bagging Classifiers

This thesis focuses on how to use metalearning (MtL), the field of ML that studies how learning can be used to solve learning problems, to automate and improve the performance of bagging, one of the most popular ensemble learning (EL) algorithms.

### Improving binary classification using filtering based on k-NN proximity graphs

• Computer Science
Journal of Big Data
• 2020
Modifications to the DES-LA combiner are introduced, together with a comparative analysis of the impact of filtering on classifiers of various types, confirming the efficiency of the automatic filtering approach.

### An information entropy based splitting criterion better for the Data Mining Decision Tree algorithms

This paper attempts to discover a better entropy-based splitting criterion for the induction of decision trees (DT), and highlights the very good performance achieved by the squared information gain ratio splitting criterion.

### A Semantics Aware Random Forest for Text Classification

• Computer Science
CIKM
• 2019
SARF extracts the features used by trees to generate their predictions and selects the subset of predictions whose features are relevant to the predicted classes. Its classification performance was evaluated on real-world text datasets, and its competitiveness was assessed against state-of-the-art ensemble selection methods.

### Stochastic ensemble pruning method via simulated quenching walking

• Computer Science
Int. J. Mach. Learn. Cybern.
• 2019
The main objective is to construct a proper ensemble pruning architecture that is independent of the ensemble construction and combination phases, assigning values to stochastic movements and accepting unvalued solutions while exploring the search space.

## References

Showing 1–10 of 30 references

### Dynamic ensemble pruning based on multi-label classification

• Computer Science, Environmental Science
Neurocomputing
• 2015

### CHADE: Metalearning with Classifier Chains for Dynamic Combination of Classifiers

• Computer Science
ECML/PKDD
• 2016
This work solves the multi-label classification problem by using a widely known technique, Classifier Chains (CC), and extends a typical metalearning approach by combining metafeatures characterizing the interdependencies between the classifiers with the base-level features.

### Classifier chains for multi-label classification

• Computer Science
Machine Learning
• 2011
This paper presents a novel classifier chains method that can model label correlations while maintaining acceptable computational complexity, and illustrates the competitiveness of the chaining method against related and state-of-the-art methods, both in terms of predictive performance and time complexity.
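The chaining idea can be sketched in a few lines: train one binary model per label, feeding each model the input features plus the labels earlier in the chain (true labels at training time, predicted labels at inference time). The illustration below makes arbitrary choices (a synthetic dataset, shallow decision trees) rather than reproducing the paper's exact setup:

```python
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.tree import DecisionTreeClassifier

X, Y = make_multilabel_classification(n_samples=300, n_features=8,
                                      n_classes=4, random_state=0)

# Train the chain: model j sees the input plus the true labels 0..j-1.
chain = []
for j in range(Y.shape[1]):
    Xj = np.hstack([X, Y[:, :j]])
    chain.append(DecisionTreeClassifier(max_depth=3, random_state=j).fit(Xj, Y[:, j]))

def cc_predict(x):
    """Greedy inference: propagate each predicted label down the chain."""
    x = x.reshape(1, -1)
    preds = []
    for clf in chain:
        xj = np.hstack([x, np.array(preds, dtype=float).reshape(1, -1)])
        preds.append(int(clf.predict(xj)[0]))
    return np.array(preds)

Y_hat = np.array([cc_predict(x) for x in X])
print("Hamming accuracy on training data:", (Y_hat == Y).mean())
```

Note the train/test asymmetry: at training time each link conditions on the true earlier labels, while at inference time it must condition on its own (possibly wrong) predictions, which is exactly the error-propagation issue that probabilistic classifier chains address with proper inference over the label space.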

### Instance-Based Ensemble Pruning via Multi-Label Classification

• Computer Science
2010 22nd IEEE International Conference on Tools with Artificial Intelligence
• 2010
This paper proposes modeling this task as a multi-label learning problem, where a different subset of the ensemble may be used for each unclassified instance, to take advantage of recent advances in this area for the construction of effective ensemble pruning approaches.

### An Analysis of Ensemble Pruning Techniques Based on Ordered Aggregation

• Computer Science
IEEE Transactions on Pattern Analysis and Machine Intelligence
• 2009
The results of this empirical investigation show that ordered aggregation can be used for the efficient generation of pruned ensembles that are competitive, in terms of performance and robustness of classification, with computationally more costly methods that directly select optimal or near-optimal subensembles.
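Ordered aggregation can be sketched as a greedy reordering of the pool: repeatedly append the member whose addition most improves the subensemble's validation accuracy, then keep the best-scoring prefix. The sketch below is a generic illustration (bagged depth-2 trees, majority vote over cached validation predictions), not the exact procedure evaluated in the paper:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(1)
X, y = make_classification(n_samples=500, n_features=10, random_state=1)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.4, random_state=1)

# Bagged pool and its cached validation predictions, shape (n_models, n_val).
pool = []
for i in range(10):
    idx = rng.randint(0, len(X_tr), len(X_tr))
    pool.append(DecisionTreeClassifier(max_depth=2, random_state=i).fit(X_tr[idx], y_tr[idx]))
val_preds = np.array([m.predict(X_val) for m in pool])

def vote_acc(members):
    """Majority-vote accuracy of the given member indices on validation data."""
    votes = val_preds[members].mean(axis=0) >= 0.5
    return float((votes == y_val).mean())

# Greedy ordering: always append the member that helps the prefix most.
order, remaining = [], list(range(len(pool)))
while remaining:
    best = max(remaining, key=lambda j: vote_acc(order + [j]))
    order.append(best)
    remaining.remove(best)

# Prune to the prefix with the highest validation accuracy.
prefix_accs = [vote_acc(order[:k]) for k in range(1, len(order) + 1)]
best_k = int(np.argmax(prefix_accs)) + 1
print("pruned size:", best_k, "val accuracy:", prefix_accs[best_k - 1])
```

Caching each member's validation predictions up front is what makes the greedy search cheap: every candidate evaluation is just a column-wise mean and comparison, with no model re-evaluation.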

### Diversity Regularized Ensemble Pruning

• Computer Science
ECML/PKDD
• 2012
This paper presents a theoretical study of the effect of diversity on the generalization performance of voting in the PAC-learning framework, applies explicit diversity regularization to ensemble pruning, and proposes the Diversity Regularized Ensemble Pruning (DREP) method.
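The diversity that DREP regularizes can be quantified in several ways; one common proxy is the average pairwise disagreement between members' predictions (the exact regularizer used by DREP is defined in the paper). A minimal, self-contained illustration of that measure:

```python
import numpy as np

def pairwise_disagreement(preds):
    """Mean fraction of samples on which each pair of members disagrees.

    preds: array of shape (n_models, n_samples) with 0/1 predictions.
    """
    m = len(preds)
    pairs = [(preds[i] != preds[j]).mean()
             for i in range(m) for j in range(i + 1, m)]
    return float(np.mean(pairs))

# Three members, four samples: pair disagreements are 0.5, 0.25 and 0.75.
P = np.array([[0, 1, 1, 0],
              [0, 1, 0, 1],
              [1, 1, 1, 0]])
print(pairwise_disagreement(P))  # 0.5
```

A pruning criterion can then trade off a subensemble's empirical error against such a diversity term, which is the general shape of diversity-regularized selection.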

### Optimizing the F-Measure in Multi-Label Classification: Plug-in Rule Approach versus Structured Loss Minimization

• Computer Science
ICML
• 2013
A novel plug-in rule algorithm is introduced that estimates all parameters required for a Bayes-optimal prediction via a set of multinomial regression models, and this algorithm is compared with SSVMs in terms of computational complexity and statistical consistency.