Corpus ID: 15331432

A game-theoretic framework for classifier ensembles using weighted majority voting with local accuracy estimates

@article{Georgiou2013AGF,
  title={A game-theoretic framework for classifier ensembles using weighted majority voting with local accuracy estimates},
  author={Harris V. Georgiou and Michael E. Mavroforakis},
  journal={ArXiv},
  year={2013},
  volume={abs/1302.0540}
}
In this paper, a novel approach for the optimal combination of binary classifiers is proposed. The classifier combination problem is approached from a Game Theory perspective. The proposed framework of adapted weighted majority rules (WMR) is tested against common rank-based, Bayesian and simple majority models, as well as two soft-output averaging rules. Experiments with ensembles of Support Vector Machines (SVM), Ordinary Binary Tree Classifiers (OBTC) and weighted k-nearest-neighbor (w/k-NN… 
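The core mechanism named in the title, weighted majority voting with weights tied to local accuracy estimates, can be illustrated with a minimal sketch. The k-nearest-neighbour validation window and the accuracy-as-weight rule below are illustrative assumptions, not the paper's game-theoretic weight derivation; the classifiers are assumed to be already fitted and to output labels in {-1, +1}.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def wmr_predict(classifiers, X_val, y_val, x_test, k=10):
    """Weighted majority vote over binary classifiers with labels in {-1, +1}.
    Each weight is the classifier's accuracy on the k validation samples nearest
    to the test point (a simple stand-in for a local accuracy estimate)."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_val)
    _, idx = nn.kneighbors(x_test.reshape(1, -1))
    X_loc, y_loc = X_val[idx[0]], y_val[idx[0]]

    score = 0.0
    for clf in classifiers:
        local_acc = np.mean(clf.predict(X_loc) == y_loc)      # local accuracy estimate
        score += local_acc * clf.predict(x_test.reshape(1, -1))[0]
    return int(np.sign(score)) if score != 0 else 1           # break ties toward +1
```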

Citations

Collective decision efficiency and optimal voting mechanisms: A comprehensive overview for multi-classifier models
TLDR
A new game-theoretic approach for combining multiple classifiers is proposed, together with a fully adaptive version of WMRs as a statistically invariant way of adjusting the design of the optimal WMR to the arbitrary, non-symmetrical properties of the underlying feature space.
Game theory models for spectral band grouping and classifier ensembles for hyperspectral image classification
  • L. Bruce
  • Environmental Science
    2014 6th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS)
  • 2014
TLDR
The author incorporates each of the proposed approaches, (i) and (ii), into a multi-classifier decision fusion (MCDF) system for automated ground cover classification with hyperspectral imagery, presenting significant improvements over existing methods.
Games People Play: Conflicts, mechanisms and collective decision-making in expert committees
In this paper, a gentle introduction to Game Theory is presented in the form of basic concepts and examples. Minimax and Nash's theorem are introduced as the formal definitions for optimal strategies
Games people play: An overview of strategic decision-making theory in conflict situations
In this paper, a gentle introduction to Game Theory is presented in the form of basic concepts and examples. Minimax and Nash's theorem are introduced as the formal definitions for optimal strategies
Game theory based data fusion for precision agriculture applications
  • L. Bruce, Daniel Reynolds
  • Environmental Science
    2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS)
  • 2016
TLDR
The author incorporates each of the proposed approaches into a multi-classifier decision fusion (MCDF) system for automated ground cover classification of hyperspectral imagery collected via an unmanned airborne system (UAS) over multiple dates.
Elements of Game Theory - Part I: Foundations, acts and mechanisms
In this paper, a gentle introduction to Game Theory is presented in the form of basic concepts and examples. Minimax and Nash's theorem are introduced as the formal definitions for optimal strategies
SV000gg at SemEval-2016 Task 11: Heavy Gauge Complex Word Identification with System Voting
TLDR
The SV000gg systems are introduced: two ensemble methods for the Complex Word Identification task of SemEval 2016 that employ Performance-Oriented Soft Voting, which weights votes according to the voter's performance rather than its prediction confidence, allowing completely heterogeneous systems to be combined.
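As a rough illustration of performance-oriented soft voting as summarised above, the sketch below weights each system's hard vote by a held-out performance score (e.g. accuracy or F1). The function name and data layout are assumptions made for the example, not the SV000gg implementation.

```python
import numpy as np

def performance_oriented_soft_vote(predictions, performances, n_classes):
    """`predictions`: list of per-system arrays of integer class labels.
    `performances`: matching held-out scores (e.g. accuracy or F1).
    Each system's vote counts in proportion to its score, not to any
    per-prediction confidence, so heterogeneous systems can be mixed."""
    votes = np.zeros((len(predictions[0]), n_classes))
    for preds, perf in zip(predictions, performances):
        for i, label in enumerate(preds):
            votes[i, label] += perf
    return votes.argmax(axis=1)
```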
A Combined Offline and Online Algorithm for Real-Time and Long-Term Classification of Sheep Behaviour: Novel Approach for Precision Livestock Farming
TLDR
This study presents a combined offline and online learning algorithm that handles concept drift and is deemed by the authors a useful mechanism for long-term, in-the-field monitoring systems.

References

SHOWING 1-10 OF 80 REFERENCES
A Game-Theoretic Approach to Weighted Majority Voting for Combining SVM Classifiers
TLDR
Experimental results with combined support vector machine (SVM) classifiers on benchmark classification tasks have proven that WMR, employing the theoretically optimal solution for combination weights proposed in this work, outperformed all the other rank-based, simple majority and soft-output averaging methods.
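For context, the classical benchmark result for weighted majority rules (under the assumption of independent classifiers with known accuracies p_i) sets each combination weight to the log-odds of that accuracy. The cited work derives its weights within a game-theoretic formulation, so the snippet below is only the standard baseline, not the paper's solution.

```python
import numpy as np

def log_odds_weights(accuracies, eps=1e-12):
    """Classical optimal weights for weighted majority voting over independent
    binary experts: w_i = log(p_i / (1 - p_i)), where p_i is expert i's accuracy."""
    p = np.clip(np.asarray(accuracies, dtype=float), eps, 1.0 - eps)
    return np.log(p / (1.0 - p))

# Example: accuracies 0.9, 0.7 and 0.55 yield strongly unequal voting power.
print(log_odds_weights([0.9, 0.7, 0.55]))
```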
The Random Subspace Method for Constructing Decision Forests
  • T. Ho
  • Computer Science
    IEEE Trans. Pattern Anal. Mach. Intell.
  • 1998
TLDR
A method for constructing decision-tree-based classifiers is proposed that maintains the highest accuracy on training data and improves generalization accuracy as the ensemble grows in complexity.
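A minimal sketch of the random subspace idea, assuming integer class labels and scikit-learn decision trees: each tree sees all training samples but only a random subset of the features, and the forest votes by simple majority.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fit_random_subspace_forest(X, y, n_trees=50, subspace_frac=0.5, seed=0):
    """Train each tree on all samples but on a random feature subset."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    k = max(1, int(subspace_frac * n_features))
    forest = []
    for _ in range(n_trees):
        feats = rng.choice(n_features, size=k, replace=False)
        forest.append((feats, DecisionTreeClassifier().fit(X[:, feats], y)))
    return forest

def predict_random_subspace_forest(forest, X):
    """Unweighted majority vote across the trees (integer labels assumed)."""
    votes = np.stack([tree.predict(X[:, feats]) for feats, tree in forest])
    return np.apply_along_axis(
        lambda col: np.bincount(col.astype(int)).argmax(), 0, votes)
```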
"Fuzzy" versus "nonfuzzy" in combining classifiers designed by Boosting
TLDR
In the authors' experiments, the fuzzy combination methods performed consistently better than the nonfuzzy methods, and the weighted majority vote showed a stable performance, though slightly inferior to the performance of the fuzzy combiners.
Confidence-based classifier design
Transforming classifier scores into accurate multiclass probability estimates
TLDR
This work shows how to obtain accurate probability estimates for multiclass problems by combining calibrated binary probability estimates, and proposes a new method for obtaining calibrated two-class probability estimates that can be applied to any classifier that produces a ranking of examples.
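One simple way to realise the idea summarised above (assuming the binary scores have already been calibrated, e.g. by Platt scaling or isotonic regression) is to normalise calibrated one-vs-all estimates into a multiclass distribution; the cited work also treats pairwise coupling, which this sketch does not cover.

```python
import numpy as np

def multiclass_from_calibrated_ova(calibrated_probs):
    """`calibrated_probs[i, c]` is the calibrated P(class c vs. rest) for sample i.
    Normalising each row turns the per-class binary estimates into a multiclass
    probability distribution."""
    p = np.clip(np.asarray(calibrated_probs, dtype=float), 1e-12, None)
    return p / p.sum(axis=1, keepdims=True)
```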
The Network of Weighted Majority Rules and Weighted Majority Games
The network organizes the weighted majority rules (WMR) and weighted majority games (WMG) in space. The WMRs are the potentially optimal decision rules in uncertain dichotomous choice
Adaptive Selection of Image Classifiers
TLDR
It is pointed out that adaptive selection does not require the assumption of uncorrelated errors, thus simplifying the choice of classifiers forming a Multiple Classifier System.
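A hedged sketch of adaptive (dynamic) classifier selection as summarised above: estimate each classifier's accuracy on the validation samples nearest to the query and let only the locally best classifier decide, which is why no uncorrelated-error assumption is needed. The k-NN neighbourhood below is an illustrative choice of competence region.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def dynamic_selection_predict(classifiers, X_val, y_val, x_test, k=10):
    """Pick the classifier with the best accuracy on the k validation samples
    nearest to the test point and return its prediction."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_val)
    _, idx = nn.kneighbors(x_test.reshape(1, -1))
    X_loc, y_loc = X_val[idx[0]], y_val[idx[0]]
    local_acc = [np.mean(clf.predict(X_loc) == y_loc) for clf in classifiers]
    best = int(np.argmax(local_acc))
    return classifiers[best].predict(x_test.reshape(1, -1))[0]
```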
Rotation Forest: A New Classifier Ensemble Method
TLDR
This work examined the rotation forest ensemble on a random selection of 33 benchmark data sets from the UCI repository and compared it with bagging, AdaBoost, and random forest; the results prompted an investigation into the diversity-accuracy landscape of the ensemble models.
Combining classifiers
We develop a common theoretical framework for combining classifiers which use distinct pattern representations and show that many existing schemes can be considered as special cases of compound
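Two of the combination schemes that fall out of this kind of common framework are the sum and product rules over per-classifier posterior estimates. The sketch below is a generic rendering under the assumption that each classifier outputs class posteriors; it is not the reference's full derivation.

```python
import numpy as np

def sum_rule(posteriors):
    """`posteriors`: array of shape (n_classifiers, n_samples, n_classes).
    Average the posterior estimates and pick the class with the largest mean."""
    return np.mean(posteriors, axis=0).argmax(axis=1)

def product_rule(posteriors, eps=1e-12):
    """Multiply the posterior estimates (sum of logs for numerical stability)
    and pick the class with the largest product."""
    return np.sum(np.log(np.clip(posteriors, eps, 1.0)), axis=0).argmax(axis=1)
```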
Leave One Out Error, Stability, and Generalization of Voting Combinations of Classifiers
TLDR
Novel bounds on the stability of combinations of any classifiers are derived that can be used to formally show that, for example, bagging increases the Stability of unstable learning machines.