Using Conflicts Among Multiple Base Classifiers to Measure the Performance of Stacking

Wei Fan and Philip K. Chan
We analyze the machine learning bias of stacking and point out the conflict problem. Conflicts are defined as base data with different class labels on which a set of base classifiers produces the same predictions. Based on conflicts, we propose a conflict-based accuracy estimate to determine the overall accuracy of a stacked classifier, and a conflict-based accuracy improvement estimate to determine the overall accuracy improvement over the base classifiers. We discuss some popular metrics for comparing…
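The conflict notion in the abstract can be sketched concretely: instances whose base classifiers emit identical prediction vectors are indistinguishable to the stacking meta-learner, so at best it can predict the majority true label within each such group. The sketch below illustrates this reading; the function name and the exact estimator formula are assumptions for illustration, not necessarily the paper's definition.

```python
from collections import Counter, defaultdict

def conflict_accuracy_estimate(base_preds, labels):
    """Illustrative conflict-based accuracy bound for stacking.

    base_preds: per-instance tuples of base-classifier predictions.
    labels: the true class label of each instance.

    Instances with identical base-prediction vectors form one group;
    a group with more than one distinct true label is a conflict.
    The meta-learner can at best get the majority label of each group
    right, giving an upper bound on stacked accuracy.
    (Hypothetical reading of the paper's idea, not its exact estimator.)
    """
    groups = defaultdict(list)
    for preds, y in zip(base_preds, labels):
        groups[tuple(preds)].append(y)
    # For each group, count the best-case correct predictions:
    # the size of the largest same-label subset.
    correct = sum(Counter(ys).most_common(1)[0][1] for ys in groups.values())
    return correct / len(labels)

# Two instances share prediction vector (0, 1) but disagree on the
# true label -> one conflict, so at most 3 of 4 can be correct.
print(conflict_accuracy_estimate(
    [(0, 1), (0, 1), (0, 0), (1, 1)],
    [0, 1, 1, 1]))  # -> 0.75
```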


