Corpus ID: 116539307

Generalizations of the Bias/Variance Decomposition for Prediction Error

@inproceedings{James1997GeneralizationsOT,
  title={Generalizations of the Bias/Variance Decomposition for Prediction Error},
  author={Gareth M. James and Trevor J. Hastie},
  year={1997}
}
The bias and variance of a real-valued random variable, using squared error loss, are well understood. However, because of recent developments in classification techniques it has become desirable to extend these concepts to general random variables and loss functions. The 0-1 (misclassification) loss function with categorical random variables has been of particular interest. We explore the concepts of variance and bias and develop a decomposition of the prediction error into functions of the…
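For context, the squared-error case the abstract refers to is the standard textbook identity (stated here for orientation, not quoted from the paper). Assuming Y = f(x) + ε with E[ε] = 0 and Var(ε) = σ², and an estimator \hat{f} fit on an independent training sample,

\[
\mathrm{E}\bigl[(Y - \hat{f}(x))^2\bigr]
  = \sigma^2 + \bigl(f(x) - \mathrm{E}[\hat{f}(x)]\bigr)^2 + \mathrm{Var}\bigl(\hat{f}(x)\bigr),
\]

i.e. irreducible noise plus squared bias plus variance. For general loss functions, notably 0-1 loss on categorical variables, no such clean additive split is automatic, which is the gap the paper addresses.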
General bias/variance decomposition with target independent variance of error functions derived from the exponential family of distributions
  • J. Hansen, T. Heskes
  • Computer Science
  • Proceedings 15th International Conference on Pattern Recognition. ICPR-2000
  • 2000
TLDR
It is proved that this family of error functions contains all error functions decomposable in that manner, and a useful approximation of the ambiguity, quadratic in the ensemble coefficients, is presented.
A Unified Bias-Variance Decomposition and its Applications
This paper presents a unified bias-variance decomposition that is applicable to squared loss, zero-one loss, variable misclassification costs, and other loss functions. The unified decomposition…
A Unified Bias-Variance Decomposition
The bias-variance decomposition is a very useful and widely-used tool for understanding machine-learning algorithms. It was originally developed for squared loss. In recent years, several authors…
Bias/Variance Decompositions for Likelihood-Based Estimators
  • T. Heskes
  • Computer Science, Medicine
  • Neural Computation
  • 1998
TLDR
A similar simple decomposition is derived, valid for any kind of error measure that, when using the appropriate probability model, can be derived from a Kullback-Leibler divergence or log-likelihood.
Bias-Variance Decomposition for model selection
Bias-variance decomposition is known to be a powerful tool when explaining the success of learning methods. By now it’s common practice to analyze newly developed techniques in terms of their bias…
Pooling for Combination of Multilevel Forecasts
TLDR
It is shown that previously proposed pooling based only on error variances cannot fully exploit the complementary information present in a set of diverse forecasts to be combined, and that covariance values could be reliably calculated and taken into account during the pooling process.
Combination of Multi Level Forecasts
This paper provides a discussion of the effects of different multi-level learning approaches on the resulting out-of-sample forecast errors in the case of difficult real-world forecasting problems…
Regularization of Portfolio Allocation
The mean-variance optimization (MVO) theory of Markowitz (1952) for portfolio selection is one of the most important methods used in quantitative finance. This portfolio allocation needs two input…
Error coding and PaCT's
TLDR
A new class of plug-in classification techniques has recently been developed in the statistics literature, and some motivation for their success is given.
Error coding and PaCT's
  • G. James
A new class of plug-in classification techniques has recently been developed in the statistics literature. A plug-in classification technique (PaCT) is a method that takes a standard classifier (such…

References

SHOWING 1-5 OF 5 REFERENCES
Bias, Variance and Prediction Error for Classification Rules
TLDR
A decomposition of prediction error into its natural components is developed, and a bootstrap estimate of the error of a "bagged" classifier is obtained.
On Bias, Variance, 0/1-Loss, and the Curse-of-Dimensionality
  • J. Friedman
  • Mathematics, Computer Science
  • Data Mining and Knowledge Discovery
  • 1997
TLDR
This work can dramatically mitigate the effect of the bias associated with some simple estimators like “naive” Bayes, and the bias induced by the curse-of-dimensionality on nearest-neighbor procedures.
Error-Correcting Output Coding Corrects Bias and Variance
TLDR
An investigation of why the ECOC technique works, particularly when employed with decision-tree learning algorithms, shows that it can reduce the variance of the learning algorithm.
Solving Multiclass Learning Problems via Error-Correcting Output Codes
TLDR
It is demonstrated that error-correcting output codes provide a general-purpose method for improving the performance of inductive learning programs on multiclass problems.
Experiments with a New Boosting Algorithm
TLDR
This paper describes experiments carried out to assess how well AdaBoost, with and without pseudo-loss, performs on real learning problems, and compares boosting to Breiman's "bagging" method when used to aggregate various classifiers.