Robust Probabilistic Calibration

@inproceedings{Rping2006RobustPC,
  title={Robust Probabilistic Calibration},
  author={Stefan R{\"u}ping},
  booktitle={ECML},
  year={2006}
}
  • S. Rüping
  • Published in ECML 18 September 2006
  • Computer Science
Probabilistic calibration is the task of producing reliable estimates of the conditional class probability P(class | observation) from the outputs of numerical classifiers. A recent comparative study [1] revealed that Isotonic Regression [2] and Platt Calibration [3] are the most effective probabilistic calibration techniques for a wide range of classifiers. This paper will demonstrate that these methods are sensitive to outliers in the data. An improved calibration method will be introduced that… 
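The two baseline calibrators named above are easy to sketch. Below is a minimal illustration with scikit-learn, assuming a held-out calibration set of raw classifier scores and binary labels (synthetic here); plain logistic regression stands in for Platt's regularized sigmoid fit.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
scores = rng.normal(size=200)                          # raw decision values f(x)
labels = (scores + rng.normal(scale=1.0, size=200) > 0).astype(int)

# Platt calibration: fit a sigmoid P(y=1 | f) on the scores. Plain logistic
# regression is used as a stand-in for Platt's regularized sigmoid fit.
platt = LogisticRegression().fit(scores.reshape(-1, 1), labels)
p_platt = platt.predict_proba(scores.reshape(-1, 1))[:, 1]

# Isotonic regression: fit a monotone non-decreasing map from score to probability.
iso = IsotonicRegression(out_of_bounds="clip")
p_iso = iso.fit_transform(scores, labels)
```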

Reliable Calibrated Probability Estimation in Classification

TLDR
This paper proposes an improvement of calibration with isotonic regression and binning by using a bootstrapping technique, yielding methods named boot-isotonic regression and boot-binning, respectively, and shows that the new methods outperform the basic isotonic regression and binning methods in most configurations.
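The bootstrapping idea is straightforward to sketch: fit isotonic regression on many bootstrap resamples of the calibration set and combine the resulting maps. A hedged illustration, assuming the aggregation is a pointwise average (the paper's exact aggregation may differ):

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def boot_isotonic(scores, labels, n_boot=100, seed=0):
    rng = np.random.default_rng(seed)
    n = len(scores)
    fits = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)               # bootstrap resample
        iso = IsotonicRegression(out_of_bounds="clip")
        iso.fit(scores[idx], labels[idx])
        fits.append(iso)

    def calibrate(new_scores):
        # Average the bootstrap calibration maps pointwise.
        return np.mean([f.predict(new_scores) for f in fits], axis=0)

    return calibrate
```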

Probability Calibration Trees

TLDR
This work proposes probability calibration trees, a modification of logistic model trees that identifies regions of the input space in which different probability calibration models are learned to improve performance.
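A loose sketch of the "different calibration model per region" idea, using a shallow decision tree as a stand-in for the paper's logistic model trees; the function names and the per-leaf sigmoid choice are illustrative assumptions, not the paper's construction.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

def fit_calibration_tree(X, scores, labels, max_depth=2):
    # Partition the input space with a shallow tree trained on the labels.
    tree = DecisionTreeClassifier(max_depth=max_depth).fit(X, labels)
    leaf_of = tree.apply(X)
    calibrators = {}
    for leaf in np.unique(leaf_of):
        mask = leaf_of == leaf
        if len(np.unique(labels[mask])) < 2:
            # Degenerate leaf: fall back to the constant empirical rate.
            calibrators[leaf] = float(labels[mask].mean())
        else:
            # Separate sigmoid calibrator per region of the input space.
            calibrators[leaf] = LogisticRegression().fit(
                scores[mask].reshape(-1, 1), labels[mask])

    def predict_proba(X_new, scores_new):
        out = np.empty(len(scores_new))
        for i, leaf in enumerate(tree.apply(X_new)):
            cal = calibrators[leaf]
            out[i] = cal if isinstance(cal, float) else \
                cal.predict_proba([[scores_new[i]]])[0, 1]
        return out

    return predict_proba
```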

Obtaining Accurate Probabilities Using Classifier Calibration.

TLDR
A suite of parametric and non-parametric methods for calibrating the output of classification and prediction models is presented, along with a novel framework for deriving calibrated probabilities of causal relationships from observational data that improves the precision and recall of edge predictions.
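One classic non-parametric method in this family is histogram binning, which is easy to sketch. A minimal version with equal-frequency bins, assuming continuous scores so that no bin is empty:

```python
import numpy as np

def fit_binning(scores, labels, n_bins=10):
    # Equal-frequency bin edges from the calibration scores.
    edges = np.quantile(scores, np.linspace(0, 1, n_bins + 1))
    which = np.digitize(scores, edges[1:-1])           # bin index per score
    # Each bin's calibrated output is its empirical positive rate.
    rates = np.array([labels[which == b].mean() for b in range(n_bins)])

    def calibrate(new_scores):
        return rates[np.digitize(new_scores, edges[1:-1])]

    return calibrate
```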

Probabilistic Novelty Detection With Support Vector Machines

TLDR
The development of a probabilistic calibration technique for one-class SVMs, such that on-line novelty detection may be performed in a probabilistic manner, and the demonstration of the advantages of the proposed method (in comparison to the conventional one-class SVM methodology) using case studies.

Perplexed Bayes Classifier

TLDR
A modification to the Naive Bayes classification algorithm is proposed which improves the classifier’s posterior probability estimates without affecting its performance, and the resulting classifier is called the Perplexed Bayes classifier.

Threshold Choice Methods: the Missing Link

TLDR
The analysis provides a comprehensive view of performance metrics as well as a systematic approach to loss minimisation, derives several connections between the aforementioned performance metrics, and highlights the role of calibration in choosing the threshold choice method.

Confidence-Based Feature Acquisition to Minimize Training and Test Costs

TLDR
This work presents Confidence-based Feature Acquisition, a novel supervised learning method for acquiring missing feature values when there is missing data at both training and test time, and finds that CFA’s accuracy is at least as high as the other methods, while incurring significantly lower feature acquisition costs.

Pruning of Rules and Rule Sets

TLDR
Pre-pruning and post-pruning are two standard techniques for avoiding overfitting in rule learning: pre-pruning deals with overfitting during learning, while post-pruning addresses the problem after an overfitting rule set has been learned.

A unified view of performance metrics: translating threshold choice into expected classification loss

TLDR
This analysis provides a comprehensive view of performance metrics as well as a systematic approach to loss minimisation which can be summarised as follows: given a model, apply the threshold choice methods that correspond with the available information about the operating condition, and compare their expected losses.
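The recipe can be made concrete with a toy example. Assuming calibrated probabilities and letting c denote the relative cost of a false positive, the Bayes-optimal rule thresholds at t = c, while a fixed t = 0.5 ignores the operating condition; the data and cost model below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.uniform(size=10_000)                       # calibrated probabilities
y = (rng.uniform(size=10_000) < p).astype(int)     # labels consistent with p

def expected_loss(threshold, c):
    pred = p >= threshold
    fp = np.mean(pred & (y == 0))                  # false-positive rate
    fn = np.mean(~pred & (y == 1))                 # false-negative rate
    return c * fp + (1 - c) * fn                   # cost-weighted 0/1 loss

# Thresholding at t = c (score-driven choice) beats the fixed t = 0.5
# whenever the operating condition is asymmetric.
for c in (0.1, 0.5, 0.9):
    print(f"c={c}: fixed-0.5 loss {expected_loss(0.5, c):.4f}, "
          f"score-driven loss {expected_loss(c, c):.4f}")
```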

Tree-structured multiclass probability estimators

TLDR
It is observed that nested dichotomies systematically produce under-confident predictions, even if the binary classifiers are well calibrated and especially when the number of classes is high; substantial performance gains can therefore be made when probability calibration methods are also applied to the internal models.
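A three-class toy example of how a nested dichotomy composes binary estimates (structure and numbers illustrative): each class probability is a product of binary probabilities along the tree path, so miscalibration of the internal models compounds.

```python
# Dichotomy over classes {A, B, C}: the root model splits {A} vs {B, C},
# and a second model splits {B} vs {C}.
p_root = 0.30   # P(class in {A}) from the first binary model
p_bc = 0.60     # P(class = B | class in {B, C}) from the second model

probs = {"A": p_root,
         "B": (1 - p_root) * p_bc,
         "C": (1 - p_root) * (1 - p_bc)}
assert abs(sum(probs.values()) - 1.0) < 1e-12
print(probs)    # {'A': 0.3, 'B': 0.42, 'C': 0.28}
```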

References

Transforming classifier scores into accurate multiclass probability estimates

TLDR
This work shows how to obtain accurate probability estimates for multiclass problems by combining calibrated binary probability estimates, and proposes a new method for obtaining calibrated two-class probability estimates that can be applied to any classifier that produces a ranking of examples.
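The simplest combination scheme is easy to sketch: calibrate each one-vs-rest output, then renormalise. The paper also studies pairwise coupling, which this toy example does not cover.

```python
import numpy as np

# Calibrated one-vs-rest probabilities for 3 classes on a single example.
p_ovr = np.array([0.70, 0.40, 0.10])
p_multi = p_ovr / p_ovr.sum()     # renormalise so the estimates sum to 1
print(p_multi)                    # [0.5833... 0.3333... 0.0833...]
```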

Predicting good probabilities with supervised learning

We examine the relationship between the predictions made by different learning algorithms and true posterior probabilities. We show that maximum margin methods such as boosted trees and boosted… 
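The standard diagnostic behind such comparisons is the reliability diagram: bin the predicted probabilities and compare each bin's mean prediction with the observed positive fraction. A sketch with scikit-learn on synthetic, deliberately over-confident predictions:

```python
import numpy as np
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(0)
p_pred = rng.uniform(size=5_000)
# Simulate an over-confident model: true P(y=1) is pulled toward 0.5.
p_true = 0.5 + 0.6 * (p_pred - 0.5)
y = (rng.uniform(size=5_000) < p_true).astype(int)

frac_pos, mean_pred = calibration_curve(y, p_pred, n_bins=10)
for mp, fp in zip(mean_pred, frac_pos):
    print(f"predicted {mp:.2f} -> observed {fp:.2f}")
```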

A Simple Method For Estimating Conditional Probabilities For SVMs

TLDR
Several algorithms that scale the SVM decision function to obtain an estimate of the conditional class probability are compared, and a new, simple and fast method is derived from theoretical arguments and empirically compared to the existing approaches.

Least Median of Squares Regression

Classical least squares regression consists of minimizing the sum of the squared residuals. Many authors have produced more robust versions of this estimator by replacing the square by… 
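The LMS objective replaces the sum of squared residuals with their median, which makes the fit resistant to up to half the data being contaminated. A hedged sketch using the usual random-sampling approximation for a line fit; this illustrates the objective, not Rousseeuw's exact algorithm.

```python
import numpy as np

def lms_line(x, y, n_trials=500, seed=0):
    rng = np.random.default_rng(seed)
    best, best_med = None, np.inf
    for _ in range(n_trials):
        # Fit an exact line through a random pair of points.
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue
        slope = (y[j] - y[i]) / (x[j] - x[i])
        intercept = y[i] - slope * x[i]
        # Keep the candidate with the smallest median squared residual.
        med = np.median((y - (slope * x + intercept)) ** 2)
        if med < best_med:
            best, best_med = (slope, intercept), med
    return best

# A few gross outliers barely move the LMS fit, unlike least squares.
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=100)
y = 2 * x + 1 + rng.normal(scale=0.3, size=100)
y[:10] += 50                                  # contaminate 10% of the data
print(lms_line(x, y))                         # ~ (2.0, 1.0)
```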

Statistical Comparisons of Classifiers over Multiple Data Sets

  • J. Demšar
  • Published in J. Mach. Learn. Res. 2006
  • Computer Science
TLDR
A set of simple, yet safe and robust non-parametric tests for statistical comparisons of classifiers is recommended: the Wilcoxon signed ranks test for comparison of two classifiers and the Friedman test with the corresponding post-hoc tests for comparisons of more classifiers over multiple data sets.
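Both recommended tests are available in SciPy. A toy example, with rows as data sets and columns as classifiers (the accuracy numbers are illustrative):

```python
import numpy as np
from scipy.stats import wilcoxon, friedmanchisquare

acc = np.array([
    [0.81, 0.79, 0.75],
    [0.90, 0.88, 0.86],
    [0.72, 0.74, 0.70],
    [0.85, 0.80, 0.78],
    [0.66, 0.65, 0.60],
    [0.93, 0.91, 0.90],
])

# Two classifiers: Wilcoxon signed-ranks test on paired per-data-set scores.
print(wilcoxon(acc[:, 0], acc[:, 1]))

# More than two classifiers: Friedman test across all columns.
print(friedmanchisquare(acc[:, 0], acc[:, 1], acc[:, 2]))
```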

Robust Regression and Outlier Detection

TLDR
This book presents the statistical treatment of outliers, covering robust regression methods and the one-dimensional location setting.

Robust Statistics

The classical books on this subject are Hampel et al. (1986); Huber (1981), with somewhat simpler (but partial) introductions by Rousseeuw & Leroy (1987); Staudte & Sheather (1990). The dates reflect… 

UCI Repository of machine learning databases

Classification rules in standardized partition spaces