Predictive Multiplicity in Probabilistic Classification

@article{WatsonDaniels2022PredictiveMI,
  title={Predictive Multiplicity in Probabilistic Classification},
  author={Jamelle Watson-Daniels and David C. Parkes and Berk Ustun},
  journal={ArXiv},
  year={2022},
  volume={abs/2206.01131}
}
There may exist multiple models that perform almost equally well for any given prediction task. We examine how predictions change across these competing models. In particular, we study predictive multiplicity in probabilistic classification. We formally define measures for our setting and develop optimization-based methods to compute these measures for convex empirical risk minimization problems. We apply our methodology to gain insight into why predictive multiplicity arises. We demonstrate…
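
As an illustration of the kind of computation such optimization-based methods perform, here is a minimal sketch, not taken from the paper: it assumes a plain logistic-regression empirical risk minimizer and uses cvxpy to bound how much the predicted probability for a single point can vary across models whose empirical risk stays within a (1 + eps) factor of the optimum. The function name probability_range and the parameter eps are illustrative assumptions.

import numpy as np
import cvxpy as cp

def probability_range(X, y, x0, eps=0.01):
    """Lowest and highest predicted probability for x0 over near-optimal models.

    Illustrative sketch, not the paper's exact formulation.
    Assumes labels y take values in {-1, +1}.
    """
    n, d = X.shape
    w = cp.Variable(d)
    # Average logistic loss of a linear classifier w.
    risk = cp.sum(cp.logistic(-cp.multiply(y, X @ w))) / n

    # Step 1: best achievable empirical risk.
    cp.Problem(cp.Minimize(risk)).solve()
    best_risk = risk.value

    # Step 2: minimize / maximize the score of x0 while staying inside the
    # (1 + eps)-level set of the empirical risk (still a convex problem).
    bounds = []
    for sense in (cp.Minimize, cp.Maximize):
        cp.Problem(sense(x0 @ w), [risk <= (1 + eps) * best_risk]).solve()
        bounds.append(float(1.0 / (1.0 + np.exp(-(x0 @ w.value)))))
    return tuple(bounds)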

Citations

Rank List Sensitivity of Recommender Systems to Interaction Perturbations

A measure of stability for recommender systems, called Rank List Sensitivity (RLS), is introduced, which measures how rank lists generated by a given recommender system at test time change as a result of a perturbation in the training data.
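
For intuition, a minimal sketch of this kind of stability comparison (not the paper's exact RLS definition; both helper names are assumptions): compare the top-k lists a recommender produces before and after a training-data perturbation using top-k overlap and Kendall's tau.

from itertools import combinations

def topk_jaccard(list_a, list_b, k=10):
    """Jaccard overlap between the top-k items of two rank lists."""
    a, b = set(list_a[:k]), set(list_b[:k])
    return len(a & b) / len(a | b) if (a | b) else 1.0

def kendall_tau(list_a, list_b):
    """Kendall's tau over items that appear in both rank lists."""
    pos_b = {item: i for i, item in enumerate(list_b)}
    common = [item for item in list_a if item in pos_b]
    concordant = discordant = 0
    for x, y in combinations(common, 2):  # x precedes y in list_a
        if pos_b[x] < pos_b[y]:
            concordant += 1
        else:
            discordant += 1
    total = concordant + discordant
    return (concordant - discordant) / total if total else 1.0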

References

SHOWING 1-10 OF 54 REFERENCES

Selective Ensembles for Consistent Predictions

It is proved that prediction disagreement between selective ensembles is bounded, and empirically demonstrated that selective ensembles achieve consistent predictions and feature attributions while maintaining low abstention rates.

All Models are Wrong, but Many are Useful: Learning a Variable's Importance by Studying an Entire Class of Prediction Models Simultaneously

Model class reliance (MCR) is proposed as the range of variable importance (VI) values across all well-performing models in a prespecified class, which gives a more comprehensive description of importance by accounting for the fact that many prediction models, possibly of different parametric forms, may fit the data well.

On Counterfactual Explanations under Predictive Multiplicity

This work derives a general upper bound on the cost of counterfactual explanations under predictive multiplicity; the bound depends on a discrepancy notion between two classifiers that describes how differently they treat negatively predicted individuals.

Accounting for Model Uncertainty in Algorithmic Discrimination

This work proposes scalable convex proxies to obtain classifiers that exhibit predictive multiplicity and empirically shows that these methods are comparable in performance and up to four orders of magnitude faster than the current state of the art.

Optimized Risk Scores

This paper formulates a principled approach to learning risk scores that are fully optimized for feature selection, integer coefficients, and operational constraints, and presents a new cutting-plane algorithm to efficiently recover the optimal solution.

Underspecification Presents Challenges for Credibility in Modern Machine Learning

This work shows that underspecification appears in a wide variety of practical ML pipelines and argues for explicitly accounting for it in any modeling pipeline intended for real-world deployment.

Analyzing the role of model uncertainty for electronic health records

It is shown that RNNs with only Bayesian embeddings can capture model uncertainty more efficiently than ensembles, and the impact of model uncertainty across individual input features and patient subgroups is analyzed.

A tutorial on conformal prediction

This tutorial presents a self-contained account of the theory of conformal prediction and works through several numerical examples, showing how, under the assumption that successive examples are sampled independently from the same distribution, conformal prediction can be applied to any method for producing a point prediction ŷ.
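
A minimal split-conformal sketch under the tutorial's exchangeability assumption (the regression setting, the scikit-learn-style fitted model, and the function name are assumptions, not the tutorial's code): wrap any point predictor to obtain intervals with approximately 1 - alpha coverage.

import numpy as np

def split_conformal_interval(model, X_cal, y_cal, x_new, alpha=0.1):
    """Prediction interval for y at x_new from a fitted regression model.

    Assumes model has a scikit-learn-style .predict method and that the
    calibration set (X_cal, y_cal) was not used for fitting.
    """
    residuals = np.abs(y_cal - model.predict(X_cal))
    n = len(residuals)
    # Conformal quantile of the calibration residuals.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(residuals, level, method="higher")
    y_hat = model.predict(x_new.reshape(1, -1))[0]
    return y_hat - q, y_hat + q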

When Does Uncertainty Matter?: Understanding the Impact of Predictive Uncertainty in ML Assisted Decision Making

This work carries out user studies to systematically assess how people respond to different types of predictive uncertainty, i.e., posterior predictive distributions with different shapes and variances, in the context of ML-assisted decision making, and demonstrates that uncertainty is an effective tool for persuading humans to agree with model predictions.

Best Subset Selection via a Modern Optimization Lens

It is established via numerical experiments that the MIO approach performs better than Lasso and other popular sparse learning procedures in terms of achieving sparse solutions with good predictive power.
...