Minimum Viable Model Estimates for Machine Learning Projects

@article{Hawkins2021MinimumVM,
  title={Minimum Viable Model Estimates for Machine Learning Projects},
  author={John Hawkins},
  journal={ArXiv},
  year={2021},
  volume={abs/2101.00346}
}
Prioritization of machine learning projects requires estimates of both the potential ROI of the business case and the technical difficulty of building a model with the required characteristics. In this work we present a technique for estimating the minimum required performance characteristics of a predictive model given a set of information about how it will be used. This technique will result in robust, objective comparisons between potential projects. The resulting estimates will allow data… 
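To make the estimate concrete: a minimal sketch, assuming a simple unit-economics framing, of the lowest precision at which a binary classifier still meets a required financial return. The function minimum_viable_precision and its parameters are illustrative inventions for this sketch, not the MinViME API.

def minimum_viable_precision(benefit_tp, cost_fp, base_rate,
                             volume, recall, required_return=0.0):
    # True positives the model would capture per period at this recall.
    tp = base_rate * volume * recall
    # Net value = benefit_tp * tp - cost_fp * fp, where a model with
    # precision p produces fp = tp * (1 - p) / p false positives.
    # Solving  benefit_tp*tp - cost_fp*tp*(1 - p)/p >= required_return  for p:
    denom = tp * (benefit_tp + cost_fp) - required_return
    if denom <= 0:
        return None          # target unreachable even at precision 1.0
    p = cost_fp * tp / denom
    return p if p <= 1.0 else None

# Example: 100,000 cases/year, 1% positive rate, $500 gained per true
# positive, $50 lost per false positive, $100,000/year target at 50% recall.
print(minimum_viable_precision(500, 50, 0.01, 100_000, 0.5, 100_000))  # ~0.143

With required_return=0 this reduces to the classic break-even precision cost_fp / (benefit_tp + cost_fp).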
1 Citation

Figures and Tables from this paper

MinViME/Minimum Viable Model Estimator

References

Showing 1-10 of 12 references
Guidelines for assessing the value of a predictive algorithm: a case study
Presents a case study in which a machine-learning algorithm is used for bid qualification, shows how to apply classification matrices for business value assessment, and proposes guidelines and metrics for interpreting the impact in practical solutions.
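The classification-matrix idea above can be sketched as weighting a confusion matrix element-wise by a matrix of business payoffs; all numbers below are invented for illustration.

import numpy as np

confusion = np.array([[900,  40],    # rows: actual class (0, 1)
                      [ 25,  35]])   # cols: predicted class (0, 1)
payoff = np.array([[   0,  -50],     # payoff[i, j] = value of predicting j
                   [-200,  500]])    # when the actual class is i
print((confusion * payoff).sum())    # net business value: 10500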
MetaCost: a general method for making classifiers cost-sensitive
Proposes MetaCost, a principled method for making an arbitrary classifier cost-sensitive by wrapping a cost-minimizing procedure around it; the underlying classifier is treated as a black box, requiring no knowledge of its internals and no changes to it.
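A condensed sketch of the MetaCost procedure under scikit-learn conventions: estimate class probabilities by bagging, relabel each example with the least-expected-cost class, then retrain on the new labels. This simplifies the published algorithm (it ignores out-of-bag voting, for one) and assumes classes are encoded 0..k-1 with every class present in each resample.

import numpy as np
from sklearn.base import clone

def metacost_relabel(estimator, X, y, cost, n_bags=10, seed=0):
    # cost[i, j] = cost of predicting class i when the true class is j
    rng = np.random.default_rng(seed)
    n, k = X.shape[0], cost.shape[0]
    probs = np.zeros((n, k))
    for _ in range(n_bags):
        idx = rng.integers(0, n, size=n)   # bootstrap resample
        probs += clone(estimator).fit(X[idx], y[idx]).predict_proba(X)
    probs /= n_bags
    expected_cost = probs @ cost.T         # E[cost] of each possible prediction
    return expected_cost.argmin(axis=1)    # least-cost labels; retrain on these

# final_model = clone(estimator).fit(X, metacost_relabel(estimator, X, y, cost))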
The Foundations of Cost-Sensitive Learning
Argues that changing the balance of negative and positive training examples has little effect on the classifiers produced by standard Bayesian and decision-tree learning methods; the recommended approach is to learn a classifier from the training set and then compute optimal decisions explicitly from the classifier's probability estimates.
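In the two-class case with zero cost for correct decisions, that "compute optimal decisions explicitly" step reduces to thresholding the predicted probability at cost(FP) / (cost(FP) + cost(FN)):

import numpy as np

def optimal_threshold(cost_fp, cost_fn):
    # Predict positive when P(y=1|x) exceeds this value
    # (two classes, correct decisions cost nothing).
    return cost_fp / (cost_fp + cost_fn)

probs = np.array([0.05, 0.20, 0.60])          # P(y=1|x) from any calibrated model
print(probs >= optimal_threshold(1.0, 10.0))  # threshold ~0.091 -> [False True True]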
Optimal threshold estimation for binary classifiers using game theory
Argues that treating a classifier as a player in a zero-sum game lets the minimax principle determine the optimal operating point, and shows that the empirical "specificity equals sensitivity" threshold condition maximizes robustness against uncertainty in both the prevalence of positives and the classification costs.
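Locating that "specificity equals sensitivity" point on an empirical ROC curve is a short computation; a sketch assuming scikit-learn, with an invented helper name:

import numpy as np
from sklearn.metrics import roc_curve

def balanced_threshold(y_true, scores):
    # Threshold where sensitivity (TPR) is closest to specificity (1 - FPR).
    fpr, tpr, thresholds = roc_curve(y_true, scores)
    return thresholds[np.argmin(np.abs(tpr - (1 - fpr)))]

print(balanced_threshold([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.4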
Some thoughts about the design of loss functions
Argues that there is no need to stick to standard loss functions when computational methods such as cross-validation are applied; the main message is that choosing a loss function in practice means translating a researcher's informal aim or interest into the formal language of mathematics.
The use of the area under the ROC curve in the evaluation of machine learning algorithms
Cost-sensitive boosting algorithms: Do we really need them?
Critiques the boosting literature using four theoretical frameworks (Bayesian decision theory, the functional gradient descent view, margin theory, and probabilistic modelling), finding that only three algorithms are fully supported.
Medicare fraud detection using neural networks
The first study to compare multiple data-level and algorithm-level deep learning methods across a range of class distributions; it provides a unique analysis of the relationship between minority class size and optimal decision threshold, and reports state-of-the-art performance on the given Medicare fraud detection task.
Enhanced Classification Model for Cervical Cancer Dataset based on Cost Sensitive Classifier
Presents a cost-sensitive classification model with three main stages: preprocessing the original data, building a classification model based on a decision-tree classifier with cost sensitivity, and evaluating the proposed model on multiple metrics together with cross-validation.
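Those three stages map naturally onto a scikit-learn pipeline. The sketch below is a rough approximation only, since the paper's exact preprocessing, costs, and metrics are not given here; cost sensitivity is stood in for by class weights, on synthetic data.

from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_validate

X, y = make_classification(n_samples=500, weights=[0.9], random_state=0)
model = make_pipeline(                                    # stage 1: preprocessing
    StandardScaler(),
    DecisionTreeClassifier(class_weight={0: 1, 1: 10}),   # stage 2: cost-aware tree
)
scores = cross_validate(model, X, y, cv=10,               # stage 3: evaluation
                        scoring=["accuracy", "recall", "f1"])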
Small Sample Size Effects in Statistical Pattern Recognition: Recommendations for Practitioners
Discusses the effects of sample size on feature selection and error estimation for several types of classifiers, with an emphasis on practical advice for designers and users of statistical pattern recognition systems.