On Reject and Refine Options in Multicategory Classification

@article{Zhang2017OnRA,
  title={On Reject and Refine Options in Multicategory Classification},
  author={Chong Zhang and Wenbo Wang and Xingye Qiao},
  journal={Journal of the American Statistical Association},
  year={2017},
  volume={113},
  pages={730--745}
}
ABSTRACT
In many real applications of statistical learning, a decision made from misclassification can be too costly to afford; in this case, a reject option, which defers the decision until further investigation is conducted, is often preferred. In recent years, there has been much development for binary classification with a reject option. Yet, little progress has been made for the multicategory case. In this article, we propose margin-based multicategory classification methods with a reject…
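The abstract above describes deferring a decision via a reject option. A minimal sketch of the classic plug-in rule (Chow's rule, shown here as an illustration, not the paper's margin-based method) rejects whenever the top estimated class probability falls below 1 - d, where d is the rejection cost:

```python
import numpy as np

def classify_with_reject(probs, d=0.2):
    """Chow-style plug-in rule: predict the top class when its estimated
    probability is at least 1 - d, otherwise return -1 to signal 'reject'.
    probs: (n_samples, n_classes) array of class-probability estimates."""
    probs = np.asarray(probs)
    best = probs.argmax(axis=1)   # most likely class per sample
    conf = probs.max(axis=1)      # its estimated probability
    return np.where(conf >= 1.0 - d, best, -1)

preds = classify_with_reject([[0.90, 0.05, 0.05],   # confident
                              [0.40, 0.35, 0.25]],  # ambiguous
                             d=0.2)
print(preds)  # [ 0 -1]
```

Smaller d makes rejection cheaper, so the confidence bar 1 - d rises and more ambiguous points are deferred.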
Consistent algorithms for multiclass classification with an abstain option
TLDR
The goal is to design consistent algorithms for such n-class classification problems with a ‘reject option’; while such algorithms are known for the binary (n = 2) case, little has been understood for the general multiclass case.
Classification with Rejection Based on Cost-sensitive Classification
TLDR
A novel method of classification with rejection that learns an ensemble of cost-sensitive classifiers; unlike prior approaches, it avoids estimating class-posterior probabilities, resulting in improved classification accuracy.
Set-valued classification - overview via a unified framework
TLDR
The present survey reviews popular formulations of set-valued classification within a unified statistical framework that encompasses previously considered formulations, leads to new ones, and clarifies the underlying trade-offs of each.
Machine Learning with a Reject Option: A survey
TLDR
The conditions leading to two types of rejection, ambiguity rejection and novelty rejection, are introduced; existing architectures for models with a reject option are defined; and the standard strategies for training such models are described, relating traditional machine learning techniques to rejection.
Learning Confidence Sets using Support Vector Machines
TLDR
This work proposes a support vector classifier to construct confidence sets in a flexible manner and shows that the proposed learner can control the non-coverage rates and minimize the ambiguity with high probability.
Uncertainty Sets for Image Classifiers using Conformal Prediction
TLDR
An algorithm is presented that modifies any classifier to output a predictive set containing the true label with a user-specified probability, such as 90%, which provides a formal finite-sample coverage guarantee for every model and dataset.
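The conformal prediction entry above turns any classifier into set-valued output with a coverage guarantee. A simplified split-conformal sketch (using the basic 1 - p score rather than the paper's adaptive sets; all names here are illustrative):

```python
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.2):
    """Split-conformal prediction sets from held-out calibration data.
    Score = 1 - p(true class); qhat is a finite-sample-corrected quantile;
    a test point's set contains every class with probability >= 1 - qhat."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    qhat = np.quantile(scores, level)
    return test_probs >= 1.0 - qhat   # boolean mask: class is in the set

cal_probs = np.array([[0.8, 0.1, 0.1],
                      [0.2, 0.7, 0.1],
                      [0.3, 0.3, 0.4],
                      [0.6, 0.3, 0.1]])
cal_labels = np.array([0, 1, 2, 0])
sets = conformal_sets(cal_probs, cal_labels,
                      np.array([[0.5, 0.3, 0.2]]), alpha=0.2)
print(sets)  # [[ True False False]]
```

On hard inputs the set simply grows to include more classes, which is how the marginal 1 - alpha coverage guarantee is kept for every model.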
Exploratory Machine Learning with Unknown Unknowns
TLDR
The exploratory machine learning is proposed, which examines and investigates the training dataset by actively augmenting the feature space to discover potentially unknown labels.
Deep Partial Rank Aggregation for Personalized Attributes
TLDR
This paper proposes an end-to-end partial ranking model which consists of a deep backbone architecture and a probabilistic model that captures the generative process of the partial rank annotations and is equipped with an adaptive perception threshold.
Near-optimal Individualized Treatment Recommendations
TLDR
This work proposes two methods to estimate the optimal A-ITR within the outcome weighted learning (OWL) framework and shows the consistency of these methods and obtains an upper bound for the risk between the theoretically optimal recommendation and the estimated one.

References

Showing 1-10 of 79 references
Probability estimation for large-margin classifiers
TLDR
A novel method for estimating the class probability through sequential classifications, by using features of interval estimation of large-margin classifiers, which is highly competitive against alternatives, especially when the dimension of the input greatly exceeds the sample size.
Multicategory large-margin unified machines
TLDR
A new Multicategory LUM (MLUM) framework is proposed to investigate the behavior of soft versus hard classification in multicategory settings and the transition from soft to hard classifiers; theoretical and numerical results suggest that the proposed tuned MLUM yields very competitive performance.
Classification with a Reject Option using a Hinge Loss
TLDR
This work considers the problem of binary classification where the classifier can, for a particular cost, choose not to classify an observation and proposes a certain convex loss function φ, analogous to the hinge loss used in support vector machines (SVMs).
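The entry above modifies the SVM hinge loss so that rejection can be expressed through the margin. A sketch of one common parameterization of such a generalized hinge loss (an assumption for illustration; check the paper for the exact form): with rejection cost d in (0, 1/2], the slope for negative margins steepens to (1 - d)/d, and the learned rule rejects when |f(x)| is small.

```python
def generalized_hinge(z, d=0.25):
    """Generalized hinge loss for classification with a reject option
    (one common parameterization): zero for margins z >= 1, the usual
    hinge 1 - z on [0, 1), and a steeper slope (1 - d)/d for z < 0."""
    a = (1.0 - d) / d          # slope for negative margins
    if z >= 1.0:
        return 0.0
    if z >= 0.0:
        return 1.0 - z
    return 1.0 - a * z

print(generalized_hinge(1.5))           # 0.0
print(generalized_hinge(0.5))           # 0.5
print(generalized_hinge(-1.0, d=0.25))  # 4.0
```

As d shrinks, misclassification (z < 0) is penalized ever more heavily relative to rejection, pushing the minimizer toward abstaining on borderline points.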
Growing a multi-class classifier with a reject option
Support Vector Machines with a Reject Option
TLDR
The problem of binary classification where the classifier may abstain instead of classifying each observation is considered, and the double hinge loss function that focuses on estimating conditional probabilities only in the vicinity of the threshold points of the optimal decision rule is derived.
The Error-Reject Tradeoff
TLDR
It is argued that universality in error-reject tradeoff curves for widely differing algorithms classifying handwritten characters is in fact to be expected for general classification problems and extended to classifiers working from finite samples on a broad, albeit limited, class of problems.
Classification Methods with Reject Option Based on Convex Risk Minimization
TLDR
This paper investigates the problem of binary classification with a reject option in which one can withhold the decision of classifying an observation at a cost lower than that of misclassification, and proposes minimizing convex risks based on surrogate convex loss functions.
On L1-Norm Multiclass Support Vector Machines
TLDR
A novel multiclass support vector machine is proposed that performs classification and variable selection simultaneously through an L1-norm penalized sparse representation, and it is compared against competitors in terms of prediction accuracy.
Multicategory ψ-Learning
TLDR
A novel multicategory generalization of ψ-learning that treats all classes simultaneously and can deliver accurate class prediction and is more robust against extreme observations than its SVM counterpart.
Reject option with multiple thresholds