# Exponential Savings in Agnostic Active Learning Through Abstention

```bibtex
@article{Puchkin2022ExponentialSI,
  title={Exponential Savings in Agnostic Active Learning Through Abstention},
  author={Nikita Puchkin and Nikita Zhivotovskiy},
  journal={IEEE Transactions on Information Theory},
  year={2022},
  volume={68},
  pages={4651-4665}
}
```
• Published 31 January 2021
• Computer Science, Mathematics
• IEEE Transactions on Information Theory
We show that in pool-based active classification without assumptions on the underlying distribution, if the learner is given the power to abstain from some predictions by paying a price marginally smaller than the average loss 1/2 of a random guess, exponential savings in the number of label requests are possible whenever they are possible in the corresponding realizable problem. We extend this result to provide a necessary and sufficient condition for exponential savings in pool-based active…
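The abstention mechanism in the abstract can be illustrated with a Chow-style rejection rule: predict the majority label unless the conditional probability is so close to 1/2 that abstaining (at cost marginally below 1/2) is cheaper. This is a minimal sketch of the general idea, not the paper's algorithm; the names `chow_rule` and `ABSTAIN_COST` are illustrative.

```python
ABSTAIN_COST = 0.49  # price marginally smaller than the 1/2 loss of a random guess


def chow_rule(eta, cost=ABSTAIN_COST):
    """Chow-style rule: predict the majority label, unless eta = P(Y=1|X)
    is so close to 1/2 that the expected error min(eta, 1-eta) of the
    majority vote exceeds the abstention cost."""
    if min(eta, 1 - eta) > cost:
        return None  # abstain on hard points near the decision boundary
    return 1 if eta >= 0.5 else 0


def expected_loss(eta, cost=ABSTAIN_COST):
    """Pointwise expected loss of the rule above (cost on abstention,
    misclassification probability otherwise)."""
    pred = chow_rule(eta, cost)
    if pred is None:
        return cost
    return eta if pred == 0 else 1 - eta
```

Note that the rule only abstains where `eta` lies in the narrow band `(cost, 1 - cost)`, so its expected loss never exceeds that of either random guessing or the plain Bayes-style majority vote.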

### Efficient Active Learning with Abstention

• Computer Science
ArXiv
• 2022
The first computationally efficient active learning algorithm with abstention is developed, guaranteed to only abstain on hard examples (where the true label distribution is close to a fair coin), a novel property the authors term “proper abstention” that also leads to a host of other desirable characteristics.

### Classification with abstention but without disparities

• Computer Science
UAI
• 2021
A general-purpose classification algorithm that is able to abstain from prediction while avoiding disparate impact is built, and it is shown that fairness and abstention constraints can be achieved independently of the initial classifier as long as sufficiently many unlabeled examples are available.

### Exponential Tail Local Rademacher Complexity Risk Bounds Without the Bernstein Condition

• Computer Science, Mathematics
ArXiv
• 2022
This work builds upon the recent approach to localization via offset Rademacher complexities, for which a general high-probability theory has yet to be established, and yields results at least as sharp as those obtainable via the classical theory.

### Active learning algorithm through the lens of rejection arguments

• Computer Science
• 2022
These experiments provide empirical evidence that the use of rejection arguments in the active learning algorithm is beneficial and allows good performance in various statistical situations.

### A Regret-Variance Trade-Off in Online Learning

• Computer Science
• 2022
We consider prediction with expert advice for strongly convex and bounded losses, and investigate trade-offs between regret and “variance” (i.e., squared difference of learner’s predictions and best…

## References

Showing 1–10 of 68 references

### Agnostic active learning

• Computer Science
J. Comput. Syst. Sci.
• 2009
The first active learning algorithm that works in the presence of arbitrary forms of noise is stated and analyzed, and it is shown that A2 achieves an exponential improvement over the usual sample complexity of supervised learning.

### Beyond Disagreement-Based Agnostic Active Learning

• Computer Science
NIPS
• 2014
The solution is based on two novel contributions: a reduction from consistent active learning to confidence-rated prediction with guaranteed error, and a novel confidence-rated predictor.

### Active Learning for Classification with Abstention

• Computer Science
2020 IEEE International Symposium on Information Theory (ISIT)
• 2020
An active learning strategy is proposed that constructs a non-uniform partition of the input space and focuses sampling in the regions near the decision boundaries and achieves minimax near-optimality by deriving a matching (modulo poly-logarithmic factors) lower bound.
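The partition-refinement idea described above can be sketched in one dimension: adaptively split cells of the input space and spend more label queries on cells whose empirical label distribution is close to a fair coin. This is an illustrative toy, not the ISIT 2020 algorithm; `oracle`, `budget`, `n_per_cell`, and `tau` are assumed names.

```python
import random


def active_partition_1d(oracle, budget, n_per_cell=10, tau=0.25):
    """Toy sketch: refine a partition of [0, 1], focusing queries on cells
    whose empirical positive rate is close to 1/2 (near a decision boundary).

    oracle(x) returns a 0/1 label; budget caps the total number of queries."""
    cells = [(0.0, 1.0)]
    refined = []
    while cells and budget >= n_per_cell:
        lo, hi = cells.pop()
        labels = [oracle(random.uniform(lo, hi)) for _ in range(n_per_cell)]
        budget -= n_per_cell
        p = sum(labels) / n_per_cell
        if abs(p - 0.5) < tau and hi - lo > 1e-3:
            mid = (lo + hi) / 2  # ambiguous cell: split and sample its halves
            cells += [(lo, mid), (mid, hi)]
        else:
            refined.append(((lo, hi), round(p)))  # confident cell: fix a label
    # cells left unprocessed when the budget runs out stay unlabeled
    return refined + [(cell, None) for cell in cells]
```

The effect is a non-uniform partition: cells far from the boundary are settled with a handful of queries, while ambiguous cells are recursively halved, concentrating the label budget near the decision boundary.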

### A bound on the label complexity of agnostic active learning

General bounds on the number of label requests made by the A2 algorithm proposed by Balcan, Beygelzimer & Langford are derived; these represent the first nontrivial general-purpose upper bounds on label complexity in the agnostic PAC model.

### A General Agnostic Active Learning Algorithm

• Computer Science
ISAIM
• 2007
This work presents an agnostic active learning algorithm for any hypothesis class of bounded VC dimension under arbitrary data distributions. It uses reductions to supervised learning that harness generalization bounds in a simple but subtle manner, and it provides a fall-back guarantee that bounds the algorithm's label complexity by the agnostic PAC sample complexity.

### Active Learning from Imperfect Labelers

• Computer Science
NIPS
• 2016
This work proposes an algorithm which utilizes abstention responses, and analyzes its statistical consistency and query complexity under fairly natural assumptions on the noise and abstention rate of the labeler.

### Adaptivity to Noise Parameters in Nonparametric Active Learning

• Computer Science
COLT
• 2017
A generic algorithmic strategy for adaptivity to unknown noise smoothness and margin is presented, which achieves optimal rates in many general situations and avoids the need for adaptive confidence sets, resulting in strictly milder distributional requirements.

### Active Learning via Perfect Selective Classification

• Computer Science
J. Mach. Learn. Res.
• 2012
A reduction of active learning to selective classification that preserves fast rates is shown and exponential target-independent label complexity speedup is derived for actively learning general (non-homogeneous) linear classifiers when the data distribution is an arbitrary high dimensional mixture of Gaussians.

### Fast Rates for Online Prediction with Abstention

• Computer Science
COLT
• 2020
It is shown that by allowing the learner to abstain from the prediction by paying a cost marginally smaller than $\frac 12$ (say, $0.49$), it is possible to achieve expected regret bounds that are independent of the time horizon.