An Equivalence Between Private Classification and Online Prediction

@article{Bun2020AnEB,
  title={An Equivalence Between Private Classification and Online Prediction},
  author={Mark Bun and Roi Livni and S. Moran},
  journal={2020 IEEE 61st Annual Symposium on Foundations of Computer Science (FOCS)},
  year={2020},
  pages={389-402}
}
  • Mark Bun, Roi Livni, S. Moran
  • Published 2020
  • Computer Science, Mathematics
  • 2020 IEEE 61st Annual Symposium on Foundations of Computer Science (FOCS)
We prove that every concept class with finite Littlestone dimension can be learned by an (approximate) differentially-private algorithm. This answers an open question of Alon et al. (STOC 2019), who proved the converse statement (this question was also asked by Neel et al. (FOCS 2019)). Together these two results yield an equivalence between online learnability and private PAC learnability. We introduce a new notion of algorithmic stability called “global stability” which is essential to our…
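
For quick reference, the equivalence described in the abstract can be summarized informally in LaTeX as follows; this is a paraphrase of the abstract rather than the paper's exact theorem statement, and the quantitative dependence on the Littlestone dimension is omitted.

% Informal paraphrase of the equivalence (not the paper's exact theorem statement).
% H is a concept class; Ldim(H) denotes its Littlestone dimension.
\[
  \mathrm{Ldim}(\mathcal{H}) < \infty
  \quad\Longleftrightarrow\quad
  \mathcal{H}\ \text{is PAC learnable by an } (\varepsilon,\delta)\text{-differentially private algorithm}
\]
% Forward direction: this paper (finite Littlestone dimension implies private learnability).
% Converse direction: Alon et al. (STOC 2019).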

Citations

A Computational Separation between Private Learning and Online Learning
  • Mark Bun
  • Computer Science, Mathematics
  • NeurIPS
  • 2020
TLDR
It is shown that, assuming the existence of one-way functions, an efficient conversion of private learners into online learners is impossible even for general pure-private learners with polynomial sample complexity, which resolves a question of Neel, Roth, and Wu (FOCS 2019).
Littlestone Classes are Privately Online Learnable
TLDR
The results strengthen this connection and show that an online learning algorithm can in fact be directly privatized (in the realizable setting), and provide the first non-trivial regret bound for the realizable setting.
Smoothed Analysis of Online and Differentially Private Learning
TLDR
This paper shows that fundamentally stronger regret and error guarantees are possible with smoothed adversaries than with worst-case adversaries, and obtains regret and privacy error bounds that depend only on the VC dimension and the bracketing number of a hypothesis class, and on the magnitudes of the perturbations.
Learning Privately with Labeled and Unlabeled Examples
TLDR
An alternative approach is suggested, inspired by the (non-private) models of semi-supervised learning and active learning, where the focus is on the sample complexity of labeled examples, whereas unlabeled examples come at a significantly lower cost.
Closure Properties for Private Classification and Online Prediction
TLDR
Close-to-optimal bounds are proved that circumvent this suboptimal dependency on the Littlestone dimension, and improved bounds on the sample complexity of private learning are derived algorithmically by transforming a private learner for the original class $\mathcal{H}$ into a private learner for the composed class $\mathcal{H}'$.
Online Agnostic Boosting via Regret Minimization
TLDR
This work provides the first agnostic online boosting algorithm, which efficiently converts an arbitrary online convex optimizer to an online booster, thus unifying the 4 cases of statistical/online and agnostic/realizable boosting.
Majorizing Measures, Sequential Complexities, and Online Learning
TLDR
This work relates majorizing measures to the notion of fractional covering numbers, which is shown to be dominated in terms of sequential scale-sensitive dimensions in a horizon-independent way, and establishes a tight control on worst-case sequential Rademacher complexity in terms of the integral of the sequential scale-sensitive dimension.
A Limitation of the PAC-Bayes Framework
TLDR
An easy learning task that is not amenable to a PAC-Bayes analysis is demonstrated, and it is shown that for any algorithm that learns 1-dimensional linear classifiers there exists a (realizable) distribution for which the PAC-Bayes bound is arbitrarily large.
Private learning implies quantum stability
TLDR
Key to many of the results is the construction of a generic quantum online learner, the Robust Standard Optimal Algorithm (RSOA), which is robust to adversarial imprecision, together with connections to various combinatorial parameters.
TPDP'20: 6th Workshop on Theory and Practice of Differential Privacy
TLDR
This workshop aims to bring together a diverse array of researchers and practitioners to provoke stimulating discussion about the current state of differential privacy, in theory and practice.

References

Showing 1-10 of 64 references
Private PAC learning implies finite Littlestone dimension
We show that every approximately differentially private learning algorithm (possibly improper) for a class H with Littlestone dimension d requires Ω(log*(d)) examples. As a corollary it follows that…
What Can We Learn Privately?
TLDR
This work investigates learning algorithms that satisfy differential privacy, a notion that provides strong confidentiality guarantees in contexts where aggregate information is released about a database containing sensitive information about individuals.
Bounds on the sample complexity for private learning and private data release
TLDR
This work examines several private learning tasks and gives tight bounds on their sample complexity, showing strong separations between the sample complexities of proper and improper private learners (such a separation does not exist for non-private learners), and between the sample complexities of efficient and inefficient proper private learners.
Characterizing the Sample Complexity of Pure Private Learners
TLDR
A combinatorial characterization of the sample size sufficient and necessary to learn a class of concepts under pure differential privacy is given, and a similar characterization holds for the database size needed for computing a large class of optimization problems under pure differential privacy, and also for the well-studied problem of private data release.
Privately Answering Classification Queries in the Agnostic PAC Model
TLDR
This work revisits the problem of differentially private release of classification queries in the agnostic PAC model and derives a new upper bound on the private sample complexity.
Privately Learning Thresholds: Closing the Exponential Gap
TLDR
An improved algorithm is constructed for the related interior point problem, based on selecting an input-dependent hash function and using it to embed the database into a domain whose size is reduced logarithmically; this results in a new database which can be used to generate an interior point of the original database in a differentially private manner.
Simultaneous Private Learning of Multiple Concepts
TLDR
Lower bounds are given showing that even for very simple concept classes, the sample cost of private multi-learning must grow polynomially in k, and multi-learners are presented that require fewer samples than the basic strategy.
Efficient, Noise-Tolerant, and Private Learning via Boosting
TLDR
A simple framework for designing private boosting algorithms is introduced and is used to construct noise-tolerant and private PAC learners for large-margin halfspaces whose sample complexity does not depend on the dimension.
Sample Complexity Bounds on Differentially Private Learning via Communication Complexity
TLDR
It is shown that the sample complexity of learning with (pure) differential privacy can be arbitrarily higher than the sample complexity of learning without the privacy constraint, or than the sample complexity of learning with approximate differential privacy.
Private Learning Implies Online Learning: An Efficient Reduction
TLDR
An efficient black-box reduction from differentially private learning to online learning from expert advice is derived.