Corpus ID: 208248319

Privately Learning Thresholds: Closing the Exponential Gap

@article{Kaplan2020PrivatelyLT,
  title={Privately Learning Thresholds: Closing the Exponential Gap},
  author={Haim Kaplan and Katrina Ligett and Y. Mansour and M. Naor and Uri Stemmer},
  journal={ArXiv},
  year={2020},
  volume={abs/1911.10137}
}
We study the sample complexity of learning threshold functions under the constraint of differential privacy. It is assumed that each labeled example in the training data is the information of one individual, and we would like to come up with a generalizing hypothesis $h$ while guaranteeing differential privacy for the individuals. Intuitively, this means that no single labeled example in the training data should have a significant effect on the choice of the hypothesis. This problem has…
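As an illustrative aside (not the algorithm from this paper), a standard way to privately select a threshold over a finite candidate grid is the exponential mechanism: sample a candidate with probability decaying exponentially in its empirical error. The function and parameter names below are hypothetical; this is a minimal sketch, assuming labels in {0, 1} and a finite grid of candidates:

```python
import math
import random

def private_threshold(data, candidates, eps):
    """Pick a threshold via the exponential mechanism (illustrative sketch).

    data: list of (x, label) pairs, label in {0, 1} (1 means x is above
    the true threshold). candidates: finite grid of candidate thresholds.
    The score (negative error count) has sensitivity 1: changing one
    example changes any candidate's error by at most 1.
    """
    def error(t):
        return sum(1 for x, y in data if (1 if x >= t else 0) != y)

    # Exponential mechanism: Pr[t] proportional to exp(-eps * error(t) / 2).
    scores = [-error(t) for t in candidates]
    m = max(scores)  # subtract the max score for numerical stability
    weights = [math.exp(eps * (s - m) / 2) for s in scores]
    r = random.random() * sum(weights)
    acc = 0.0
    for t, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return t
    return candidates[-1]
```

With a large `eps` the mechanism concentrates on a minimum-error threshold; the sample-complexity question studied in the paper is how small the data set can be while a private mechanism still generalizes.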
Learning Privately with Labeled and Unlabeled Examples
An alternative approach is suggested, inspired by the (non-private) models of semi-supervised learning and active learning, where the focus is on the sample complexity of labeled examples, whereas unlabeled examples come at a significantly lower cost.
Littlestone Classes are Privately Online Learnable
The results strengthen this connection and show that an online learning algorithm can in fact be directly privatized (in the realizable setting), providing the first non-trivial regret bound for the realizable setting.
On the Sample Complexity of Privately Learning Axis-Aligned Rectangles
A novel algorithm is presented that reduces the sample complexity of axis-aligned rectangles over a finite grid X ⊆ R to only Õ(d · (log* |X|)), attaining a dimensionality-optimal dependency without requiring the sample complexity to grow with log |X|.
Closure Properties for Private Classification and Online Prediction
Close-to-optimal bounds are proved that circumvent a suboptimal dependency on the Littlestone dimension, and improved bounds on the sample complexity of private learning are derived algorithmically by transforming a private learner for the original class $\cH$ into a private learner for the composed class $\cH'$.
Best-Arm Identification for Quantile Bandits with Privacy
A (non-private) successive elimination algorithm for strictly optimal best-arm identification is proposed, motivated by applications where the rewards are private; its sample complexity is characterized and is finite even for distributions with infinite support size.
Multiclass versus Binary Differentially Private PAC Learning
A generic reduction from multiclass differentially private PAC learning to binary private PAC learning is shown, using the notion of Ψ-dimension defined in work of Ben-David et al.
A Computational Separation between Private Learning and Online Learning
  • Mark Bun
  • Computer Science, Mathematics
  • NeurIPS
  • 2020
It is shown that, assuming the existence of one-way functions, such an efficient conversion is impossible even for general pure-private learners with polynomial sample complexity, which resolves a question of Neel, Roth, and Wu (FOCS 2019).
How to Find a Point in the Convex Hull Privately
This paper gives a differentially private algorithm that runs in $O(n^d)$ time, assuming that $n=\Omega(d^4\log X)$.
Quantile Multi-Armed Bandits: Optimal Best-Arm Identification and a Differentially Private Scheme
This work proposes a successive elimination algorithm for strictly optimal best-arm identification, shows that it is $\delta$-PAC, characterizes its sample complexity, and provides a lower bound on the expected number of pulls.
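For intuition, the successive elimination template these bandit papers build on can be sketched as follows. This is a plain mean-based, non-private, non-quantile version with hypothetical names, not the algorithm from either cited paper:

```python
import math
import random

def successive_elimination(pull, n_arms, delta, rounds=2000):
    """Plain successive elimination for best-arm identification.

    pull(i) returns a reward in [0, 1] for arm i. Arms whose upper
    confidence bound falls below the best arm's lower confidence bound
    are eliminated; the cited works adapt this template to quantile
    objectives and to differential privacy.
    """
    active = set(range(n_arms))
    means = [0.0] * n_arms
    counts = [0] * n_arms
    for t in range(1, rounds + 1):
        for i in list(active):
            r = pull(i)
            counts[i] += 1
            means[i] += (r - means[i]) / counts[i]  # running mean
        # Hoeffding-style confidence radius after t pulls per active arm.
        rad = math.sqrt(math.log(4 * n_arms * t * t / delta) / (2 * t))
        best = max(means[i] for i in active)
        active = {i for i in active if means[i] + rad >= best - rad}
        if len(active) == 1:
            break
    return max(active, key=lambda i: means[i])
```

The union bound inside the radius is what makes the procedure δ-PAC in the mean-based setting; the quantile variants replace the empirical mean with an empirical quantile and adjust the concentration argument.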
A Limitation of the PAC-Bayes Framework
An easy learning task that is not amenable to a PAC-Bayes analysis is demonstrated; it is shown that for any algorithm that learns 1-dimensional linear classifiers there exists a (realizable) distribution for which the PAC-Bayes bound is arbitrarily large.

References

SHOWING 1-10 OF 21 REFERENCES
Sample Complexity Bounds on Differentially Private Learning via Communication Complexity
It is shown that the sample complexity of learning with (pure) differential privacy can be arbitrarily higher than the sample complexity of learning without the privacy constraint, or of learning with approximate differential privacy.
Differentially Private Release and Learning of Threshold Functions
The first nontrivial lower bound for releasing thresholds with (ε, δ)-differential privacy is given, showing that the task is impossible over an infinite domain X and, moreover, requires sample complexity n ≥ Ω(log* |X|), which grows with the size of the domain.
Calibrating Noise to Sensitivity in Private Data Analysis
The study is extended to general functions f, proving that privacy can be preserved by calibrating the standard deviation of the noise according to the sensitivity of the function f, that is, the amount by which any single argument to f can change its output.
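A minimal sketch of this idea, the Laplace mechanism, is below; `private_count` is a hypothetical helper illustrating a sensitivity-1 counting query:

```python
import random

def laplace_mechanism(f_value, sensitivity, eps):
    """Release f(D) plus Laplace noise of scale sensitivity/eps.

    This is eps-differentially private for any function f whose output
    changes by at most `sensitivity` when one individual's record changes.
    """
    scale = sensitivity / eps
    # Laplace(0, b) is the difference of two iid Exponential(1/b) draws.
    noise = scale * (random.expovariate(1.0) - random.expovariate(1.0))
    return f_value + noise

def private_count(data, predicate, eps):
    """A counting query has sensitivity 1: one record changes it by at most 1."""
    return laplace_mechanism(sum(1 for x in data if predicate(x)), 1.0, eps)
```

Smaller eps means stronger privacy and larger noise; the scale, not just the standard deviation, is what the guarantee calibrates.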
What Can We Learn Privately?
This work investigates learning algorithms that satisfy differential privacy, a notion that provides strong confidentiality guarantees in contexts where aggregate information is released about a database containing sensitive information about individuals.
The Algorithmic Foundations of Differential Privacy
The preponderance of this monograph is devoted to fundamental techniques for achieving differential privacy and the application of these techniques in creative combinations, using the query-release problem as a running example.
Private PAC learning implies finite Littlestone dimension
We show that every approximately differentially private learning algorithm (possibly improper) for a class H with Littlestone dimension d requires Ω(log*(d)) examples. As a corollary it follows that…
Boosting and Differential Privacy
This work obtains an O(ε²) bound on the expected privacy loss from a single ε-differentially private mechanism, and stronger bounds on the expected cumulative privacy loss due to multiple mechanisms, each of which provides ε-differential privacy or one of its relaxations and each of which operates on (potentially) different, adaptively chosen databases.
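To illustrate the gain from such composition theorems, the following sketch compares naive (basic) composition with the advanced composition bound of Dwork, Rothblum, and Vadhan; it is a generic illustration, not this paper's exact statement:

```python
import math

def basic_composition(eps, k):
    """k-fold basic composition of eps-DP mechanisms: losses add up to k*eps."""
    return k * eps

def advanced_composition(eps, k, delta_prime):
    """Advanced composition: k adaptively chosen eps-DP mechanisms are
    (eps', k*delta + delta_prime)-DP for the eps' below, which grows
    roughly like sqrt(k)*eps instead of k*eps when eps is small."""
    return eps * math.sqrt(2 * k * math.log(1 / delta_prime)) + k * eps * math.expm1(eps)
```

For example, composing 100 mechanisms at ε = 0.1 gives ε' = 10 under basic composition but roughly 6.3 (with δ' = 10⁻⁶) under advanced composition, and the gap widens as k grows.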
On the complexity of differentially private data release: efficient algorithms and hardness results
Private data analysis is considered in the setting in which a trusted and trustworthy curator releases to the public a "sanitization" of the data set that simultaneously protects the privacy of the individual contributors of data and offers utility to the data analyst.
The Complexity of Differential Privacy
  • S. Vadhan
  • Computer Science
  • Tutorials on the Foundations of Cryptography
  • 2017
This tutorial provides an introduction to and overview of differential privacy, with the goal of conveying its deep connections to a variety of other topics in computational complexity, cryptography, and theoretical computer science at large.
Mechanism Design via Differential Privacy
  • F. McSherry, Kunal Talwar
  • Computer Science
  • 48th Annual IEEE Symposium on Foundations of Computer Science (FOCS'07)
  • 2007
It is shown that the recent notion of differential privacy, in addition to its own intrinsic virtue, can ensure that participants have limited effect on the outcome of the mechanism, and as a consequence have limited incentive to lie.