Hiding Among the Clones: A Simple and Nearly Optimal Analysis of Privacy Amplification by Shuffling

@article{Feldman2022HidingAT,
  title={Hiding Among the Clones: A Simple and Nearly Optimal Analysis of Privacy Amplification by Shuffling},
  author={Vitaly Feldman and Audra McMillan and Kunal Talwar},
  journal={2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS)},
  year={2022},
  pages={954-964}
}
Recent work of Erlingsson, Feldman, Mironov, Raghunathan, Talwar, and Thakurta [1] demonstrates that random shuffling amplifies the differential privacy guarantees of locally randomized data. Such amplification implies substantially stronger privacy guarantees for systems in which data is contributed anonymously [2] and has led to significant interest in the shuffle model of privacy [3], [1]. We give a characterization of the privacy guarantee of the random shuffling of $n$ data records…
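As context for this characterization, the paper's headline amplification bound is commonly stated in the asymptotic form sketched below; the display omits constants and the precise admissible range of $\varepsilon_0$, so it is an informal summary rather than the exact theorem statement. Shuffling the reports of $n$ users, each produced by an $\varepsilon_0$-DP local randomizer, yields an $(\varepsilon, \delta)$-DP mechanism with
\[
  \varepsilon \;=\; O\!\left( \bigl(1 - e^{-\varepsilon_0}\bigr)
    \sqrt{\frac{e^{\varepsilon_0}\,\log(1/\delta)}{n}} \right).
\]
For $\varepsilon_0 \le 1$ this recovers the familiar $\varepsilon = O\!\bigl(\varepsilon_0 \sqrt{\log(1/\delta)/n}\bigr)$ rate of [1].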

Citations

The Power of the Differentially Oblivious Shuffle in Distributed Privacy Mechanisms
TLDR
This paper proves an optimal privacy amplification theorem by composing any locally differentially private (LDP) mechanism with a DO-shuffler, achieving parameters that tightly match the shuffle model.
Tight Accounting in the Shuffle Model of Differential Privacy
TLDR
This paper shows how to obtain accurate bounds for adaptive compositions of general ε-LDP shufflers using the analysis of Feldman et al. (2021), demonstrates the looseness of existing bounds and methods in the literature, and significantly improves on previous composition results.
On the Rényi Differential Privacy of the Shuffle Model
TLDR
The principal result in this paper is the first direct RDP bounds for general discrete local randomization in the shuffle privacy model, and new analysis techniques for deriving the results which could be of independent interest.
Tight Differential Privacy Guarantees for the Shuffle Model with k-Randomized Response
TLDR
This paper theoretically derives the tightest known bound on the differential privacy guarantee of the shuffle model with k-Randomized Response (k-RR) local randomizers under histogram queries, which, to the best of the authors' knowledge, had not been proven before in the existing literature.
A Shuffling Framework for Local Differential Privacy
TLDR
A novel privacy guarantee, dσ-privacy, is proposed; it captures the privacy of the order of a data sequence and formalizes the degree of resistance to inference attacks while trading it off against data learnability.
Optimal Algorithms for Mean Estimation under Local Differential Privacy
TLDR
This work shows that PrivUnit [BDFKR18] with optimized parameters achieves the optimal variance among a large family of locally private randomizers, and develops a new variant of PrivUnit based on the Gaussian distribution which is more amenable to mathematical analysis and enjoys the same optimality guarantees.
Renyi Differential Privacy of the Subsampled Shuffle Model in Distributed Learning
TLDR
This paper numerically demonstrates that, in important regimes, the authors' bound under composition yields a significant improvement in the privacy guarantee over the state-of-the-art approximate differential privacy (DP) guarantee (with strong composition) for subsampled shuffle models.
Aggregation and Transformation of Vector-Valued Messages in the Shuffle Model of Differential Privacy
TLDR
A single-message protocol for the summation of real vectors in the shuffle model is provided using advanced composition results, and the error bound achieved by this protocol is further improved by applying a Discrete Fourier Transform.
Uniformity Testing in the Shuffle Model: Simpler, Better, Faster
TLDR
This work considerably simplifies the analysis of the known uniformity testing algorithm in the shuffle model and provides an alternative algorithm that attains the same guarantees via an elementary and streamlined argument.
DUMP: A Dummy-point-based Local Differential Privacy Enhancement Approach under the Shuffle Model
TLDR
In DUMP, dummy messages are introduced on the user side, creating an additional dummy blanket that further improves the utility of the shuffle model; this new privacy-preserving mechanism also provides an LDP privacy-amplification effect for user-uploaded data against a curious shuffler.

References

SHOWING 1-10 OF 42 REFERENCES
Amplification by Shuffling: From Local to Central Differential Privacy via Anonymity
TLDR
It is shown, via a new and general privacy amplification technique, that any permutation-invariant algorithm satisfying ε-local differential privacy will satisfy $(O(\varepsilon\sqrt{\log(1/\delta)/n}),\,\delta)$-central differential privacy.
Distributed Differential Privacy via Shuffling
TLDR
Evidence that the power of the shuffled model lies strictly between those of the central and local models is given: for a natural restriction of the model, it is shown that shuffled protocols for a widely studied selection problem require exponentially higher sample complexity than do central-model protocols.
The Complexity of Computing the Optimal Composition of Differential Privacy
TLDR
It is shown that computing the optimal composition in general is $\#$P-complete, and an approximation algorithm is given that computes the composition to arbitrary accuracy in polynomial time.
The Privacy Blanket of the Shuffle Model
TLDR
An optimal single-message protocol for summation of real numbers in the shuffle model is provided; it has better accuracy and communication than the protocols for the same problem proposed by Cheu et al. (EUROCRYPT 2019).
The Algorithmic Foundations of Differential Privacy
TLDR
The preponderance of this monograph is devoted to fundamental techniques for achieving differential privacy, and application of these techniques in creative combinations, using the query-release problem as an ongoing example.
Heavy Hitters and the Structure of Local Privacy
We present a new locally differentially private algorithm for the heavy hitters problem which achieves optimal worst-case error as a function of all standardly considered parameters. Prior work…
Encode, Shuffle, Analyze Privacy Revisited: Formalizations and Empirical Evaluation
TLDR
This work revisits the ESA framework with a simple, abstract model of attackers as well as assumptions covering it and other proposed systems of anonymity, and examines the limitations of sketch-based encodings and ESA mechanisms such as data-dependent crowds.
Local, Private, Efficient Protocols for Succinct Histograms
TLDR
Efficient protocols and matching accuracy lower bounds for frequency estimation in the local model of differential privacy are given, and it is shown that each user need only send one bit to the server in a model with public coins.
Privacy Profiles and Amplification by Subsampling
TLDR
The privacy profiles machinery is applied to study the so-called ``privacy amplification by subsampling'' principle, which ensures that a differentially private mechanism run on a random subsample of a population provides higher privacy guarantees than when run on the entire population.
Our Data, Ourselves: Privacy Via Distributed Noise Generation
TLDR
This work provides efficient distributed protocols for generating shares of random noise, secure against malicious participants, and introduces a technique for distributing shares of many unbiased coins with fewer executions of verifiable secret sharing than would be needed using previous approaches.