On the Rényi Differential Privacy of the Shuffle Model

@inproceedings{Girgis2021OnTR,
  title={On the R{\'e}nyi Differential Privacy of the Shuffle Model},
  author={Antonious M. Girgis and Deepesh Data and Suhas N. Diggavi and Ananda Theertha Suresh and Peter Kairouz},
  booktitle={Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security},
  year={2021}
}
The central question studied in this paper is the Rényi Differential Privacy (RDP) guarantee of general discrete local randomizers in the shuffle privacy model. In the shuffle model, each of the n clients randomizes its response using a locally differentially private (LDP) mechanism, and the untrusted server receives only a random permutation (shuffle) of the client responses, with no association between a response and the client that sent it. The principal result in this paper is the first direct RDP bounds for general discrete…
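The protocol structure described above (local randomization followed by a trusted shuffle) can be sketched in a few lines. This is a minimal illustration, not the paper's construction: the function names `k_rr` and `shuffle_model` are hypothetical, and k-ary randomized response is used only as one example of a discrete local randomizer.

```python
import math
import random

def k_rr(x, k, eps, rng):
    """k-ary randomized response: report the true value x with
    probability e^eps / (e^eps + k - 1), otherwise a uniformly
    random *other* value in {0, ..., k-1}."""
    p_keep = math.exp(eps) / (math.exp(eps) + k - 1)
    if rng.random() < p_keep:
        return x
    return rng.choice([v for v in range(k) if v != x])

def shuffle_model(data, k, eps, rng=None):
    """Each client randomizes its own value locally, then a shuffler
    permutes the reports; the server observes the reports only as an
    unordered collection, with no link back to individual clients."""
    rng = rng or random.Random()
    reports = [k_rr(x, k, eps, rng) for x in data]
    rng.shuffle(reports)  # server sees a random permutation
    return reports
```

The privacy amplification studied in this line of work comes from the server seeing only the shuffled multiset: each client's report hides among the randomized reports of the other n-1 clients.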

Citations

Renyi Differential Privacy of the Subsampled Shuffle Model in Distributed Learning

TLDR
This paper numerically demonstrates that, in important regimes, under composition the authors' bound yields a significant improvement in the privacy guarantee over the state-of-the-art approximate differential privacy (DP) guarantee (with strong composition) for sub-sampled shuffle models.

Shuffle Gaussian Mechanism for Differential Privacy

TLDR
The shuffled Gaussian mechanism's Rényi differential privacy (RDP) is characterized in closed form, and the RDP is proved to be strictly upper-bounded by the Gaussian RDP without shuffling.

Shuffled Check-in: Privacy Amplification towards Practical Distributed Learning

TLDR
The protocol relies on each client making an independent, random decision to participate in the computation, removing the requirement of server-initiated subsampling and enabling robust modeling of client dropouts; a weaker trust model, the shuffle model, is employed instead of a trusted aggregator.

Differentially Private Subgraph Counting in the Shuffle Model

TLDR
This paper proposes accurate subgraph counting algorithms by introducing a recently studied shuffle model and shows that they significantly outperform the one-round local algorithms in terms of accuracy and achieve small estimation errors with a reasonable privacy budget, e.g., smaller than 1 in edge DP.

Differentially Private Stochastic Linear Bandits: (Almost) for Free

TLDR
This paper proposes differentially private algorithms for the problem of stochastic linear bandits in the central, local and shuffled models and achieves almost the same regret as the optimal non-private algorithms.

Differentially Private Triangle and 4-Cycle Counting in the Shuffle Model

TLDR
This paper proposes accurate subgraph counting algorithms by introducing a recently studied shuffle model and shows that these algorithms significantly outperform the one-round local algorithms in terms of accuracy and achieve small estimation errors with a reasonable privacy budget, e.g., smaller than 1 in edge DP.

A Generative Framework for Personalized Learning and Estimation: Theory, Algorithms, and Privacy

TLDR
This work begins with a generative framework that could potentially unify several existing algorithms as well as suggest new ones, applies it to personalized estimation, and connects it to the classical empirical Bayes methodology.

References

SHOWING 1-10 OF 42 REFERENCES

The Privacy Blanket of the Shuffle Model

TLDR
An optimal single-message protocol for summation of real numbers in the shuffle model is provided, with better accuracy and communication than the protocols for the same problem proposed by Cheu et al. (EUROCRYPT 2019).

Hiding Among the Clones: A Simple and Nearly Optimal Analysis of Privacy Amplification by Shuffling

TLDR
This work gives a characterization of the privacy guarantee of randomly shuffling $n$ data records, each the output of an $\epsilon$-differentially private local randomizer, that significantly improves over previous work and achieves the asymptotically optimal dependence on $\epsilon$.

Private Summation in the Multi-Message Shuffle Model

TLDR
Two new protocols for summation in the shuffle model with improved accuracy and communication trade-offs are introduced, including a recursive construction based on the protocol from Balle et al. mentioned above and a novel analysis of the reduction from secure summation to shuffling introduced by Ishai et al.

Distributed Differential Privacy via Shuffling

TLDR
Evidence that the power of the shuffled model lies strictly between those of the central and local models is given: for a natural restriction of the model, it is shown that shuffled protocols for a widely studied selection problem require exponentially higher sample complexity than do central-model protocols.

Amplification by Shuffling: From Local to Central Differential Privacy via Anonymity

TLDR
It is shown, via a new and general privacy amplification technique, that any permutation-invariant algorithm satisfying $\epsilon$-local differential privacy will satisfy [MATH HERE]-central differential privacy.

On the Power of Multiple Anonymous Messages

TLDR
A nearly tight lower bound on the error of locally-private frequency estimation in the low-privacy (aka high $\epsilon$) regime is obtained and implies that the protocols obtained from the amplification via shuffling work of Erlingsson et al. are essentially optimal for single-message protocols.

Shuffled Model of Differential Privacy in Federated Learning

TLDR
For convex loss functions, it is proved that the proposed CLDP-SGD algorithm matches the known lower bounds on the centralized private ERM while using a finite number of bits per iteration for each client, i.e., effectively getting communication efficiency for “free”.

Scalable and Differentially Private Distributed Aggregation in the Shuffled Model

TLDR
A simple and more efficient protocol for aggregation in the shuffled model, where communication as well as error increases only polylogarithmically in the number of users, is proposed.

The Composition Theorem for Differential Privacy

TLDR
This paper proves an upper bound on the overall privacy level and constructs a sequence of privatization mechanisms that achieves this bound, by introducing an operational interpretation of differential privacy and the use of a data processing inequality.

RAPPOR: Randomized Aggregatable Privacy-Preserving Ordinal Response

TLDR
This paper describes and motivates RAPPOR, details its differential-privacy and utility guarantees, discusses its practical deployment and properties in the face of different attack models, and gives results of its application to both synthetic and real-world data.