RAPPOR: Randomized Aggregatable Privacy-Preserving Ordinal Response

@inproceedings{Erlingsson2014RAPPORRA,
  title={{RAPPOR}: Randomized Aggregatable Privacy-Preserving Ordinal Response},
  author={{\'U}lfar Erlingsson and Aleksandra Korolova and Vasyl Pihur},
  booktitle={Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security},
  year={2014}
}
Randomized Aggregatable Privacy-Preserving Ordinal Response, or RAPPOR, is a technology for crowdsourcing statistics from end-user client software, anonymously, with strong privacy guarantees. [...]

Key Result: This paper describes and motivates RAPPOR, details its differential-privacy and utility guarantees, discusses its practical deployment and properties in the face of different attack models, and, finally, gives results of its application to both synthetic and real-world data.
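The reporting primitive RAPPOR builds on is classical randomized response applied to individual bits. As a hedged illustration (a minimal sketch of the single-bit building block only, not the paper's full two-level Bloom-filter scheme; function names are illustrative), a one-bit ε-LDP reporter and its unbiased frequency estimator might look like:

```python
import math
import random

def randomized_response(true_bit: int, epsilon: float) -> int:
    """Report the true bit with probability e^eps / (e^eps + 1); otherwise flip it.
    A single such report satisfies eps-local differential privacy."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return true_bit if random.random() < p_truth else 1 - true_bit

def estimate_frequency(reports, epsilon: float) -> float:
    """Unbiased estimate of the true fraction of 1-bits from noisy reports.
    Inverts E[observed] = p*f + (1 - p)*(1 - f), where p is the truth probability."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed + p - 1.0) / (2.0 * p - 1.0)
```

RAPPOR itself layers a "permanent" and an "instantaneous" randomized response over a Bloom-filter encoding of the reported value; the sketch above shows only the core bit-level mechanism.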
Privacy-Preserving Bandits
TLDR
Comparisons of the proposed Privacy-Preserving Bandits system with a non-private, as well as a fully-private (local) system, show competitive performance on both synthetic benchmarks and real-world data, suggesting P2B is an effective approach to challenges arising in on-device privacy-preserving personalization.
PPDCA: Privacy-Preserving Crowdsourcing Data Collection and Analysis With Randomized Response
TLDR
A complementary randomized response (C-RR) method to guarantee individuals' data privacy and to preserve features from the original data for analysis is designed, which uses randomized data in the form of binary vectors to generate a learning network.
Practical and Robust Privacy Amplification with Multi-Party Differential Privacy
TLDR
This paper investigates the multiple-party setting of LDP, analyzes the threat model and identifies potential adversaries, and proposes new techniques that achieve a better privacy-utility tradeoff than existing ones.
Locally Differentially Private Heavy Hitter Identification
TLDR
This paper proposes an LDP protocol, which the authors call the Prefix Extending Method (PEM), in which users are divided into groups, with each group reporting a prefix of its value; experiments show that under the same privacy guarantee and computational cost, PEM has better utility on both synthetic and real-world datasets than existing solutions.
A Utility-Optimized Mechanism for Private Data Aggregation
  • Hang Fu, Zhengwei Lei, Minli Zhang
  • 2021 6th International Conference on Big Data and Computing
  • 2021
With the era of big data, it becomes ubiquitous to aggregate and analyze data from millions of users. However, privacy is a critical issue in data aggregation, since data contributed by users may [...]
BiSample: Bidirectional Sampling for Handling Missing Data with Local Differential Privacy
TLDR
This paper proposes BiSample, a bidirectional sampling technique for value perturbation in the framework of LDP, and combines the BiSample mechanism with users' privacy preferences for missing-data perturbation.
Renyi Differential Privacy of the Subsampled Shuffle Model in Distributed Learning
TLDR
A privacy-optimization performance trade-off for discrete randomization mechanisms in this sub-sampled shuffle privacy model enables significant improvement in the privacy guarantee over the state-of-the-art approximate differential privacy (DP) guarantee (with strong composition) for sub-sampled shuffle models.
A Shuffling Framework for Local Differential Privacy
TLDR
A novel privacy guarantee, dσ-privacy, is proposed that captures the privacy of the order of a data sequence and formalizes the degree of resistance to inference attacks, trading it off with data learnability.
Successive Refinement of Privacy
TLDR
This work provides (order-wise) tight characterizations of privacy-utility-randomness trade-offs in several cases for distribution estimation, including the standard LDP setting under a randomness constraint, and provides a non-trivial privacy mechanism for multi-level privacy.
On the Rényi Differential Privacy of the Shuffle Model
TLDR
The principal result in this paper is the first direct RDP bounds for general discrete local randomization in the shuffle privacy model, and new analysis techniques for deriving the results which could be of independent interest.

References

Showing 1-10 of 33 references
Mechanism Design via Differential Privacy
TLDR
It is shown that the recent notion of differential privacy, in addition to its own intrinsic virtue, can ensure that participants have limited effect on the outcome of the mechanism, and as a consequence have limited incentive to lie.
Differential privacy under continual observation
TLDR
This work identifies the problem of maintaining a counter in a privacy-preserving manner and shows its wide applicability to many different problems.
Differential Privacy: An Economic Method for Choosing Epsilon
TLDR
A simple model is proposed that expresses the role of differential privacy parameters in concrete applications as formulas over a handful of parameters, and is used to choose ε in a series of simple statistical studies.
No free lunch in data privacy
TLDR
This paper argues that privacy of an individual is preserved when it is possible to limit the inference of an attacker about the participation of the individual in the data generating process, which is different from limiting the inference about the presence of a tuple.
Interactive privacy via the median mechanism
TLDR
The median mechanism is the first privacy mechanism capable of identifying and exploiting correlations among queries in an interactive setting, and an efficient implementation is given, with running time polynomial in the number of queries, the database size, and the domain size.
Privacy via pseudorandom sketches
TLDR
This paper introduces a new technique, based on pseudorandom sketches, for describing parts of an individual's data; the sketches guarantee that each individual's privacy is provably maintained, without relying on any unproven cryptographic conjectures.
Pan-private algorithms via statistics on sketches
TLDR
This work presents the first known lower bounds explicitly for pan-privacy, stronger than those implied by differential privacy or dynamic data streaming alone, which hold even if unbounded memory and/or unbounded processing time are allowed.
Privacy via the Johnson-Lindenstrauss Transform
TLDR
This work shows that distance computation with privacy is an achievable goal by projecting each user's representation into a random, lower-dimensional space via a sparse Johnson-Lindenstrauss transform and then adding Gaussian noise to each entry of the lower-dimensional representation.
Calibrating Noise to Sensitivity in Private Data Analysis
TLDR
The study is extended to general functions f, proving that privacy can be preserved by calibrating the standard deviation of the noise according to the sensitivity of the function f, which is the amount that any single argument to f can change its output.
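The calibration described here is the Laplace mechanism: to release f(D) with ε-differential privacy, add noise drawn from Laplace(0, Δf/ε), where Δf is the sensitivity of f. A minimal sketch under that definition (function and parameter names are illustrative, not from the paper):

```python
import random

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value with eps-DP by adding Laplace(0, sensitivity/eps) noise."""
    scale = sensitivity / epsilon
    # The difference of two i.i.d. Exponential(rate = 1/scale) draws
    # is distributed as Laplace(0, scale).
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return true_value + noise
```

For a counting query, any single record changes the count by at most 1, so Δf = 1 and the noise scale is simply 1/ε.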
Distributed Private Heavy Hitters
TLDR
Efficient algorithms and lower bounds for solving the heavy hitters problem while preserving differential privacy in the fully distributed local model, with computationally efficient algorithms even in the case where the data universe N may be exponentially large.