Corpus ID: 245650297

Differential Privacy Made Easy

Muhammad Aitsam
Data privacy has been a major issue for decades; several techniques have been developed to protect individuals' privacy, yet the world has still seen privacy failures. In 2006, Cynthia Dwork introduced Differential Privacy, which gives strong theoretical guarantees for data privacy. Many companies and research institutes have developed differential privacy libraries, but to obtain differentially private results, users must tune the privacy parameters. In this paper, we minimized these… 

Differential Privacy: A Survey of Results

This survey recalls the definition of differential privacy and two basic techniques for achieving it, and shows some interesting applications of these techniques, presenting algorithms for three specific tasks and three general results on differentially private learning.

diffpriv: An R Package for Easy Differential Privacy

The R package diffpriv provides tools for statistics and machine learning under differential privacy, including implementations of generic mechanisms for privatizing non-private target functions and an extensible framework for implementing differentially private mechanisms.

Differential Privacy Under Fire

This work presents a detailed design for one specific solution, based on a new primitive the authors call predictable transactions together with a simple differentially private programming language, which is effective against remotely exploitable covert channels at the expense of a higher query completion time.

The Promise of Differential Privacy: A Tutorial on Algorithmic Techniques

  • C. Dwork
  • Computer Science
    2011 IEEE 52nd Annual Symposium on Foundations of Computer Science
  • 2011
To enjoy the fruits of the research described in this tutorial, the data analyst must accept that raw data can never be accessed directly and that eventually data utility is consumed: overly accurate answers to too many questions will destroy privacy.
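The budget-consumption idea in this summary can be sketched as a simple sequential-composition accountant (a hypothetical helper for illustration, not code from the tutorial):

```python
class PrivacyBudget:
    """Tracks a total epsilon budget under basic sequential composition."""

    def __init__(self, total_epsilon: float):
        self.total = total_epsilon
        self.spent = 0.0

    def charge(self, epsilon: float) -> None:
        # Under basic sequential composition, the epsilons of successive
        # queries add up; refuse further queries once the budget is gone.
        if self.spent + epsilon > self.total:
            raise RuntimeError("privacy budget exhausted")
        self.spent += epsilon
```

Once `spent` reaches `total`, the analyst's utility is consumed in exactly the sense described above: answering more questions accurately would destroy privacy, so the accountant refuses.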

An Ad Omnia Approach to Defining and Achieving Private Data Analysis

This work examines two comprehensive, or ad omnia, guarantees for privacy in statistical databases discussed in the literature, notes that one is unachievable, and describes implementations of the other.

Generalized Gaussian Mechanism for Differential Privacy

  • Fang Liu
  • Computer Science
    IEEE Transactions on Knowledge and Data Engineering
  • 2019
This paper generalizes the widely used Laplace mechanism to the family of generalized Gaussian (GG) mechanisms, and presents a lower bound on the scale parameter of the Gaussian mechanism for $(\epsilon, \delta)$-probabilistic DP.
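For context, the standard Laplace and Gaussian mechanisms that the GG family generalizes can be sketched as follows (a minimal illustration using the well-known calibration formulas, not code from the paper; the classic Gaussian calibration shown assumes epsilon < 1):

```python
import math
import random

def laplace_noise(scale: float, rng=random) -> float:
    # Inverse-CDF sampling: for U ~ Uniform(-1/2, 1/2),
    # -scale * sgn(U) * ln(1 - 2|U|) is Laplace(0, scale).
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def laplace_mechanism(true_value, sensitivity, epsilon, rng=random):
    # epsilon-DP: Laplace noise with scale = sensitivity / epsilon.
    return true_value + laplace_noise(sensitivity / epsilon, rng)

def gaussian_mechanism(true_value, sensitivity, epsilon, delta, rng=random):
    # Classic (epsilon, delta)-DP calibration (valid for epsilon < 1):
    # sigma >= sqrt(2 ln(1.25/delta)) * sensitivity / epsilon.
    sigma = math.sqrt(2.0 * math.log(1.25 / delta)) * sensitivity / epsilon
    return true_value + rng.gauss(0.0, sigma)
```

Both mechanisms add zero-mean noise, so averaging many releases of the same statistic recovers the true value; the GG family interpolates between such shapes via a tail-exponent parameter.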

A Multiplicative Weights Mechanism for Privacy-Preserving Data Analysis

A new differentially private multiplicative weights mechanism for answering a large number of interactive counting (or linear) queries that arrive online and may be adaptively chosen, and it is shown that when the input database is drawn from a smooth distribution — a distribution that does not place too much weight on any single data item — accuracy remains as above, and the running time becomes poly-logarithmic in the data universe size.
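The core multiplicative-weights step behind such mechanisms can be sketched as follows (an illustrative update rule in the spirit of the paper, not its actual algorithm): maintain a distribution over the data universe and, when the synthetic answer to a counting query disagrees with the noisy true answer, reweight the items the query selects.

```python
import math

def mw_update(weights, query, noisy_answer, estimate, eta=0.5):
    """One multiplicative-weights step on a distribution over the universe.

    weights: current distribution (sums to 1); query: 0/1 indicator per item;
    noisy_answer: noisy true answer; estimate: answer under `weights`.
    """
    # Up-weight selected items if the estimate is too low, down-weight if
    # it is too high, then renormalize to keep a probability distribution.
    sign = 1.0 if noisy_answer > estimate else -1.0
    new = [w * math.exp(sign * eta * q) for w, q in zip(weights, query)]
    total = sum(new)
    return [w / total for w in new]
```

Repeated over a sequence of queries, updates like this converge on a synthetic distribution whose answers track the noisy answers, which is what lets the mechanism serve many adaptively chosen queries from one privacy budget.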

Using Randomized Response for Differential Privacy Preserving Data Collection

This paper studies how to enforce differential privacy by using randomized response in the data collection scenario, theoretically derives the explicit formula for the mean squared error of various statistics based on randomized response theory, and proves that randomized response outperforms the Laplace mechanism.
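The classic randomized-response construction the summary refers to can be sketched as follows (a standard textbook version, not the paper's exact estimator): each respondent reports their true bit with probability e^ε/(1+e^ε), which satisfies ε-local differential privacy, and the aggregator debiases the reported mean.

```python
import math
import random

def randomized_response(true_bit: int, epsilon: float, rng=random) -> int:
    # Report truthfully with probability p = e^eps / (1 + e^eps);
    # otherwise flip the bit.
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return true_bit if rng.random() < p else 1 - true_bit

def estimate_proportion(reports, epsilon: float) -> float:
    # Debias: E[report] = p*pi + (1-p)*(1-pi)
    #   =>  pi = (mean - (1-p)) / (2p - 1).
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    mean = sum(reports) / len(reports)
    return (mean - (1.0 - p)) / (2.0 * p - 1.0)
```

Because each report is individually randomized, no trusted curator is needed, which is what makes the comparison with the (centralized) Laplace mechanism interesting.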

An Enhanced K-Anonymity Model against Homogeneity Attack

A model is proposed based on the average leakage probability and the probability difference of sensitive attribute values, which minimizes generalization of the data during the anonymization procedure and preserves the effectiveness of quasi-identifier attributes.

How To Break Anonymity of the Netflix Prize Dataset

This work presents a new class of statistical de-anonymization attacks against high-dimensional micro-data, such as individual preferences, recommendations, transaction records and so on, and demonstrates that an adversary who knows only a little bit about an individual subscriber can easily identify this subscriber's record in the dataset.