Privug: Using Probabilistic Programming for Quantifying Leakage in Privacy Risk Analysis

@inproceedings{Pardo2021PrivugUP,
  title={Privug: Using Probabilistic Programming for Quantifying Leakage in Privacy Risk Analysis},
  author={Ra{\'u}l Pardo and Willard Rafnsson and Christian Probst and Andrzej Wasowski},
  booktitle={ESORICS},
  year={2021}
}
Disclosure of data analytics results has important scientific and commercial justifications. However, no data shall be disclosed without a diligent investigation of risks for the privacy of subjects. Privug is a tool-supported method to explore information leakage properties of data analytics and anonymization programs. In Privug, we reinterpret a program probabilistically, using off-the-shelf tools for Bayesian inference to perform information-theoretic analysis of the information flow. For…
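The core idea in the abstract can be sketched in a few lines. This is a minimal illustration, not the actual Privug implementation: the attacker's prior knowledge is modeled as a distribution over the secret input, the program is run on samples from that prior, and the mutual information between secret and output is estimated as a leakage measure. The example program and prior are hypothetical.

```python
import math
import random
from collections import Counter

def program(secret):
    # Hypothetical analyzed program: releases whether the age is over 40.
    return secret > 40

def estimate_mutual_information(prog, prior_sampler, n=50_000):
    """Monte Carlo estimate of I(secret; output) in bits: sample secrets
    from the attacker's prior, run the program, and compare the empirical
    joint distribution with the product of its marginals."""
    joint, secrets, outputs = Counter(), Counter(), Counter()
    for _ in range(n):
        s = prior_sampler()
        o = prog(s)
        joint[(s, o)] += 1
        secrets[s] += 1
        outputs[o] += 1
    mi = 0.0
    for (s, o), c in joint.items():
        p_joint = c / n
        p_indep = (secrets[s] / n) * (outputs[o] / n)
        mi += p_joint * math.log2(p_joint / p_indep)
    return mi

random.seed(0)
# Attacker prior: age uniformly distributed over 20..60.
leak_bits = estimate_mutual_information(program, lambda: random.randint(20, 60))
```

Since the boolean output here is a near-balanced function of the secret, the estimate comes out close to one bit: the program leaks roughly one bit of the secret age.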


References

Showing 1-10 of 67 references.
The Algorithmic Foundations of Differential Privacy
The preponderance of this monograph is devoted to fundamental techniques for achieving differential privacy, and to applying these techniques in creative combinations, using the query-release problem as an ongoing example.
Calibrating Noise to Sensitivity in Private Data Analysis
Provides a clean definition of privacy, now known as differential privacy, and a measure of its loss, proving that privacy can be preserved by calibrating the standard deviation of the noise to the sensitivity of the function f.
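The calibration result summarized above is realized by the Laplace mechanism: adding noise with scale sensitivity/ε yields ε-differential privacy. A minimal sketch, where the inverse-CDF sampler and the counting-query example are illustrative choices:

```python
import math
import random

def laplace_mechanism(true_answer, sensitivity, epsilon, rng=random):
    """Release true_answer + Laplace(0, sensitivity/epsilon) noise.
    For a query whose output changes by at most `sensitivity` when one
    record changes, this satisfies epsilon-differential privacy."""
    scale = sensitivity / epsilon
    u = rng.random() - 0.5  # uniform on (-0.5, 0.5)
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_answer + noise

random.seed(1)
# A counting query ("how many records satisfy P?") has sensitivity 1:
# adding or removing one record changes the count by at most 1.
noisy_count = laplace_mechanism(true_answer=42, sensitivity=1.0, epsilon=0.5)
```

Smaller ε means a larger noise scale and stronger privacy; the noise is unbiased, so repeated releases average toward the true answer, which is why the privacy budget must cover all queries together.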
L-diversity: privacy beyond k-anonymity
Shows with two simple attacks that a k-anonymized dataset has some subtle but severe privacy problems, and proposes a novel and powerful privacy definition called ℓ-diversity, which is practical and can be implemented efficiently.
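The attacks summarized above exploit equivalence classes whose sensitive values are homogeneous. The simplest instantiation of the proposed definition, distinct ℓ-diversity, requires at least ℓ distinct sensitive values per class; a small illustrative check, with hypothetical data:

```python
def is_distinct_l_diverse(sensitive_values, l):
    """Distinct l-diversity: an equivalence class (records sharing the
    same quasi-identifier values) must contain at least l distinct
    sensitive values."""
    return len(set(sensitive_values)) >= l

# Homogeneity attack: a 4-anonymous class whose sensitive attribute is
# identical for every record reveals the diagnosis with certainty.
homogeneous = ["flu", "flu", "flu", "flu"]
diverse = ["flu", "cancer", "hepatitis", "flu"]
```

The `homogeneous` class satisfies k-anonymity for k=4 yet fails 2-diversity, while the `diverse` class satisfies 3-diversity.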
LeakWatch: Estimating Information Leakage from Java Programs
Demonstrates how LeakWatch can be used to estimate the size of information leaks in a range of real-world Java programs, and presents a new theoretical result connecting these estimates to mutual information.
On the Bayes risk in information-hiding protocols
Presents a constructive characterization of a convex base of the probability of error, which makes it possible to compute its maximum value over all possible input distributions, and to identify upper bounds for it in terms of simple functions.
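The probability of error in question is the Bayes risk: the chance that an attacker who guesses the most likely secret after each observation is wrong. A direct computation over a small channel, where the prior and channel matrix are made up for illustration:

```python
def bayes_risk(prior, channel):
    """Bayes risk: probability that an attacker who guesses the most
    likely secret after each observation is wrong,
    R = 1 - sum over o of max over s of p(s) * p(o | s)."""
    observations = {o for row in channel.values() for o in row}
    return 1.0 - sum(
        max(prior[s] * channel[s].get(o, 0.0) for s in prior)
        for o in observations
    )

# Hypothetical channel: two secrets, two observations; each observation
# narrows down but does not fully determine the secret.
prior = {"s0": 0.5, "s1": 0.5}
channel = {"s0": {"o0": 0.9, "o1": 0.1},
           "s1": {"o0": 0.2, "o1": 0.8}}
risk = bayes_risk(prior, channel)
```

For this channel the attacker guesses s0 on o0 and s1 on o1, succeeding with probability 0.45 + 0.40 = 0.85, so the Bayes risk is 0.15.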
Synthesis of Probabilistic Privacy Enforcement
Presents a novel synthesis approach that automatically transforms a program into one that complies with a given policy, and phrases the problem of determining the amount of leaked information as Bayesian inference, which enables it to leverage existing probabilistic programming engines.
A Tool for Estimating Information Leakage
Presents leakiEst, a tool that estimates how much information leaks from systems, calculates confidence intervals for these estimates, and tests whether they represent real evidence of an information leak in the system.
Belief in information flow
A model is developed that describes how attacker beliefs change due to the attacker's observation of the execution of a probabilistic (or deterministic) program, which leads to a new metric for quantitative information flow that measures accuracy rather than uncertainty of beliefs.
Probabilistic Point-to-Point Information Leakage
Develops an information leakage model that can measure the leakage between arbitrary points in a probabilistic program, and extends it to address both non-terminating programs and user input.