On significance of the least significant bits for differential privacy

@article{Mironov2012OnSO,
  title={On significance of the least significant bits for differential privacy},
  author={Ilya Mironov},
  journal={Proceedings of the 2012 ACM conference on Computer and communications security},
  year={2012}
}
  • Ilya Mironov
  • Published 16 October 2012
  • Computer Science, Mathematics
  • Proceedings of the 2012 ACM conference on Computer and communications security
We describe a new type of vulnerability present in many implementations of differentially private mechanisms. In particular, all four publicly available general purpose systems for differentially private computations are susceptible to our attack. The vulnerability is based on irregularities of floating-point implementations of the privacy-preserving Laplacian mechanism. Unlike its mathematical abstraction, the textbook sampling procedure results in a porous distribution over double-precision… 
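
The porosity the abstract refers to can be seen directly in the textbook construction. The following is a minimal illustrative sketch (my own Python, with my own names; it is not the paper's code): each output is a deterministic function of a single uniform double, so the mechanism's outputs occupy a sparse, input-dependent subset of the doubles rather than a continuum.

import math
import random

def textbook_laplace(scale: float) -> float:
    # Textbook inverse-transform sampling: draw U uniform on (-1/2, 1/2) and
    # return -scale * sgn(U) * ln(1 - 2|U|). (The measure-zero draw
    # u = -1/2, where log(0) fails, would need a guard in real code.)
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def laplace_mechanism(true_answer: float, eps: float, sensitivity: float = 1.0) -> float:
    # The Laplace mechanism as naively implemented: answer + Lap(sensitivity/eps).
    return true_answer + textbook_laplace(sensitivity / eps)

Since random.random() returns only doubles of the form k / 2^53 and log rounds its result, many doubles can never appear as noise, and the set of reachable outputs true_answer + noise differs between adjacent databases, which is what the attack exploits.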

Citations

Secure Random Sampling in Differential Privacy

TLDR
This paper presents a practical solution to the finite-precision floating-point vulnerability in which the inverse transform sampling of the Laplace distribution can itself be inverted, enabling an attack where the original value can be retrieved with non-negligible advantage.
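
To make the inversion concrete, here is a hedged sketch of the idea (my own illustration with assumed names; the cited paper's attack is more careful): because the textbook sampler maps one uniform double through monotone floating-point operations, a candidate noise value can be pulled back to the few uniforms that could have produced it, and re-running the forward map decides whether it is reachable at all.

import math

def ulp_step(x: float, k: int) -> float:
    # Move |k| units-in-the-last-place up (k > 0) or down (k < 0) from x.
    for _ in range(abs(k)):
        x = math.nextafter(x, math.inf if k > 0 else -math.inf)
    return x

def noise_reachable(y: float, scale: float, slack: int = 4) -> bool:
    # Invert m = -scale * ln(1 - 2u) for the magnitude m = |y|, then re-run
    # the forward map on the doubles nearest the analytic inverse.
    m = abs(y)
    if m == 0.0:
        return True
    u_approx = (1.0 - math.exp(-m / scale)) / 2.0
    for k in range(-slack, slack + 1):
        u = ulp_step(u_approx, k)
        if 0.0 < u < 0.5 and -scale * math.log(1.0 - 2.0 * u) == m:
            return True
    return False

def answer_consistent(output: float, candidate: float, scale: float, slack: int = 4) -> bool:
    # A candidate answer is plausible only if some double y with
    # fl(candidate + y) == output is a reachable noise value.
    for k in range(-slack, slack + 1):
        y = ulp_step(output - candidate, k)
        if candidate + y == output and noise_reachable(y, scale):
            return True
    return False

Running answer_consistent on an observed output against two adjacent candidate answers frequently rules one of them out, which is the "non-negligible advantage" the summary mentions.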

Implementing the Exponential Mechanism with Base-2 Differential Privacy

TLDR
This work examines the practicalities of implementing the exponential mechanism of McSherry and Talwar and shows that the mechanism can be implemented exactly for a rich set of utility functions and values of the privacy parameter epsilon with limited practical overhead in running time and minimal code complexity.
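
As a concrete illustration of exact base-2 arithmetic (a sketch under my own assumptions, not the paper's reference implementation): if utilities are non-negative integer costs and the base is 2^-k, every weight 2^(-k*cost) is a dyadic rational, so rational arithmetic plus one exact uniform integer draw selects an outcome with no floating-point rounding anywhere.

import secrets
from fractions import Fraction

def base2_exponential_mechanism(costs: list[int], k: int) -> int:
    # Outcome i gets weight 2^(-k * costs[i]); with non-negative integer
    # costs and a positive integer k, each weight is exactly 1 / 2^(k*c).
    weights = [Fraction(1, 2 ** (k * c)) for c in costs]
    total = sum(weights)
    # Sample uniformly on the weights' common grid of spacing 1 / 2^(k*max cost),
    # so every cumulative boundary lies exactly on the grid.
    grid = 2 ** (k * max(costs))
    r = Fraction(secrets.randbelow(int(total * grid)), grid)
    acc = Fraction(0)
    for i, w in enumerate(weights):
        acc += w
        if r < acc:
            return i
    raise AssertionError("unreachable: r < total by construction")

For example, base2_exponential_mechanism([0, 1, 3], k=1) returns index 0 with probability exactly 8/13. Using base 2 instead of base e means the effective privacy parameter is a multiple of k * ln 2 rather than an arbitrary real epsilon.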

Precision-based attacks and interval refining: how to break, then fix, differential privacy on finite computers

TLDR
This paper highlights a new class of vulnerabilities, called precision-based attacks, which affect several open-source libraries, and proposes a novel technique, called interval refining, which has minimal error, provable privacy, and broad applicability.

Differential Privacy on Finite Computers

TLDR
Strict polynomial-time discrete algorithms are provided for approximate histograms whose simultaneous accuracy matches that of the Laplace mechanism up to constant factors, while retaining the same (pure) differential privacy guarantee.

Are We There Yet? Timing and Floating-Point Attacks on Differential Privacy Systems

TLDR
This paper examines the Gaussian mechanism’s susceptibility to a floating-point representation attack, and demonstrates that several commonly used, state-of-the-art implementations of differential privacy are susceptible to these attacks.

Preserving differential privacy under finite-precision semantics

TLDR
It is shown that in general there are violations of the differential privacy property, and conditions are identified under which a limited (but, arguably, acceptable) variant of the property can still be guaranteed under only a minor degradation of the privacy level.

Towards Verifiable Differentially-Private Polling

TLDR
This paper follows an approach based on zero-knowledge proofs, specifically succinct non-interactive arguments of knowledge, as a verifiable computation technique to prove the correctness of a differentially private query output, and ensures that the guarantees of differential privacy hold despite the limitations of ZKPs, which operate over finite fields and have limited branching capabilities.

An Algorithmic Framework For Differentially Private Data Analysis on Trusted Processors

TLDR
This work proposes a framework based on trusted processors and a new definition of differential privacy called Oblivious Differential Privacy, which combines the best of both local and global models.

Differentially private data aggregation with optimal utility

TLDR
PrivaDA is presented, a novel design architecture for distributed differential privacy that leverages recent advances in secure multiparty computation on fixed- and floating-point arithmetic to overcome the limitations of previous approaches.

The Discrete Gaussian for Differential Privacy

TLDR
This work theoretically and experimentally shows that adding discrete Gaussian noise provides essentially the same privacy and accuracy guarantees as the addition of continuous Gaussian noise, and presents a simple and efficient algorithm for exact sampling from this distribution.
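
To show what exact sampling means here, below is a compact sketch in the spirit of the paper's algorithm (hedged: names and structure are mine): every random choice reduces to integer and rational arithmetic, so no transcendental function is ever evaluated in floating point and none of Mironov's irregularities can arise.

import math
import secrets
from fractions import Fraction

def bernoulli(p: Fraction) -> bool:
    # Exact Bernoulli(p) for rational p in [0, 1].
    return secrets.randbelow(p.denominator) < p.numerator

def bernoulli_exp(gamma: Fraction) -> bool:
    # Exact Bernoulli(exp(-gamma)) for rational gamma >= 0, via von
    # Neumann-style forward coin flipping.
    while gamma > 1:
        if not bernoulli_exp(Fraction(1)):
            return False
        gamma -= 1
    k = 1
    while bernoulli(gamma / k):
        k += 1
    return k % 2 == 1

def discrete_laplace(t: int) -> int:
    # Exact sample with P(X = x) proportional to exp(-|x| / t), t >= 1.
    while True:
        u = secrets.randbelow(t)
        if not bernoulli_exp(Fraction(u, t)):
            continue
        v = 0
        while bernoulli_exp(Fraction(1)):
            v += 1
        x = u + t * v
        if bernoulli(Fraction(1, 2)):
            if x == 0:
                continue  # reject the negative-zero branch
            return -x
        return x

def discrete_gaussian(sigma_sq: Fraction) -> int:
    # Exact sample with P(X = x) proportional to exp(-x^2 / (2 sigma^2)),
    # by rejection from a discrete Laplace proposal of comparable scale.
    t = math.isqrt(sigma_sq.numerator // sigma_sq.denominator) + 1
    while True:
        y = discrete_laplace(t)
        gamma = (abs(y) - sigma_sq / t) ** 2 / (2 * sigma_sq)
        if bernoulli_exp(gamma):
            return y

For instance, discrete_gaussian(Fraction(2)) returns an integer x with probability proportional to exp(-x^2 / 4), using only exact coin flips.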
...

References

Showing 1-10 of 48 references

Differential Privacy Under Fire

TLDR
This work presents a detailed design for one specific solution, based on a new primitive the authors call predictable transactions and on a simple differentially private programming language, that is effective against remotely exploitable covert channels, at the expense of a higher query completion time.

Noiseless Database Privacy

TLDR
This work proposes a new notion called Noiseless Privacy that provides exact answers to queries, without adding any noise whatsoever, and derives simple rules for composition under models of dynamically changing data.

Distance makes the types grow stronger: a calculus for differential privacy

TLDR
Rather than proving algorithms differentially private one at a time, this work proposes a functional language whose type system automatically guarantees differential privacy, allowing the programmer to write complex privacy-safe query programs in a flexible and compositional way.

Our Data, Ourselves: Privacy Via Distributed Noise Generation

TLDR
This work provides efficient distributed protocols for generating shares of random noise, secure against malicious participants, and introduces a technique for distributing shares of many unbiased coins with fewer executions of verifiable secret sharing than would be needed using previous approaches.

Differential Privacy with Imperfect Randomness

TLDR
A differentially private mechanism is given for approximating arbitrary "low sensitivity" functions that works even with randomness coming from a $\gamma$-Santha-Vazirani source, for any $\gamma < 1$, which provides a somewhat surprising "separation" between traditional privacy and differential privacy with respect to imperfect randomness.
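
For reference, $\gamma$ parametrizes the Santha-Vazirani source (this gloss is mine, stating the standard definition): a $\gamma$-SV source emits bits $b_1, b_2, \dots$ with $\frac{1-\gamma}{2} \le \Pr[b_i = 1 \mid b_1, \dots, b_{i-1}] \le \frac{1+\gamma}{2}$ for every prefix, so $\gamma = 0$ is perfect randomness and $\gamma$ close to 1 is nearly adversarial.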

On the Impossibility of Private Key Cryptography with Weakly Random Keys

TLDR
The present work begins to answer this question by establishing that a single weakly random source of either model cannot be used to obtain a secure "one-time-pad" type of cryptosystem.

GUPT: privacy preserving data analysis made easy

TLDR
The design and evaluation of a new system, GUPT, that guarantees differential privacy to programs not developed with privacy in mind, makes no trust assumptions about the analysis program, and is secure to all known classes of side-channel attacks.

On the (im)possibility of cryptography with imperfect randomness

TLDR
It is shown that certain cryptographic tasks like bit commitment, encryption, secret sharing, zero-knowledge, non-interactive zero-knowledge, and secure two-party computation for any non-trivial function are impossible to realize if parties have access only to entropy sources with slightly less-than-perfect entropy, i.e., sources with imperfect randomness.

Differentially-private network trace analysis

TLDR
It is concluded that differential privacy shows promise for a broad class of network analyses, though for some of them an approximate expression is required to keep the error level low.

Differential Privacy

  • C. Dwork
  • Computer Science
  • Encyclopedia of Cryptography and Security
  • 2006
TLDR
A general impossibility result is given showing that a formalization of Dalenius' goal along the lines of semantic security cannot be achieved, which suggests a new measure, differential privacy, which, intuitively, captures the increased risk to one's privacy incurred by participating in a database.