Corpus ID: 245117940

Are We There Yet? Timing and Floating-Point Attacks on Differential Privacy Systems

Jiankai Jin, Eleanor McMurtry, Benjamin I. P. Rubinstein, Olga Ohrimenko
Differential privacy is a de facto standard privacy framework that has seen adoption in practice via a number of mature software platforms. Implementation of differentially private (DP) mechanisms has to be done carefully to ensure end-to-end security guarantees. In this paper we study two implementation flaws in the noise generation commonly used in DP systems. First we examine the Gaussian mechanism’s susceptibility to a floating-point representation attack. The premise of this first vulnerability is…
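The floating-point premise can be illustrated with a toy sketch. This is not the paper's actual attack; `naive_laplace` is a hypothetical stand-in for a textbook inverse-transform noise sampler. The point it demonstrates is that adding a secret value to fine-grained noise in double precision rounds the noise onto a coarser grid, so the set of attainable outputs depends on the secret:

```python
import math
import random

random.seed(1)

def naive_laplace(scale=1.0):
    """Hypothetical textbook sampler: one-sided inverse-transform
    exponential noise, standing in for a naive Laplace implementation."""
    u = random.random()          # uniform double in [0, 1)
    return -scale * math.log(1.0 - u)

trials = snapped = 0
for _ in range(20_000):
    n = naive_laplace()
    if n >= 1.0:                 # keep fl(1.0 + n) inside [1.0, 2.0)
        continue
    trials += 1
    # Adding the secret 1.0 rounds n onto the coarse 2**-52 grid of
    # doubles in [1.0, 2.0); subtracting 1.0 back is exact (Sterbenz),
    # so any discrepancy is rounding introduced by the addition.
    if (1.0 + n) - 1.0 != n:
        snapped += 1

# Most samples are altered by the addition, so the set of doubles
# reachable as fl(1.0 + n) differs from the set reachable as
# fl(0.0 + n): an input-dependent fingerprint a distinguisher can use.
```

A real attack turns this fingerprint into a distinguishing test between candidate inputs; mitigations include discrete noise distributions and careful rounding, as in several of the papers listed below.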


Getting a-Round Guarantees: Floating-Point Attacks on Certified Robustness
A rounding search method is designed that can exploit this vulnerability to find adversarial examples within the certified radius, validating the importance of accounting for the error rate of the robustness guarantees of such classifiers in practice.
Doubly Efficient Interactive Proofs over Infinite and Non-Commutative Rings
The core conclusion of the results is that state-of-the-art doubly efficient interactive proofs do not require much algebraic structure, which enables exact rather than approximate computation over infinite rings, as well as “agile” proof systems where the black-box choice of the underlying ring can be easily switched over the software life cycle.
Conflicting Interactions Among Protections Mechanisms for Machine Learning Models
Nowadays, systems based on machine learning (ML) are widely used in different domains. Given their popularity, ML models have become targets for various attacks. As a result, research at the…


Implementing the Exponential Mechanism with Base-2 Differential Privacy
This work examines the practicalities of implementing the exponential mechanism of McSherry and Talwar and shows that the mechanism can be implemented exactly for a rich set of utility functions and values of the privacy parameter epsilon with limited practical overhead in running time and minimal code complexity.
Secure Random Sampling in Differential Privacy
This paper presents a practical solution to the finite-precision floating-point vulnerability, where the inverse transform sampling of the Laplace distribution can itself be inverted, enabling an attack in which the original value can be retrieved with non-negligible advantage.
Differential Privacy on Finite Computers
Strict polynomial-time discrete algorithms are provided for approximate histograms whose simultaneous accuracy matches that of the Laplace mechanism up to constant factors, while retaining the same (pure) differential privacy guarantee.
Differential Privacy Under Fire
This work presents a detailed design for one specific solution, based on a new primitive the authors call predictable transactions and a simple differentially private programming language, that is effective against remotely exploitable covert channels, at the expense of a higher query completion time.
On significance of the least significant bits for differential privacy
A new type of vulnerability present in many implementations of differentially private mechanisms is described, based on irregularities of floating-point implementations of the privacy-preserving Laplacian mechanism, which allows one to breach differential privacy with just a few queries into the mechanism.
Preserving differential privacy under finite-precision semantics
It is shown that in general there are violations of the differential privacy property, and conditions are given under which a limited (but, arguably, acceptable) variant of the property can still be guaranteed, at the cost of only a minor degradation of the privacy level.
An Algorithmic Framework For Differentially Private Data Analysis on Trusted Processors
This work proposes a framework based on trusted processors and a new definition of differential privacy called Oblivious Differential Privacy, which combines the best of both local and global models.
The Discrete Gaussian for Differential Privacy
This work theoretically and experimentally shows that adding discrete Gaussian noise provides essentially the same privacy and accuracy guarantees as the addition of continuous Gaussian noise, and presents a simple and efficient algorithm for exact sampling from this distribution.
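The exact-sampling idea can be sketched as follows. This is a minimal Python rendering of the rejection-sampling approach described in that line of work (a discrete Laplace proposal followed by an exact Bernoulli(exp(−γ)) acceptance test, all in rational arithmetic so no floating-point noise is involved); the function names and structure are illustrative, not the authors' reference code:

```python
import math
import random
from fractions import Fraction

random.seed(7)

def bernoulli(p):
    """Exact Bernoulli(p) for rational p in [0, 1]."""
    return random.randrange(p.denominator) < p.numerator

def bernoulli_exp(gamma):
    """Exact Bernoulli(exp(-gamma)) for rational gamma >= 0."""
    while gamma > 1:              # reduce: exp(-g) = exp(-1) * exp(-(g-1))
        if not bernoulli_exp(Fraction(1)):
            return False
        gamma -= 1
    k = 1                         # von Neumann-style series sampler
    while bernoulli(gamma / k):
        k += 1
    return k % 2 == 1

def discrete_laplace(t):
    """Exact sample with P(y) proportional to exp(-|y| / t), integer t >= 1."""
    while True:
        negative = random.randrange(2)
        k = 0                     # geometric magnitude
        while bernoulli_exp(Fraction(1, t)):
            k += 1
        if negative and k == 0:   # avoid double-counting zero
            continue
        return -k if negative else k

def discrete_gaussian(sigma_sq):
    """Exact sample with P(y) proportional to exp(-y**2 / (2 * sigma_sq))."""
    sigma_sq = Fraction(sigma_sq)
    t = math.floor(math.sqrt(sigma_sq)) + 1
    while True:
        y = discrete_laplace(t)
        gamma = (Fraction(abs(y)) - sigma_sq / t) ** 2 / (2 * sigma_sq)
        if bernoulli_exp(gamma):  # rejection step: Laplace -> Gaussian
            return y

samples = [discrete_gaussian(1) for _ in range(1500)]
```

Because every random choice reduces to comparing an integer draw against a rational threshold, the output distribution is exact over the integers, sidestepping the floating-point vulnerabilities discussed above.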
Improving the Gaussian Mechanism for Differential Privacy: Analytical Calibration and Optimal Denoising
An optimal Gaussian mechanism is developed whose variance is calibrated directly using the Gaussian cumulative distribution function instead of a tail bound approximation, and which is equipped with a post-processing step based on adaptive estimation techniques, leveraging that the distribution of the perturbation is known.
The Distributed Discrete Gaussian Mechanism for Federated Learning with Secure Aggregation
This work presents a comprehensive end-to-end system that appropriately discretizes the data and adds discrete Gaussian noise before performing secure aggregation, and provides a novel privacy analysis for sums of discrete Gaussians.