Not one but many Tradeoffs: Privacy Vs. Utility in Differentially Private Machine Learning

@article{Zhao2020NotOB,
  title={Not one but many Tradeoffs: Privacy Vs. Utility in Differentially Private Machine Learning},
  author={Benjamin Zi Hao Zhao and M. K{\^a}afar and Nicolas Kourtellis},
  journal={Proceedings of the 2020 ACM SIGSAC Conference on Cloud Computing Security Workshop},
  year={2020}
}
Data holders are increasingly seeking to protect their users' privacy, whilst still maximizing their ability to produce machine learning (ML) models with high quality predictions. In this work, we empirically evaluate various implementations of differential privacy (DP), and measure their ability to fend off real-world privacy attacks, in addition to measuring their core goal of providing accurate classifications. We establish an evaluation framework to ensure each of these implementations are…
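
To make the kind of trade-off the abstract describes concrete, here is a minimal, hypothetical sketch that sweeps the privacy budget ε of a differentially private classifier and records test accuracy. It uses IBM's diffprivlib (one of the referenced libraries) and the scikit-learn breast-cancer dataset purely as illustrative choices; it is not the paper's evaluation framework, models, or datasets.

```python
# Illustrative privacy/utility sweep with diffprivlib's DP logistic regression.
# Dataset, model, and epsilon grid are assumptions, not the paper's setup.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from diffprivlib.models import LogisticRegression  # pip install diffprivlib

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# diffprivlib needs a bound on each sample's L2 norm to calibrate its noise;
# taking it from the data itself (done here for brevity) is itself a small leak.
data_norm = float(np.linalg.norm(X_train, axis=1).max())

for epsilon in (0.01, 0.1, 1.0, 10.0):
    clf = LogisticRegression(epsilon=epsilon, data_norm=data_norm)
    clf.fit(X_train, y_train)
    print(f"epsilon={epsilon}: test accuracy = {clf.score(X_test, y_test):.3f}")
```

As ε grows, the added noise shrinks and accuracy typically approaches that of a non-private model, which is one axis of the trade-offs studied in the paper.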

Citations

Investigating Trade-offs in Utility, Fairness and Differential Privacy in Neural Networks
The DPF-NN was found to achieve a better risk difference than all the other neural networks, with only marginally lower accuracy than the S-NN and DP-NN; the model is considered fair as its risk difference fell below both the strict and lenient thresholds.
FLaaS: Federated Learning as a Service
Federated Learning as a Service (FLaaS) is presented: a system enabling different scenarios of third-party application collaborative model building, addressing the consequent challenges of permission and privacy management, usability, and hierarchical model training; FLaaS can be deployed in different operational environments.
Defending Privacy Against More Knowledgeable Membership Inference Attackers
  • Yu Yin, Ke Chen, L. Shou, Gang Chen
  • Computer Science
  • KDD
  • 2021
Membership Inference Attack (MIA) in deep learning is a common form of privacy attack that aims to infer whether a data sample is in a target classifier's training dataset or not. Previous studies…
PPFL: privacy-preserving federated learning with trusted execution environments
A Privacy-preserving Federated Learning (PPFL) framework for mobile systems is proposed to limit privacy leakage in federated learning; it can successfully defend the trained model against data reconstruction, property inference, and membership inference attacks.

References

Showing 1-10 of 43 references
Evaluating Differentially Private Machine Learning in Practice
There is a huge gap between the upper bounds on privacy loss that can be guaranteed, even with advanced mechanisms, and the effective privacy loss that can be measured using current inference attacks.
Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting
The effect that overfitting and influence have on an attacker's ability to learn information about the training data from machine learning models, through either training-set membership inference or attribute inference attacks, is examined.
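
Since this entry ties membership inference to overfitting, a hedged sketch of a simple loss-threshold membership test may help: guess "member" whenever the model's loss on an example is below its average training loss. The dataset, model, and threshold choice below are illustrative assumptions, not the paper's experiments.

```python
# Hedged sketch: loss-threshold membership inference driven by overfitting.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=0)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_in, y_in)

def per_example_loss(model, X, y):
    # Cross-entropy loss of each example, clipped to avoid log(0).
    p = np.clip(model.predict_proba(X)[np.arange(len(y)), y], 1e-12, 1.0)
    return -np.log(p)

threshold = per_example_loss(model, X_in, y_in).mean()  # average training loss

# Guess "member" when the loss is below the threshold; the more the model
# overfits, the further this accuracy climbs above the 0.5 baseline.
guesses = np.concatenate([per_example_loss(model, X_in, y_in) < threshold,
                          per_example_loss(model, X_out, y_out) < threshold])
truth = np.concatenate([np.ones(len(X_in), bool), np.zeros(len(X_out), bool)])
print("membership inference accuracy:", (guesses == truth).mean())
```
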
Differentially Private Naive Bayes Classification
This paper applies the model of differential privacy, which provides a strong privacy guarantee even if adversaries hold arbitrary prior knowledge, to develop a Naive Bayes classifier, which is often used as a baseline and consistently provides reasonable classification performance.
ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models
This is the most comprehensive study so far on this emerging and developing threat, using eight diverse datasets that show the viability of the proposed attacks across domains; it also proposes the first effective defense mechanisms against this broader class of membership inference attacks that maintain a high level of utility of the ML model.
Differentially Private Random Decision Forests using Smooth Sensitivity
A new differentially private decision forest algorithm is proposed that minimizes both the number of queries required and the sensitivity of those queries by using "smooth sensitivity", which takes into account the specific data used in the query rather than assuming the worst-case scenario.
Membership Inference Attacks Against Machine Learning Models
This work quantitatively investigates how machine learning models leak information about the individual data records on which they were trained, and empirically evaluates the inference techniques on classification models trained by commercial "machine learning as a service" providers such as Google and Amazon.
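
This is the shadow-model attack paper, so a hedged, minimal sketch of that idea (distinct from the loss-threshold test above) may be useful: shadow models with known training sets generate labelled confidence vectors, on which an attack classifier is trained and then applied to the real target. All datasets, model classes, and the shadow count are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch of a shadow-model membership inference attack.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=8000, n_features=30, n_informative=10,
                           n_classes=4, random_state=0)
# Disjoint pools: one for the victim (target) model, one for the shadow models.
X_tgt, X_shd, y_tgt, y_shd = train_test_split(X, y, test_size=0.5, random_state=0)
X_in, X_out, y_in, y_out = train_test_split(X_tgt, y_tgt, test_size=0.5, random_state=1)
target = RandomForestClassifier(random_state=0).fit(X_in, y_in)

# Shadow models imitate the target; because their training sets are known, they
# yield labelled (confidence vector -> member / non-member) attack training data.
feats, labels = [], []
for seed in range(5):
    Xs_in, Xs_out, ys_in, _ = train_test_split(X_shd, y_shd, test_size=0.5, random_state=seed)
    shadow = RandomForestClassifier(random_state=seed).fit(Xs_in, ys_in)
    feats += [shadow.predict_proba(Xs_in), shadow.predict_proba(Xs_out)]
    labels += [np.ones(len(Xs_in)), np.zeros(len(Xs_out))]
attack = RandomForestClassifier(random_state=0).fit(np.vstack(feats), np.concatenate(labels))

# Query the real target: accuracy near 0.5 means the attack learned nothing.
queries = np.vstack([target.predict_proba(X_in), target.predict_proba(X_out)])
truth = np.concatenate([np.ones(len(X_in)), np.zeros(len(X_out))])
print("membership inference accuracy:", attack.score(queries, truth))
```
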
The Algorithmic Foundations of Differential Privacy
The preponderance of this monograph is devoted to fundamental techniques for achieving differential privacy, and to the application of these techniques in creative combinations, using the query-release problem as an ongoing example.
Diffprivlib: The IBM Differential Privacy Library
The IBM Differential Privacy Library is presented: a general-purpose, open-source library for investigating, experimenting with, and developing differential privacy applications in the Python programming language.
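
As an illustrative usage note (a snippet written for this summary, not an excerpt from the library's documentation), diffprivlib also exposes its low-level mechanisms directly, for example adding Laplace noise to a single query answer:

```python
# Illustrative use of diffprivlib's low-level Laplace mechanism.
# The query value, sensitivity, and epsilon below are arbitrary assumptions.
from diffprivlib.mechanisms import Laplace

mech = Laplace(epsilon=0.5, sensitivity=1.0)   # counting query: sensitivity 1
true_count = 42
print(mech.randomise(true_count))              # noisy, differentially private answer
```
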
Functional Mechanism: Regression Analysis under Differential Privacy
The main idea is to enforce ε-differential privacy by perturbing the objective function of the optimization problem, rather than its results, and it significantly outperforms existing solutions.
Calibrating Noise to Sensitivity in Private Data Analysis
The study is extended to general functions f, proving that privacy can be preserved by calibrating the standard deviation of the noise according to the sensitivity of the function f, which is the amount that any single argument to f can change its output.
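
A tiny from-scratch sketch of that calibration, under the assumption of a simple counting query and a bounded mean: the Laplace noise scale is set to sensitivity/ε, where the sensitivity is the most one record can change the query's answer.

```python
# From-scratch sketch: calibrate Laplace noise to sensitivity / epsilon.
# The toy dataset and queries are illustrative assumptions.
import numpy as np

def laplace_mechanism(true_answer, sensitivity, epsilon, rng=np.random.default_rng(0)):
    # Scale b = sensitivity / epsilon yields epsilon-differential privacy.
    return true_answer + rng.laplace(scale=sensitivity / epsilon)

ages = np.array([23, 37, 41, 52, 29, 64])

# A counting query changes by at most 1 when one record is changed, so its
# sensitivity is 1; a mean over n records bounded in [0, 100] has sensitivity 100 / n.
count_over_40 = int(np.sum(ages > 40))
print(laplace_mechanism(count_over_40, sensitivity=1.0, epsilon=0.5))
print(laplace_mechanism(ages.mean(), sensitivity=100 / len(ages), epsilon=0.5))
```
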