Benchmarking Differentially Private Residual Networks for Medical Imagery

Sahib Singh and Harshvardhan Digvijay Sikka
In this paper we measure the effectiveness of $\epsilon$-Differential Privacy (DP) when applied to medical imaging. We compare two robust differential privacy mechanisms, Local-DP and DP-SGD, and benchmark their performance when analyzing medical imagery records. We analyze the trade-off between the model's accuracy and the level of privacy it guarantees, and also evaluate how useful these theoretical privacy guarantees actually prove to be in a real-world medical setting…
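For context, the $\epsilon$-DP guarantee benchmarked here has a standard formal definition (due to Dwork et al.), sketched below:

```latex
% A randomized mechanism $M$ is $\epsilon$-differentially private if, for every
% pair of neighboring datasets $D, D'$ (differing in a single record) and every
% set of outputs $S$:
\Pr[M(D) \in S] \;\le\; e^{\epsilon} \cdot \Pr[M(D') \in S]
```

Smaller $\epsilon$ means the two distributions are harder to tell apart, i.e. a stronger privacy guarantee — which is the knob behind the accuracy/privacy trade-off studied in the paper.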
Neither Private Nor Fair: Impact of Data Imbalance on Utility and Fairness in Differential Privacy
It is demonstrated that even small imbalances and loose privacy guarantees can cause disparate impacts within differentially private deep learning.
Benchmarking Differential Privacy and Federated Learning for BERT Models
This work studies the effects of applying Differential Privacy, in both a centralized and a Federated Learning (FL) setup, to training contextualized language models (BERT), and offers insights into how to privately train NLP models and which architectures and setups provide more desirable privacy-utility trade-offs.
DP-SGD vs PATE: Which Has Less Disparate Impact on Model Accuracy?
It is shown that PATE also has a disparate impact; however, it is much less severe than that of DP-SGD. Insights are drawn on promising directions for achieving better fairness-privacy trade-offs.
When Differential Privacy Meets Interpretability: A Case Study
This work proposes an extensive study of the effects of DP training on DNNs, with a focus on medical imaging applications, using the APTOS dataset.
Privacy in Deep Learning: A Survey
This survey reviews the privacy concerns brought by deep learning and the mitigating techniques introduced to tackle these issues, and shows that there is a gap in the literature regarding test-time inference privacy.


Differential Privacy for Image Publication
Rapidly generated image data can be shared to advance research and benefit various communities. However, sensitive individual information captured in the data, such as license plates and identities, …
Deep Learning with Differential Privacy
This work develops new algorithmic techniques for learning and a refined analysis of privacy costs within the framework of differential privacy, and demonstrates that deep neural networks can be trained with non-convex objectives, under a modest privacy budget, and at a manageable cost in software complexity, training efficiency, and model quality.
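The core step this paper introduces (DP-SGD) — per-example gradient clipping followed by calibrated Gaussian noise — can be sketched as follows. This is a minimal NumPy illustration; the function name and default constants are illustrative, not taken from the paper:

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One noisy gradient aggregation in the style of DP-SGD (Abadi et al.).

    per_example_grads: array of shape (batch, dim), one gradient per example.
    Each gradient is clipped to L2 norm <= clip_norm, the clipped gradients
    are summed, Gaussian noise with std = noise_multiplier * clip_norm is
    added, and the result is averaged over the batch.
    """
    rng = rng or np.random.default_rng(0)
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    # Scale each gradient down only if its norm exceeds the clipping bound.
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale
    noisy_sum = clipped.sum(axis=0) + rng.normal(
        0.0, noise_multiplier * clip_norm, size=clipped.shape[1])
    return noisy_sum / len(per_example_grads)
```

Clipping bounds each example's influence on the update (its sensitivity), which is what lets the added Gaussian noise be translated into a formal privacy budget via the paper's moments accountant.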
Local Differential Privacy: a tutorial
An overview of different LDP algorithms for problems such as locally private heavy-hitter identification and spatial data collection is given, along with an outlook on open problems in LDP.
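As a concrete instance of a local DP mechanism in the spirit of this tutorial, classic randomized response perturbs each user's bit on-device before collection, and the aggregator debiases the noisy reports. A minimal sketch — function names are illustrative:

```python
import math
import random

def randomized_response(bit, epsilon, rng=None):
    """Report the true bit with probability e^eps / (e^eps + 1), else flip it.

    Each individual report satisfies epsilon-local differential privacy,
    without any trusted central curator.
    """
    rng = rng or random.Random(0)
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if rng.random() < p_truth else 1 - bit

def debias_mean(reports, epsilon):
    """Unbiased estimate of the true fraction of 1s from noisy reports."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed - (1 - p_truth)) / (2 * p_truth - 1)
```

The debiasing step inverts the known flip probability, so the population estimate converges to the truth even though no single report is trustworthy.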
Secure and Robust Machine Learning for Healthcare: A Survey
An overview of various application areas in healthcare that leverage ML techniques is presented from a security and privacy point of view, along with associated challenges and potential methods to ensure secure and privacy-preserving ML for healthcare applications.
A generic framework for privacy preserving deep learning
A new framework for privacy preserving deep learning is detailed that allows one to implement complex privacy preserving constructs such as Federated Learning, Secure Multiparty Computation, and Differential Privacy while still exposing a familiar deep learning API to the end-user.
What Can We Learn Privately?
This work investigates learning algorithms that satisfy differential privacy, a notion that provides strong confidentiality guarantees in contexts where aggregate information is released about a database containing sensitive information about individuals.
Identifying Medical Diagnoses and Treatable Diseases by Image-Based Deep Learning
A diagnostic tool based on a deep-learning framework for the screening of patients with common treatable blinding retinal diseases, which demonstrates performance comparable to that of human experts in classifying age-related macular degeneration and diabetic macular edema.
Secure, privacy-preserving and federated machine learning in medical imaging
An overview of current and next-generation methods for federated, secure and privacy-preserving artificial intelligence is presented, with a focus on medical imaging applications, alongside potential attack vectors and future prospects in medical imaging and beyond.
A Principled Approach to Learning Stochastic Representations for Privacy in Deep Neural Inference
This paper sets out to provide a principled approach, dubbed Cloak, that finds optimal stochastic perturbations to obfuscate the private data before it is sent to the cloud while conserving the essential pieces that enable the request to be serviced accurately.
Calibrating Noise to Sensitivity in Private Data Analysis
The study is extended to general functions f, proving that privacy can be preserved by calibrating the standard deviation of the noise according to the sensitivity of the function f, which is the amount that any single argument to f can change its output.
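The calibration described here — noise with scale proportional to sensitivity/$\epsilon$ — is the Laplace mechanism. A minimal sketch, with illustrative function name and defaults:

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng=None):
    """Release value + Laplace noise with scale = sensitivity / epsilon.

    sensitivity is the maximum change in the query output caused by
    altering any single record; the released value is then
    epsilon-differentially private.
    """
    rng = rng or np.random.default_rng(0)
    return value + rng.laplace(0.0, sensitivity / epsilon)
```

For example, a counting query ("how many patients have condition X?") has sensitivity 1, since adding or removing one patient changes the count by at most 1, so releasing the count plus Laplace(1/$\epsilon$) noise suffices.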