Benchmarking Differentially Private Residual Networks for Medical Imagery

@article{Singh2020BenchmarkingDP,
  title={Benchmarking Differentially Private Residual Networks for Medical Imagery},
  author={Sahib Singh and Harshvardhan Digvijay Sikka},
  journal={ArXiv},
  year={2020},
  volume={abs/2005.13099}
}

In this paper we measure the effectiveness of $\epsilon$-Differential Privacy (DP) when applied to medical imaging. We compare two robust differential privacy mechanisms, Local-DP and DP-SGD, and benchmark their performance when analyzing medical imagery records. We analyze the trade-off between a model's accuracy and the level of privacy it guarantees, and also take a closer look at how useful these theoretical privacy guarantees actually prove to be in a real-world medical setting…
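
As a rough illustration of one of the two mechanisms benchmarked here, the sketch below implements a single DP-SGD step in plain PyTorch: clip each per-example gradient to L2 norm C, sum, add Gaussian noise, average, and update. It is a minimal sketch, not the authors' implementation; the tiny linear model and the values of C, sigma, and lr are illustrative assumptions.

    # Minimal DP-SGD step: per-example clipping + Gaussian noise.
    # Model, C, sigma, and lr are illustrative, not from the paper.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    model = nn.Linear(16, 2)              # stand-in for a residual network
    loss_fn = nn.CrossEntropyLoss()
    C, sigma, lr = 1.0, 1.1, 0.05         # clip norm, noise multiplier, step size

    x = torch.randn(8, 16)                # one batch of 8 flattened "images"
    y = torch.randint(0, 2, (8,))

    summed = [torch.zeros_like(p) for p in model.parameters()]
    for i in range(x.size(0)):            # per-example gradients
        model.zero_grad()
        loss_fn(model(x[i:i+1]), y[i:i+1]).backward()
        norm = torch.sqrt(sum(p.grad.norm() ** 2 for p in model.parameters()))
        scale = min(1.0, C / (norm.item() + 1e-12))   # clip to L2 norm C
        for s, p in zip(summed, model.parameters()):
            s.add_(p.grad, alpha=scale)

    with torch.no_grad():                 # noise scale calibrated to sigma * C
        for s, p in zip(summed, model.parameters()):
            noisy_avg = (s + sigma * C * torch.randn_like(s)) / x.size(0)
            p.sub_(lr * noisy_avg)

In practice the cumulative privacy cost $\epsilon$ of these noisy steps is tracked with a moments accountant, as in the Abadi et al. reference listed below.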
Neither Private Nor Fair: Impact of Data Imbalance on Utility and Fairness in Differential Privacy
TLDR: It is demonstrated that even small imbalances and loose privacy guarantees can cause disparate impacts within differentially private deep learning.
Benchmarking Differential Privacy and Federated Learning for BERT Models
TLDR: This work studies the effects that applying Differential Privacy has, in both a centralized and a Federated Learning (FL) setup, on training contextualized language models (BERT), and offers insights on how to privately train NLP models and on which architectures and setups provide the more desirable privacy-utility trade-offs.
DP-SGD vs PATE: Which Has Less Disparate Impact on Model Accuracy?
TLDR: It is shown that PATE does have a disparate impact too; however, it is much less severe than that of DP-SGD. Insights are drawn on promising directions for achieving better fairness-privacy trade-offs.
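
For context, PATE's core primitive (from Papernot et al.) is a noisy-majority vote over an ensemble of teacher models. The sketch below shows that aggregation step; the vote counts and gamma are made up for illustration.

    # PATE noisy-max aggregation: add Laplace noise (scale 1/gamma) to the
    # teachers' per-class vote counts and release the argmax label.
    import numpy as np

    rng = np.random.default_rng(0)
    votes = np.array([62.0, 30.0, 8.0])   # 100 teachers voting over 3 classes
    gamma = 0.5                            # smaller gamma -> more noise, more privacy

    noisy_votes = votes + rng.laplace(scale=1.0 / gamma, size=votes.shape)
    student_label = int(np.argmax(noisy_votes))   # label used to train the student
    print(student_label)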
When Differential Privacy Meets Interpretability: A Case Study
TLDR: This work presents an extensive study of the effects of DP training on DNNs, focusing on medical imaging applications using the APTOS dataset.
Privacy in Deep Learning: A Survey
TLDR: This survey reviews the privacy concerns raised by deep learning and the mitigation techniques introduced to tackle them, and shows that there is a gap in the literature regarding test-time inference privacy.

References

Showing 1-10 of 27 references
Differential Privacy for Image Publication
Rapidly generated image data can be shared to advance research and benefit various communities. However, sensitive individual information captured in the data, such as license plates and identities, …
Deep Learning with Differential Privacy
TLDR: This work develops new algorithmic techniques for learning and a refined analysis of privacy costs within the framework of differential privacy, and demonstrates that deep neural networks can be trained with non-convex objectives, under a modest privacy budget, and at a manageable cost in software complexity, training efficiency, and model quality.
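
The mechanism analyzed in this reference, DP-SGD, clips each per-example gradient and adds Gaussian noise before the update. Restated here for reference, with batch $B$, clipping norm $C$, noise multiplier $\sigma$, and learning rate $\eta$:

    $\bar{g}_t(x_i) = g_t(x_i) / \max\big(1, \lVert g_t(x_i) \rVert_2 / C\big)$, where $g_t(x_i) = \nabla_\theta \mathcal{L}(\theta_t, x_i)$
    $\tilde{g}_t = \frac{1}{|B|} \Big( \sum_{i \in B} \bar{g}_t(x_i) + \mathcal{N}(0, \sigma^2 C^2 \mathbf{I}) \Big)$, and $\theta_{t+1} = \theta_t - \eta \, \tilde{g}_t$

The paper's moments accountant then tracks the cumulative $(\epsilon, \delta)$ cost across training steps.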
Local Differential Privacy: a tutorial
TLDR: An overview of different LDP algorithms for problems such as locally private heavy-hitter identification and spatial data collection is given, along with an outlook on open problems in LDP.
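
The canonical local-DP primitive is randomized response: each user perturbs their own bit before it ever leaves the device, so no trusted aggregator is needed. A minimal sketch; the value of eps and the sample size are arbitrary choices for illustration.

    # Randomized response: report the true bit with probability
    # e^eps / (e^eps + 1), otherwise the flipped bit (eps-local DP).
    import math
    import random

    def randomized_response(true_bit, eps):
        p_truth = math.exp(eps) / (math.exp(eps) + 1.0)
        return true_bit if random.random() < p_truth else 1 - true_bit

    def debias(noisy_mean, eps):
        # invert E[report] = (1 - p) + mean * (2p - 1)
        p = math.exp(eps) / (math.exp(eps) + 1.0)
        return (noisy_mean - (1.0 - p)) / (2.0 * p - 1.0)

    random.seed(0)
    reports = [randomized_response(1, eps=1.0) for _ in range(10000)]
    print(debias(sum(reports) / len(reports), eps=1.0))  # close to the true mean 1.0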
Secure and Robust Machine Learning for Healthcare: A Survey
TLDR: An overview of healthcare application areas that leverage ML techniques is presented from a security and privacy point of view, along with the associated challenges and potential methods for ensuring secure and privacy-preserving ML in healthcare applications.
A generic framework for privacy preserving deep learning
TLDR: A new framework for privacy-preserving deep learning is detailed that allows one to implement complex privacy-preserving constructs such as Federated Learning, Secure Multiparty Computation, and Differential Privacy while still exposing a familiar deep-learning API to the end user.
What Can We Learn Privately?
TLDR: This work investigates learning algorithms that satisfy differential privacy, a notion that provides strong confidentiality guarantees in contexts where aggregate information is released about a database containing sensitive information about individuals.
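
For reference, the guarantee studied in this line of work is the standard one: a randomized mechanism $M$ is $\epsilon$-differentially private if, for all neighboring databases $D, D'$ differing in a single record and every set $S$ of outputs,

    $\Pr[M(D) \in S] \le e^{\epsilon} \cdot \Pr[M(D') \in S]$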
Identifying Medical Diagnoses and Treatable Diseases by Image-Based Deep Learning
TLDR: A diagnostic tool based on a deep-learning framework is presented for screening patients with common treatable blinding retinal diseases; it demonstrates performance comparable to that of human experts in classifying age-related macular degeneration and diabetic macular edema.
Secure, privacy-preserving and federated machine learning in medical imaging
TLDR: An overview of current and next-generation methods for federated, secure, and privacy-preserving artificial intelligence is presented, with a focus on medical imaging applications, alongside potential attack vectors and future prospects in medical imaging and beyond.
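
A common aggregation pattern in the federated setting surveyed here is federated averaging (FedAvg): clients train locally and the server combines their parameters weighted by local dataset size, so raw images never leave the clients. The client parameters and sample counts below are illustrative placeholders.

    # FedAvg aggregation step: sample-size-weighted average of client params.
    import numpy as np

    client_params = [np.array([0.2, -1.0]), np.array([0.4, -0.8]), np.array([0.1, -1.2])]
    client_samples = [120, 300, 80]       # local dataset sizes

    total = sum(client_samples)
    global_params = sum(w * (n / total) for w, n in zip(client_params, client_samples))
    print(global_params)                  # the server's updated global model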
A Principled Approach to Learning Stochastic Representations for Privacy in Deep Neural Inference
TLDR: This paper sets out to provide a principled approach, dubbed Cloak, that finds optimal stochastic perturbations to obfuscate the private data before it is sent to the cloud while preserving the essential pieces that enable the request to be serviced accurately.
Calibrating Noise to Sensitivity in Private Data Analysis
TLDR: The study is extended to general functions f, proving that privacy can be preserved by calibrating the standard deviation of the noise according to the sensitivity of the function f, which is the amount by which any single argument to f can change its output.
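
Concretely, this yields the Laplace mechanism: for a query $f$ with $\ell_1$-sensitivity $\Delta f = \max_{D \sim D'} \lVert f(D) - f(D') \rVert_1$ over neighboring databases, releasing

    $M(D) = f(D) + \mathrm{Lap}(\Delta f / \epsilon)$

satisfies $\epsilon$-differential privacy.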