Corpus ID: 231728784

Dopamine: Differentially Private Federated Learning on Medical Data

by M. Malekzadeh, Burak Hasircioglu, Nitish Mital, Kunal Katarya, Mehmet Emre Ozfatura, Deniz Gunduz
While rich medical datasets are hosted in hospitals distributed across the world, concerns about patients' privacy are a barrier against using such data to train deep neural networks (DNNs) for medical diagnostics. We propose Dopamine, a system for training DNNs on distributed datasets, which employs federated learning (FL) with differentially private stochastic gradient descent (DPSGD) and, in combination with secure aggregation, can establish a better trade-off between differential privacy (DP…
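Concretely, in DPSGD each client clips every per-example gradient to a fixed L2 norm and adds calibrated Gaussian noise before the update leaves the client. A minimal numpy sketch of that client-side step (the function name and hyperparameter values are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def dp_client_update(per_example_grads, clip_norm=1.0, noise_mult=1.1, rng=None):
    """Client-side DPSGD step: clip each example's gradient to clip_norm
    in L2 norm, average, then add Gaussian noise scaled to the clip norm,
    so no single patient's record dominates the shared update."""
    rng = rng or np.random.default_rng(0)
    # Scale down any gradient whose norm exceeds the clipping threshold.
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    mean = np.mean(clipped, axis=0)
    # Noise standard deviation is proportional to the clip norm and
    # shrinks as the local batch grows.
    noise = rng.normal(0.0, noise_mult * clip_norm / len(clipped),
                       size=mean.shape)
    return mean + noise
```

With the noise multiplier set to zero, the function reduces to plain clipped-gradient averaging, which is a useful sanity check that the clipping bound holds.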

Communication-efficient federated learning for multi-institutional medical image classification
This paper proposes a communication-efficient FL framework based on adaptive server-client model transmission that not only maintains accuracy on non-i.i.d. datasets but also significantly reduces communication cost compared to other FL algorithms.
FedADC: Accelerated Federated Learning with Drift Control
Federated learning has become the de facto framework for collaborative learning among edge devices with privacy concerns, and it is shown that both problems can be addressed by a single strategy, without any major alteration to the FL framework or additional computation and communication load.
Federated Learning for Smart Healthcare: A Survey
A comprehensive survey on the use of Federated Learning in smart healthcare, including a state-of-the-art review on the emerging applications of FL in key healthcare domains, including health data management, remote health monitoring, medical imaging, and COVID-19 detection.
Efficient Hyperparameter Optimization for Differentially Private Deep Learning
This work formulates the problem as a general optimization framework for establishing a desirable privacy-utility trade-off, and systematically studies three cost-effective algorithms for use in the proposed framework: evolutionary, Bayesian, and reinforcement learning.
A Survey of Security Aggregation
This study surveys secure aggregation protocols from recent years, reviewing the results according to secret-sharing, differential-privacy, and homomorphic-encryption mechanisms, and provides an outlook on the future development of secure aggregation protocols along with possible research directions.
Byzantine-Robust and Privacy-Preserving Framework for FedML
This work proposes creating secure enclaves using a trusted execution environment (TEE) within the server, and performs a novel gradient encoding that enables TEEs to encode the gradients and offload Byzantine-check computations to accelerators such as GPUs.
Monitoring Motor Activity Data for Detecting Patients’ Depression Using Data Augmentation and Privacy-Preserving Distributed Learning
This paper presents a privacy-preserving approach for training classification models that predict depression, based on a new augmentation technique for motor activity data, and demonstrates its performance on the mental-health datasets of the Norwegian INTROducing Mental health through Adaptive Technology (INTROMAT) Project.
Distributed Learning in Wireless Networks: Recent Progress and Future Challenges
This paper provides a holistic set of guidelines on how to deploy a broad range of distributed learning frameworks over real-world wireless communication networks, including federated learning, federated distillation, distributed inference, and multi-agent reinforcement learning.
Applications of federated learning in smart cities: recent advances, taxonomy, and open challenges
This paper summarizes the latest research on applications of federated learning across smart-city domains, including the Internet of Things, transportation, communications, finance, and healthcare, along with the key enabling technologies and recent results.
Honest-but-Curious Nets: Sensitive Attributes of Private Inputs Can Be Secretly Coded into the Classifiers' Outputs
This work highlights a vulnerability that can be exploited by malicious machine learning service providers to attack their users' privacy in several seemingly safe scenarios, such as encrypted inference, computation at the edge, or private knowledge distillation.

References

Federated and Differentially Private Learning for Electronic Health Records
It is found that while it is straightforward to apply differentially private stochastic gradient descent to achieve strong privacy bounds when training in a centralized setting, it is considerably more difficult to do so in the federated setting.
Privacy-preserving Federated Brain Tumour Segmentation
The feasibility of applying differential-privacy techniques to protect patient data in a federated learning setup for brain tumour segmentation on the BraTS dataset is investigated, and a trade-off between model performance and privacy-protection cost is found.
Anonymizing Data for Privacy-Preserving Federated Learning
This paper proposes the first syntactic approach for offering privacy in the context of federated learning, which aims to maximize utility or model performance, while supporting a defensible level of privacy, as demanded by GDPR and HIPAA.
Secure, privacy-preserving and federated machine learning in medical imaging
An overview of current and next-generation methods for federated, secure, and privacy-preserving artificial intelligence is presented, with a focus on medical imaging applications, alongside potential attack vectors and future prospects in medical imaging and beyond.
A Hybrid Approach to Privacy-Preserving Federated Learning
This paper presents an alternative approach that utilizes both differential privacy and SMC to balance these trade-offs, reducing the growth of noise injection as the number of parties increases, without sacrificing privacy, while maintaining a pre-defined rate of trust.
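The SMC side of such a hybrid can be illustrated with pairwise additive masking, a standard secure-aggregation trick: each pair of parties shares a random mask that one adds and the other subtracts, so any individual update looks random on its own while the sum over all parties is unchanged. A minimal numpy sketch of the general idea (not the exact protocol from this paper):

```python
import numpy as np

def mask_updates(updates, rng=None):
    """Pairwise additive masking: for each pair (i, j), draw a random
    mask that party i adds and party j subtracts. Each masked update is
    hidden individually, but the masks cancel exactly in the sum."""
    rng = rng or np.random.default_rng(1)
    masked = [u.astype(float).copy() for u in updates]
    for i in range(len(updates)):
        for j in range(i + 1, len(updates)):
            m = rng.normal(size=updates[i].shape)
            masked[i] += m  # party i adds the shared mask
            masked[j] -= m  # party j subtracts the same mask
    return masked
```

In a real protocol the pairwise masks would be derived from shared secrets (e.g. a key agreement plus a PRG) rather than generated centrally; this sketch only shows why the aggregate survives masking.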
Deep Learning with Differential Privacy
This work develops new algorithmic techniques for learning and a refined analysis of privacy costs within the framework of differential privacy, and demonstrates that deep neural networks can be trained with non-convex objectives, under a modest privacy budget, and at a manageable cost in software complexity, training efficiency, and model quality.
Multi-Institutional Deep Learning Modeling Without Sharing Patient Data: A Feasibility Study on Brain Tumor Segmentation
This study introduces the first use of federated learning for multi-institutional collaboration, enabling deep learning modeling without sharing patient data, and demonstrates that the performance of federated semantic segmentation models on multimodal brain scans is similar to that of models trained by sharing data.
Communication-Efficient Learning of Deep Networks from Decentralized Data
This work presents a practical method for the federated learning of deep networks based on iterative model averaging, and conducts an extensive empirical evaluation, considering five different model architectures and four datasets.
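The model-averaging step at the heart of this method (FedAvg) weights each client's parameters by the size of its local dataset. A minimal numpy sketch (the function name and toy values are illustrative):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """One FedAvg aggregation round: average client parameter vectors,
    weighting each by the size of that client's local dataset."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Two clients: the second holds 3x as much data, so its model dominates.
global_model = fed_avg([np.array([1.0, 1.0]), np.array([3.0, 3.0])], [1, 3])
```

In the full algorithm this averaging alternates with several epochs of local SGD on each client, which is what makes the method communication-efficient.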
Towards Federated Learning at Scale: System Design
A scalable production system for federated learning on mobile devices, built on TensorFlow, is described, along with the resulting high-level design, some of the challenges encountered, and their solutions.