Secure neuroimaging analysis using federated learning with homomorphic encryption

Dimitris Stripelis, Hamza Saleem, Tanmay Ghai, Nikhil J. Dhinagar, Umang Gupta, Chrysovalantis Anastasiou, Greg Ver Steeg, Srivatsan Ravi, Muhammad Naveed, Paul M. Thompson, J. Ambite
Symposium on Medical Information Processing and Analysis
Federated learning (FL) enables distributed computation of machine learning models over disparate, remote data sources, without requiring the transfer of any individual data to a centralized location. This improves model generalizability and lets computation scale efficiently as more sources and larger datasets join the federation. Nevertheless, recent membership inference attacks show that private or sensitive personal data can sometimes be leaked or inferred when model…
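The core primitive here can be illustrated with additively homomorphic encryption: the server combines ciphertexts to obtain an encryption of the sum of client updates without ever seeing an individual update. Below is a minimal textbook Paillier sketch for intuition only; the toy primes, scalar "updates", and omission of quantization and key management are all simplifications, not the paper's implementation, and this construction is NOT secure at these parameter sizes.

```python
import random
from math import gcd

# Toy Paillier cryptosystem (insecure parameters, illustration only):
# ciphertext multiplication corresponds to plaintext addition, so a
# server can aggregate encrypted model updates without decrypting them.

def lcm(a, b):
    return a * b // gcd(a, b)

def keygen(p=293, q=433):
    # toy primes; real deployments use moduli of 2048+ bits
    n = p * q
    g = n + 1                               # standard simplification for g
    lam = lcm(p - 1, q - 1)
    mu = pow((pow(g, lam, n * n) - 1) // n, -1, n)
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    return ((pow(c, lam, n * n) - 1) // n) * mu % n

pk, sk = keygen()
u1, u2 = 1234, 5678                          # two clients' quantized updates
c = (encrypt(pk, u1) * encrypt(pk, u2)) % (pk[0] ** 2)  # homomorphic add
assert decrypt(pk, sk, c) == u1 + u2         # server recovers only the sum
```

The key property is the last two lines: the server multiplies ciphertexts and the decrypted result is the sum of the plaintexts.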

Semi-Synchronous Federated Learning for Energy-Efficient Training and Accelerated Convergence in Cross-Silo Settings

A novel energy-efficient Semi-Synchronous Federated Learning protocol is introduced that periodically mixes local models with minimal idle time and fast convergence, significantly outperforming previous work in data- and computationally heterogeneous environments.
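As a rough illustration of the semi-synchronous idea (not the paper's exact policy): every learner trains for the same wall-clock budget, so faster machines take more local steps and no one sits idle, and the server then mixes the resulting models with weights proportional to the work done. The speeds, datasets, learning rate, and mixing rule below are invented for this sketch.

```python
# Toy simulation of one semi-synchronous federation round on scalar
# models: learners share a time budget, complete different numbers of
# local SGD steps depending on speed, and are mixed by work done.

def local_sgd(w, data, steps, lr=0.1):
    # minimize mean (w - x)^2 over the learner's local scalars
    for _ in range(steps):
        grad = sum(2 * (w - x) for x in data) / len(data)
        w -= lr * grad
    return w

def semi_sync_round(w_global, learners, time_budget=8.0):
    updates, weights = [], []
    for speed, data in learners:           # speed = local steps per time unit
        steps = int(speed * time_budget)   # all learners finish together
        updates.append(local_sgd(w_global, data, steps))
        weights.append(steps * len(data))  # illustrative mixing weight
    total = sum(weights)
    return sum(u * c / total for u, c in zip(updates, weights))

learners = [(4.0, [1.0, 2.0]), (1.0, [10.0])]   # fast silo vs slow silo
w = 0.0
for _ in range(5):
    w = semi_sync_round(w, learners)
# w settles between the two silos' local optima, pulled toward the
# learner that contributes more steps and examples
```

Because all learners stop at the same wall-clock deadline, the round has no stragglers; heterogeneity shows up as different step counts rather than idle time.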

A novel decentralized federated learning approach to train on globally distributed, poor quality, and protected private medical data

AI accuracy using this approach is found to be comparable to centralized training, and when nodes contain poor-quality data, which is common in healthcare, AI accuracy can exceed that of traditional centralized training.

Secure Federated Learning for Neuroimaging

A Secure Federated Learning architecture, MetisFL, is presented, which enables distributed training of neural networks over multiple data sources without sharing data, and is demonstrated in neuroimaging.

Decentralized Distributed Multi-institutional PET Image Segmentation Using a Federated Deep Learning Framework

Federated DL models could provide robust and generalizable segmentation, while addressing patient privacy and legal and ethical issues in clinical data sharing, and achieve comparable quantitative performance with respect to the centralized DL model.

Enabling Deep Learning for All-in EDGE paradigm

Key performance metrics for deep learning in the All-in EDGE paradigm are presented to evaluate various deep learning techniques and choose a suitable design, addressing the high computation, latency, and bandwidth demands that deep learning applications impose in real-world scenarios.

Privacy-Preserving Aggregation in Federated Learning: A Survey

This survey aims to bridge the gap between the large number of studies on privacy-preserving federated learning (PPFL), in which privacy-preserving aggregation (PPAgg) is adopted to provide a privacy guarantee, and the lack of a comprehensive survey of the PPAgg protocols applied in FL systems.
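One canonical PPAgg building block is pairwise additive masking: each pair of clients shares a random mask that one adds and the other subtracts, so every mask cancels in the server's sum and only the aggregate is revealed. The sketch below shows just this cancellation; real protocols also handle client dropout and derive the pairwise masks from key agreement, which is omitted here.

```python
import random

# Pairwise additive masking: client i adds mask m_ij, client j
# subtracts it, so the server's sum of masked updates equals the sum
# of the true updates while each individual update stays hidden.

def masked_updates(updates, seed=0):
    rng = random.Random(seed)          # stands in for shared pairwise PRGs
    n = len(updates)
    masked = list(updates)
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.randrange(1 << 16)  # mask shared by clients i and j
            masked[i] += m              # client i adds the mask
            masked[j] -= m              # client j subtracts the mask
    return masked

updates = [3, 7, 5]                     # clients' private scalar updates
masked = masked_updates(updates)
assert sum(masked) == sum(updates)      # server learns only the aggregate
```

Each `masked[i]` on its own is statistically unlinked from `updates[i]`, which is exactly the privacy guarantee PPAgg protocols formalize.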

Applying Federated Learning in Software-Defined Networks: A Survey

This paper presents a comprehensive survey of the mechanisms and solutions that enable FL in SDNs, which affect the quality and quantity of participants, the security and privacy of model transfer, and the performance of the global model, respectively.

Secure Publish-Process-Subscribe System for Dispersed Computing

This work presents XYZ, a secure publish-process-subscribe system that can preserve the confidentiality of computations and support multi-publisher-multi-subscriber settings, and designs two distinct schemes: the first using Yao’s garbled circuits and the second using homomorphic encryption with proxy re-encryption.

Robust and Privacy-Preserving Collaborative Learning: A Comprehensive Survey

This survey aims to provide a systematic and comprehensive review of security and privacy research in collaborative learning, opening with a system overview of collaborative learning followed by a brief introduction to integrity and privacy threats.

A Survey on Homomorphic Encryption Schemes: Theory and Implementation

The basics of HE and the details of the well-known Partially Homomorphic Encryption and Somewhat Homomorphic Encryption schemes, which are important pillars of achieving FHE, are presented, along with the main FHE families that have become the basis for follow-up FHE schemes.
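The "partially homomorphic" property the survey covers can be seen compactly with textbook RSA, which supports one operation (multiplication) on ciphertexts. The parameters below are toy-sized and the scheme is unpadded, so this is for intuition only, not secure use.

```python
# Textbook RSA is multiplicatively homomorphic:
# Enc(a) * Enc(b) mod n decrypts to a * b (as long as a*b < n).

p, q, e = 61, 53, 17
n = p * q                              # 3233 (toy modulus)
d = pow(e, -1, (p - 1) * (q - 1))      # private exponent

enc = lambda m: pow(m, e, n)
dec = lambda c: pow(c, d, n)

a, b = 7, 9
c = (enc(a) * enc(b)) % n              # multiply ciphertexts only
assert dec(c) == a * b                 # result decrypts to the product
```

Additive schemes such as Paillier support the other single operation; only FHE supports both addition and multiplication an arbitrary number of times, which is why the PHE/SHE schemes are described as stepping stones.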

Membership Inference Attacks on Deep Regression Models for Neuroimaging

It is demonstrated that allowing access to parameters may leak private information even if data is never directly shared, and feasible attacks on brain age prediction models (deep learning models that predict a person’s age from their brain MRI scan) are demonstrated.
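The loss-threshold flavor of membership inference can be demonstrated on a deliberately overfit toy model: a model that interpolates its training points exactly has near-zero loss on members and larger loss on non-members, so thresholding the per-sample loss reveals membership. The data and threshold below are invented for the sketch; the paper's attacks on brain age models are, of course, far more involved.

```python
# Loss-threshold membership inference against an overfit model:
# Lagrange interpolation through the training points gives exactly
# zero loss on members, so a tiny threshold separates them perfectly.

train = [(0.0, 1.0), (1.0, 3.0), (2.0, 2.0)]   # "members"
test  = [(0.5, 2.5), (1.5, 1.0)]               # "non-members"

def model(x):
    # exact interpolation through train points (overfit on purpose)
    y = 0.0
    for i, (xi, yi) in enumerate(train):
        term = yi
        for j, (xj, _) in enumerate(train):
            if i != j:
                term *= (x - xj) / (xi - xj)
        y += term
    return y

def loss(x, y):
    return (model(x) - y) ** 2

def is_member(x, y, threshold=1e-9):
    # the attacker only needs query access to the model's loss
    return loss(x, y) < threshold

assert all(is_member(x, y) for x, y in train)
assert not any(is_member(x, y) for x, y in test)
```

The attack needs nothing beyond model outputs, which is why parameter sharing in federated settings, where per-round models are visible, widens the attack surface.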

Privacy‐preserving federated learning based on multi‐key homomorphic encryption

The proposed xMK‐CKKS scheme prevents privacy leakage from publicly shared model updates in federated learning and is resistant to collusion between k < N − 1 participating devices and the server.

Scaling Neuroscience Research Using Federated Learning

This work describes the Federated Learning architecture and training policies and demonstrates the approach on a brain age prediction model on structural MRI scans distributed across multiple sites with diverse amounts of data and subject (age) distributions, using the Semi-Synchronous protocol.

Improved Brain Age Estimation With Slice-Based Set Networks

Experiments on the BrainAGE prediction problem showed that the model with the permutation invariant layers trains faster and provides better predictions compared to other state-of-the-art approaches.
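The permutation-invariance idea can be sketched with a DeepSets-style encoder: a shared map embeds each 2D slice, a symmetric pooling (here, the mean) aggregates the embeddings, and therefore the prediction cannot depend on slice order. The weights and "slices" below are arbitrary stand-ins, not a trained model or the paper's architecture.

```python
# Permutation-invariant set prediction: shared per-slice encoder,
# mean pooling over slice embeddings, linear readout. Reordering the
# slices cannot change the output.

def encode(slice_vec, W):
    # shared "encoder": one linear map followed by ReLU
    return [max(0.0, sum(w * x for w, x in zip(row, slice_vec))) for row in W]

def predict_age(slices, W, v):
    # mean-pool embeddings (order-invariant), then linear readout
    pooled = [sum(col) / len(slices)
              for col in zip(*(encode(s, W) for s in slices))]
    return sum(vi * pi for vi, pi in zip(v, pooled))

W = [[0.2, -0.1, 0.4], [0.3, 0.5, -0.2]]        # toy encoder weights
v = [1.0, -0.5]                                  # toy readout weights
slices = [[1.0, 2.0, 3.0], [0.5, 0.5, 0.5], [2.0, 1.0, 0.0]]

shuffled = slices[::-1]
assert abs(predict_age(slices, W, v) - predict_age(shuffled, W, v)) < 1e-9
```

Because the pooling is symmetric, the network also handles a variable number of slices per scan without architectural changes.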

Semi-Synchronous Federated Learning

A novel Semi-Synchronous Federated Learning protocol is introduced that periodically mixes local models with minimal idle time and fast convergence, significantly outperforming previous work in data- and computationally heterogeneous environments.

Fed-BioMed: A General Open-Source Frontend Framework for Federated Learning in Healthcare

This work proposes an open-source federated learning frontend framework based on a general architecture accommodating different models and optimization methods, presents software components for the clients and the central node, and illustrates the workflow for deploying learning models.

Weight Erosion: An Update Aggregation Scheme for Personalized Collaborative Machine Learning

It is demonstrated that the novel Weight Erosion aggregation scheme can outperform two baseline FL aggregation schemes on a classification task and is more resistant to overfitting and non-IID data sets.

Federated Gradient Averaging for Multi-Site Training with Momentum-Based Optimizers

Federated gradient averaging is implemented, a variant of federated learning without data transfer that is mathematically equivalent to single-site training with centralized data, and it achieves on average superior results to both FWA and cyclic weight transfer.
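The equivalence rests on a simple identity: the gradient of the mean loss over pooled data equals the example-weighted average of per-site gradients, so a server applying a momentum optimizer to the averaged gradient takes exactly the steps a single site would on centralized data. The model, loss, and data below are invented to verify just that identity.

```python
# Federated gradient averaging vs centralized training on a scalar
# linear model with MSE loss: the example-weighted average of site
# gradients equals the gradient over the pooled data.

def grad_mse(w, data):
    # d/dw of mean (w*x - y)^2 over (x, y) pairs
    return sum(2 * (w * x - y) * x for x, y in data) / len(data)

site_a = [(1.0, 2.0), (2.0, 3.0)]
site_b = [(3.0, 7.0)]
pooled = site_a + site_b

w = 0.5
n = len(pooled)
g_fed = (len(site_a) * grad_mse(w, site_a)
         + len(site_b) * grad_mse(w, site_b)) / n   # server-side average
g_central = grad_mse(w, pooled)                     # single-site gradient
assert abs(g_fed - g_central) < 1e-12
```

This is why gradient averaging, unlike weight averaging, composes cleanly with stateful momentum-based optimizers: the optimizer state lives in one place and sees the same gradient sequence as centralized training.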

On the Fairness of Privacy-Preserving Representations in Medical Applications

A framework for learning invariant fair representations is proposed that decomposes the learned representation into target and sensitive codes, imposing an entropy maximization constraint on the target code to make it invariant to sensitive information.