Corpus ID: 54195170

LoAdaBoost: Loss-Based AdaBoost Federated Machine Learning on medical Data

@article{Huang2018LoAdaBoostLA,
  title={LoAdaBoost: Loss-Based AdaBoost Federated Machine Learning on medical Data},
  author={Li Huang and Yifeng Yin and Zeng Fu and Shifa Zhang and Hao Deng and Dianbo Liu},
  journal={ArXiv},
  year={2018},
  volume={abs/1811.12629}
}
Medical data are valuable for improving health care, policy making and many other purposes. Vast amounts of medical data are stored in different locations, on many different devices and in different data silos. Sharing medical data among different sources is a big challenge due to regulatory, operational and security reasons. One potential solution is federated machine learning, which is a method that sends machine learning algorithms simultaneously to all data sources, trains models in each…
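The workflow the abstract describes (ship the algorithm to every silo, train locally, combine the results centrally) can be illustrated with a minimal sketch. The data, silo count, and the linear model trained by gradient descent are all hypothetical stand-ins for whatever each silo would actually train; only the aggregate models, never the raw records, reach the server:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, epochs=5):
    """One silo's local training: plain gradient descent on squared loss."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def federated_round(w_global, silos):
    """Each silo trains a copy of the global model on its own data; the
    server then averages the local models, weighted by silo size."""
    local_models = [local_update(w_global.copy(), X, y) for X, y in silos]
    sizes = np.array([len(y) for _, y in silos], dtype=float)
    fracs = sizes / sizes.sum()
    return sum(f * w_k for f, w_k in zip(fracs, local_models))

# Three hypothetical hospitals, each keeping its data in its own silo.
true_w = np.array([1.0, -2.0])
silos = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.01 * rng.normal(size=50)
    silos.append((X, y))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, silos)
# w ends up close to true_w although no raw records ever left a silo
```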
Federated Learning for Healthcare: Systematic Review and Architecture Proposal
TLDR: A systematic literature review of current research on federated learning in the context of EHR data for healthcare applications, and a discussion of a general architecture for FL applied to healthcare data based on the main insights obtained from the review.
Reliability and Performance Assessment of Federated Learning on Clinical Benchmark Data
TLDR: Assessments of the reliability and performance of federated learning on benchmark datasets, including MNIST and MIMIC-III, and on datasets simulating a realistic clinical data distribution show that FL can be suitable for protecting privacy when applied to medical data.
FedSemi: An Adaptive Federated Semi-Supervised Learning Framework
TLDR: FedSemi is proposed, a novel, adaptive, and general framework that first introduces consistency regularization into FL using a teacher-student model and further proposes a new metric to measure the divergence of local model layers.
FedSiam: Towards Adaptive Federated Semi-Supervised Learning
TLDR: The proposed FedSiam framework builds a siamese network with a momentum update into FL to handle the non-IID challenges introduced by unlabeled data, and a new metric is proposed to measure the divergence of local model layers within the siamese network.
Two-stage Federated Phenotyping and Patient Representation Learning
TLDR: A two-stage federated natural language processing method is developed that enables utilization of clinical notes from different hospitals or clinics without moving the data, and its performance is demonstrated using obesity and comorbidity phenotyping as the medical task.
Federated Learning Systems for Healthcare: Perspective and Recent Progress
TLDR: The primary objective of the chapter is to highlight the adaptability and working of FL techniques in the healthcare system, especially in drug development, clinical diagnosis, digital health monitoring, and various disease prediction and detection systems.
FedTriNet: A Pseudo Labeling Method with Three Players for Federated Semi-supervised Learning
TLDR: This paper proposes a novel federated semi-supervised learning method named FedTriNet, which consists of two learning phases and uses three networks and a dynamic quality control mechanism to generate high-quality pseudo labels for unlabeled data, which are added to the training set.
FedCon: A Contrastive Framework for Federated Semi-Supervised Learning
TLDR: The proposed FedCon, which introduces a new learning paradigm, i.e., contrastive learning, to FedSSL, achieves the best performance compared with state-of-the-art baselines under both IID and non-IID settings.
FedSAE: A Novel Self-Adaptive Federated Learning Framework in Heterogeneous Systems
TLDR: A novel self-adaptive federated framework, FedSAE, is introduced that adjusts the training task of devices automatically and selects participants actively to alleviate performance degradation; experimental results indicate that in a highly heterogeneous system FedSAE converges faster than FedAvg, the vanilla FL framework.

References

Showing 1-10 of 26 references
FADL: Federated-Autonomous Deep Learning for Distributed Electronic Health Record
TLDR: It is shown, using ICU data from 58 different hospitals, that machine learning models to predict patient mortality can be trained efficiently without moving health data out of their silos using a distributed machine learning strategy.
Federated Learning with Non-IID Data
TLDR: This work presents a strategy to improve training on non-IID data by creating a small subset of data which is globally shared between all the edge devices, and shows that accuracy can be increased by 30% for the CIFAR-10 dataset with only 5% globally shared data.
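The data-sharing strategy summarized above can be sketched in a few lines. The three-class toy setup, silo sizes, and 5% share fraction below are illustrative (the original work experiments on CIFAR-10); the point is that a small balanced subset pushed to every device removes the pathological label skew:

```python
import numpy as np

rng = np.random.default_rng(1)

# Pathologically non-IID partition: each edge device holds one class only.
X = rng.normal(size=(300, 2))
y = np.repeat([0, 1, 2], 100)
devices = [(X[y == c], y[y == c]) for c in range(3)]

# Globally shared subset: a small balanced sample (here 5% of the data)
# distributed to every device before federated training starts.
shared_idx = np.concatenate([np.flatnonzero(y == c)[:5] for c in range(3)])
X_shared, y_shared = X[shared_idx], y[shared_idx]

augmented = [
    (np.vstack([Xd, X_shared]), np.concatenate([yd, y_shared]))
    for Xd, yd in devices
]

# Before sharing, each device saw a single class; afterwards every device
# sees all three, which reduces the drift between local and global updates.
classes_before = [set(yd.tolist()) for _, yd in devices]
classes_after = [set(yd.tolist()) for _, yd in augmented]
```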
Two-stage Federated Phenotyping and Patient Representation Learning
TLDR: A two-stage federated natural language processing method is developed that enables utilization of clinical notes from different hospitals or clinics without moving the data, and its performance is demonstrated using obesity and comorbidity phenotyping as the medical task.
Artificial neural networks condensation: A strategy to facilitate adaption of machine learning in medical settings by reducing computational burden
TLDR: This project explored methods to increase the computational efficiency of ML algorithms, in particular artificial neural networks (NNs), without compromising the accuracy of the predicted results, and found that some of them even achieved higher accuracy than the pre-condensed baseline models.
Federated Multi-Task Learning
TLDR: This work shows that multi-task learning is naturally suited to handle the statistical challenges of this setting, and proposes a novel systems-aware optimization method, MOCHA, that is robust to practical systems issues.
How To Backdoor Federated Learning
TLDR: This work designs and evaluates a new model-poisoning methodology based on model replacement and demonstrates that any participant in federated learning can introduce hidden backdoor functionality into the joint global model, e.g., to ensure that an image classifier assigns an attacker-chosen label to images with certain features.
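A toy sketch of the scaling trick behind model replacement (not the paper's full attack, which targets trained neural networks and survives further averaging rounds): when the server plainly averages the n submitted models, a single attacker can choose its submission so the new global model lands exactly on a target of its choosing. All numbers below are illustrative:

```python
import numpy as np

n = 10                                       # participants in this round
w_global = np.zeros(4)                       # current global model
w_target = np.array([5.0, -3.0, 2.0, 1.0])  # attacker's backdoored model

# Nine honest participants submit models close to the current global model.
honest = [w_global + 0.01 * np.ones(4) for _ in range(n - 1)]

# The attacker solves (sum(honest) + attacker) / n == w_target for its own
# submission, which amounts to scaling the backdoored model up by about n.
attacker = n * w_target - sum(honest)

new_global = (sum(honest) + attacker) / n    # lands exactly on w_target
```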
Federated Optimization: Distributed Optimization Beyond the Datacenter
We introduce a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are distributed (unevenly) over an extremely large number of nodes.
Federated Learning: Strategies for Improving Communication Efficiency
TLDR: Two ways to reduce the uplink communication costs are proposed: structured updates, where the user directly learns an update from a restricted space parametrized using a smaller number of variables, e.g. either low-rank or a random mask; and sketched updates, which learn a full model update and then compress it using a combination of quantization, random rotations, and subsampling.
Communication-Efficient Learning of Deep Networks from Decentralized Data
TLDR: This work presents a practical method for the federated learning of deep networks based on iterative model averaging, and conducts an extensive empirical evaluation, considering five different model architectures and four datasets.