Decentralized Federated Learning Preserves Model and Data Privacy

@inproceedings{Wittkopp2020DecentralizedFL,
  title={Decentralized Federated Learning Preserves Model and Data Privacy},
  author={Thorsten Wittkopp and Alexander Acker},
  booktitle={ICSOC Workshops},
  year={2020}
}
The increasing complexity of IT systems requires solutions that support operations in case of failure. Therefore, Artificial Intelligence for System Operations (AIOps) is a field of research that is receiving increasing attention in both academia and industry. One of the major issues in this area is the lack of access to adequately labeled data, largely due to legal protection regulations or industrial confidentiality. Methods to mitigate this stem from the area of federated learning…
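The abstract is truncated here, so the paper's concrete protocol is not shown. As a loose illustration of the general idea behind decentralized federated learning, the sketch below has peers alternate between training on their private data and gossip-averaging model parameters with their neighbors, so neither raw data nor a central server is involved; the function names (`local_update`, `gossip_average`), the linear model, and the ring topology are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of decentralized (serverless) federated learning via gossip
# averaging. Illustrative only: names and the update rule are assumptions,
# not the protocol from Wittkopp & Acker (2020).
import numpy as np

def local_update(weights, X, y, lr=0.1):
    """One epoch of least-squares SGD on a peer's private data (stays local)."""
    for xi, yi in zip(X, y):
        grad = (xi @ weights - yi) * xi     # gradient of 1/2 (x·w - y)^2
        weights = weights - lr * grad
    return weights

def gossip_average(all_weights, neighbors):
    """Each peer averages its model with its neighbors' models; no central server."""
    return [
        np.mean([all_weights[j] for j in ([i] + neighbors[i])], axis=0)
        for i in range(len(all_weights))
    ]

# Three peers with private data; fully connected toy topology.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
data = []
for _ in range(3):
    X = rng.normal(size=(20, 2))
    data.append((X, X @ true_w + rng.normal(scale=0.1, size=20)))
weights = [np.zeros(2) for _ in range(3)]
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}

for _ in range(10):                         # alternate local training and gossip
    weights = [local_update(w, X, y) for w, (X, y) in zip(weights, data)]
    weights = gossip_average(weights, neighbors)

print(weights[0])                           # each peer converges near true_w
```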
Applying Federated Learning in Software-Defined Networks: A Survey
TLDR
This paper provides a comprehensive survey of the mechanisms and solutions that enable FL in SDNs, which affect the quality and quantity of participants, the security and privacy of model transfer, and the performance of the global model, respectively.
PPT: A Privacy-Preserving Global Model Training Protocol for Federated Learning in P2P Networks
TLDR
Through extensive analysis, this paper demonstrates that PPT resists various security threats and preserves user privacy, and adopts Neighborhood Broadcast, Supervision and Report, and Termination as complementary mechanisms to enhance security and robustness.
ProxyFL: Decentralized Federated Learning through Proxy Model Sharing
TLDR
Experiments on popular image datasets, and a pan-cancer diagnostic problem using over 30,000 high-quality gigapixel histology whole slide images, show that ProxyFL can outperform existing alternatives with much less communication overhead and stronger privacy.
LogLAB: Attention-Based Labeling of Log Data Anomalies via Weak Supervision
TLDR
This work presents LogLAB, a novel modeling approach for the automated labeling of log messages without requiring manual work by experts; it relies on estimated failure time windows provided by monitoring systems to produce precisely labeled datasets in retrospect.
Federated Quantum Machine Learning
TLDR
The distributed federated learning scheme achieved almost the same trained-model accuracy with significantly faster distributed training, demonstrating a promising future research direction for scaling and privacy aspects.
A2Log: Attentive Augmented Log Anomaly Detection
TLDR
A2Log is an unsupervised anomaly detection method consisting of two steps, anomaly scoring and anomaly decision; it outperforms existing methods and can reach the scores of the strong baselines.
Distributed Deep Learning in Open Collaborations
TLDR
This work carefully analyzes the constraints of collaborative training and proposes a novel algorithmic framework designed specifically for it; on SwAV and ALBERT pretraining in realistic conditions, it achieves performance comparable to traditional setups at a fraction of the cost.
Artificial Intelligence for IT Operations (AIOPS) Workshop White Paper
TLDR
The main aim of the AIOPS workshop is to bring together researchers from academia and industry to present their experiences, results, and work in progress, strengthening the community and uniting its efforts toward solving the main challenges the field currently faces.
Training Data Reduction for Performance Models of Data Analytics Jobs in the Cloud
TLDR
This paper examines several clustering techniques to minimize training data size while keeping the associated performance models accurate, and indicates that efficiency gains in data transfer, storage, and model training can be achieved through training data reduction.
Emphasizing privacy and security of edge intelligence with machine learning for healthcare
TLDR
The primary focus of edge computing is decentralization: bringing intelligence to IoT devices to provide real-time computing at the point of presence (PoP), helping doctors predict abnormalities and provide customized treatment based on the patient's electronic health record (EHR).

References

Showing 1-10 of 24 references
Federated Machine Learning: Concept and Applications
TLDR
This work proposes building data networks among organizations based on federated mechanisms as an effective solution to allow knowledge to be shared without compromising user privacy.
Federated Machine Learning
TLDR
This work introduces a secure federated-learning framework that includes horizontal federated learning, vertical federated learning, and federated transfer learning, and provides a comprehensive survey of existing work on the subject.
Differentially Private Federated Learning: A Client Level Perspective
TLDR
The aim is to hide clients' contributions during training, balancing the trade-off between privacy loss and model performance, and empirical studies suggest that given a sufficiently large number of participating clients, this procedure can maintain client-level differential privacy at only a minor cost in model performance.
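As a rough illustration of the client-level mechanism summarized above (hiding individual clients' contributions), the sketch below clips each client's model update to a fixed L2 norm and adds Gaussian noise to the average; the clip norm, noise multiplier, and function names are placeholder assumptions rather than the paper's exact procedure or privacy accounting.

```python
# Minimal sketch of client-level differential privacy in federated averaging:
# clip each client's update to a fixed L2 norm, then add Gaussian noise to the
# aggregate. Clip norm and noise multiplier are placeholder values.
import numpy as np

def clip_update(update, clip_norm=1.0):
    """Scale a client's update so its L2 norm is at most clip_norm."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))

def dp_aggregate(client_updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Average clipped updates and add Gaussian noise calibrated to the clip norm."""
    rng = rng or np.random.default_rng()
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    mean = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(client_updates)
    return mean + rng.normal(scale=sigma, size=mean.shape)

# Example: five clients each send a (possibly large) update vector.
rng = np.random.default_rng(1)
updates = [rng.normal(scale=s, size=10) for s in (0.5, 1.0, 2.0, 5.0, 0.1)]
global_delta = dp_aggregate(updates, clip_norm=1.0, noise_multiplier=1.1, rng=rng)
print(global_delta)
```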
Advances and Open Problems in Federated Learning
TLDR
Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges.
Sparsified Privacy-Masking for Communication-Efficient and Privacy-Preserving Federated Learning
TLDR
An explicit end-to-end privacy guarantee for CPFed is provided using zero-concentrated differential privacy, and its theoretical convergence rates for both convex and non-convex models are given.
Poisoning Attacks on Federated Learning-based IoT Intrusion Detection System
TLDR
It is shown that FL-based IoT intrusion detection systems are vulnerable to backdoor attacks, and a novel data poisoning attack is presented that allows an adversary to implant a backdoor into the aggregated detection model to incorrectly classify malicious traffic as benign.
Chained Anomaly Detection Models for Federated Learning: An Intrusion Detection Case Study
TLDR
A permissioned blockchain-based federated learning method where incremental updates to an anomaly detection machine learning model are chained together on the distributed ledger, supporting the auditing of machine learning models without the need to centralize the training data.
Privacy-preserving deep learning
  • R. ShokriVitaly Shmatikov
  • Computer Science
    2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton)
  • 2015
TLDR
This paper presents a practical system that enables multiple parties to jointly learn an accurate neural-network model for a given objective without sharing their input datasets, and exploits the fact that the optimization algorithms used in modern deep learning, namely, those based on stochastic gradient descent, can be parallelized and executed asynchronously.
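A rough sketch of the selective-sharing idea this summary alludes to: each party computes gradients on its own data and uploads only a small fraction of the entries (here, the largest by magnitude) to a shared parameter store, rather than uploading the data itself. The fraction, learning rate, and helper names are illustrative assumptions, not the paper's exact protocol.

```python
# Rough sketch of selective gradient sharing: a party uploads only a fraction
# of its gradient coordinates (here the largest by magnitude) to a shared
# parameter store instead of its data. Fractions and names are illustrative.
import numpy as np

def select_fraction(grad, fraction=0.1):
    """Return (indices, values) of the top `fraction` of gradient entries by magnitude."""
    k = max(1, int(fraction * grad.size))
    idx = np.argsort(np.abs(grad))[-k:]
    return idx, grad[idx]

def apply_shared(global_params, idx, values, lr=0.01):
    """Apply another party's partial gradient to the shared parameters."""
    global_params[idx] -= lr * values
    return global_params

rng = np.random.default_rng(2)
global_params = np.zeros(100)
local_grad = rng.normal(size=100)          # computed on a party's private data
idx, vals = select_fraction(local_grad, fraction=0.1)
global_params = apply_shared(global_params, idx, vals)
print(idx.size, "of", local_grad.size, "gradient entries shared")
```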
Practical Secure Aggregation for Federated Learning on User-Held Data
TLDR
This work considers training a deep neural network in the Federated Learning model, using distributed stochastic gradient descent across user-held training data on mobile devices, wherein Secure Aggregation protects each user's model gradient.
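A toy sketch of the pairwise-masking idea underlying secure aggregation: every pair of users agrees on a random mask that one adds and the other subtracts, so the masks cancel in the server's sum and only the aggregate is revealed. The real protocol additionally handles dropouts and derives masks via key agreement; none of that is modeled here, and all names are illustrative.

```python
# Toy sketch of pairwise masking for secure aggregation: masks cancel in the
# sum, so the server learns only the aggregate of the users' updates.
import numpy as np

def masked_updates(updates, rng):
    n = len(updates)
    masked = [u.astype(float).copy() for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            mask = rng.normal(size=updates[i].shape)
            masked[i] += mask              # user i adds the pairwise mask
            masked[j] -= mask              # user j subtracts the same mask
    return masked

rng = np.random.default_rng(3)
updates = [rng.normal(size=4) for _ in range(3)]
masked = masked_updates(updates, rng)
# Individual masked vectors reveal little, but their sum equals the true sum.
print(np.allclose(sum(masked), sum(updates)))   # True
```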
Communication-Efficient Learning of Deep Networks from Decentralized Data
TLDR
This work presents a practical method for the federated learning of deep networks based on iterative model averaging, and conducts an extensive empirical evaluation, considering five different model architectures and four datasets.
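A minimal sketch of iterative model averaging in the spirit of federated averaging, assuming a simple linear model: the server sends the global weights to clients, each trains locally on its private data, and the server takes a data-size-weighted average of the returned models. The model, hyperparameters, and helper names are illustrative assumptions, not the paper's experimental setup.

```python
# Minimal sketch of federated averaging (iterative model averaging): clients
# train locally, the server averages the returned models weighted by data size.
import numpy as np

def client_train(weights, X, y, lr=0.1, epochs=5):
    """Local least-squares SGD on one client's private data."""
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            weights = weights - lr * (xi @ weights - yi) * xi
    return weights

def federated_averaging(global_w, clients, rounds=5):
    for _ in range(rounds):
        local_models, sizes = [], []
        for X, y in clients:                       # in practice: a random sample of clients
            local_models.append(client_train(global_w.copy(), X, y))
            sizes.append(len(y))
        global_w = np.average(local_models, axis=0, weights=sizes)
    return global_w

rng = np.random.default_rng(4)
true_w = np.array([1.0, 3.0])
clients = []
for n in (10, 30, 20):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w + rng.normal(scale=0.1, size=n)))
print(federated_averaging(np.zeros(2), clients))   # close to true_w
```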