Vulnerabilities in Federated Learning

@article{Bouacida2021VulnerabilitiesIF,
  title={Vulnerabilities in Federated Learning},
  author={Nader Bouacida and Prasant Mohapatra},
  journal={IEEE Access},
  year={2021},
  volume={9},
  pages={63229-63249},
  url={https://api.semanticscholar.org/CorpusID:233465558}
}
A comprehensive survey of the unique security vulnerabilities exposed by the FL ecosystem is provided, highlighting the sources of vulnerabilities, key attacks on FL, defenses and their unique challenges, and discussing promising future research directions toward more robust FL.


Security of Federated Learning: Attacks, Defensive Mechanisms, and Challenges

This paper seeks to provide a holistic view of FL's security concerns and outlines the most important attacks and vulnerabilities that are highly relevant to FL systems.

Federated Learning with Privacy-preserving and Model IP-right-protection

FedIPR, a novel ownership verification scheme, is introduced: it embeds watermarks into FL models to verify the ownership of those models and protect model intellectual property rights (IPR).
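As a hedged illustration of the feature-sign watermarking idea behind such schemes, the sketch below (in numpy) embeds a binary watermark into a weight slice through a secret projection; the extraction matrix E, watermark b, and hinge-style penalty are illustrative assumptions, not FedIPR's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a slice of model weights w, a secret projection E,
# and a binary watermark b in {-1, +1}^k known only to the model owner.
d, k = 256, 32
w = rng.normal(size=d)               # flattened weight slice of the FL model
E = rng.normal(size=(k, d))          # secret extraction matrix (owner's key)
b = rng.choice([-1.0, 1.0], size=k)  # watermark bits

def embed_loss(w, E, b, margin=0.1):
    """Hinge-style penalty pushing sign(E @ w) to agree with b."""
    return np.maximum(0.0, margin - b * (E @ w)).sum()

# Embed the watermark by gradient descent on the penalty alone
# (in practice this term is added to the normal training loss).
for _ in range(200):
    proj = E @ w
    grad = -(E.T @ (b * ((b * proj) < 0.1)))  # subgradient of the hinge
    w -= 0.05 * grad

# Verification: extract the bits and measure agreement with the watermark.
extracted = np.sign(E @ w)
print("bit agreement:", (extracted == b).mean())  # ~1.0 after embedding
```

Ownership is then claimed by showing that the extracted bits match the registered watermark far above chance level.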

Federated Learning Privacy: Attacks, Defenses, Applications, and Policy Landscape - A Survey

This survey paper provides a comprehensive literature review of the different privacy attacks and defense methods in federated learning, identifying the current limitations of these attacks and highlighting the settings in which FL client privacy can be broken.

Decentralized Federated Learning: A Survey on Security and Privacy

This survey studies possible variations of threats and adversaries in decentralized federated learning and overviews potential defense mechanisms; trustability and verifiability of decentralized federated learning are also considered.

Security and Privacy Issues and Solutions in Federated Learning for Digital Healthcare

Vulnerabilities, attacks, and defenses arising from the widened attack surface are presented, and promising new research directions toward more robust FL are suggested.

Efficient Verifiable Protocol for Privacy-Preserving Aggregation in Federated Learning

A communication-efficient protocol for secure aggregation of model parameters in a federated learning setting is proposed, where training is done on user devices while the aggregated trained model can be constructed on the server side without revealing users' raw data.
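The core cancellation trick underlying many secure-aggregation protocols can be sketched in a few lines. This is a hedged toy version with pairwise pseudorandom masks, assuming pre-shared seeds and no client dropouts; it is not the paper's actual protocol, which additionally provides verifiability:

```python
import numpy as np

n_clients, dim = 4, 8
rng = np.random.default_rng(42)
updates = [rng.normal(size=dim) for _ in range(n_clients)]

# Assume every pair (i, j) has already agreed on a shared seed
# (e.g., via a Diffie-Hellman key exchange in a real protocol).
seeds = {(i, j): 1000 * i + j for i in range(n_clients)
         for j in range(i + 1, n_clients)}

def masked_update(i):
    """Client i adds +PRG(seed_ij) for j > i and -PRG(seed_ij) for j < i."""
    masked = updates[i].copy()
    for j in range(n_clients):
        if j == i:
            continue
        key = (min(i, j), max(i, j))
        mask = np.random.default_rng(seeds[key]).normal(size=dim)
        masked += mask if i < j else -mask
    return masked

# The server sums the masked updates; the pairwise masks cancel exactly,
# so it learns only the aggregate, never an individual client's update.
aggregate = sum(masked_update(i) for i in range(n_clients))
assert np.allclose(aggregate, sum(updates))
```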

SGDE: Secure Generative Data Exchange for Cross-Silo Federated Learning

SGDE, a generative data exchange protocol that improves user security and machine learning performance in a cross-silo federation, is presented; it is shown to improve task accuracy and fairness, as well as resilience to the most influential attacks on federated learning.
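As a hedged illustration of the generative-exchange idea only (SGDE itself trains proper deep generative models with differential-privacy guarantees), each silo can fit a simple generative model locally and share synthetic samples instead of raw records or gradients:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy cross-silo setting: each silo holds private 2-D data.
silo_data = [rng.normal(loc=c, scale=0.5, size=(500, 2)) for c in (0.0, 3.0, 6.0)]

def fit_generator(data):
    """Stand-in 'generator': a Gaussian fit to the silo's data.
    (Illustrative only; SGDE uses DP-trained deep generators.)"""
    mean, cov = data.mean(axis=0), np.cov(data, rowvar=False)
    return lambda m: rng.multivariate_normal(mean, cov, size=m)

# Silos exchange synthetic samples rather than raw data or model updates.
generators = [fit_generator(d) for d in silo_data]
shared_pool = np.vstack([g(200) for g in generators])

# Any participant can now train on the pooled synthetic data.
print("synthetic training set:", shared_pool.shape)  # (600, 2)
```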

Federated Learning Attacks and Defenses: A Survey

This paper systematically surveys the possible attacks on current FL systems and their corresponding defenses, dividing attack approaches into two categories according to whether they target the training stage or the prediction stage of machine learning.

Towards secure and reliable aggregation for Federated Learning protocols in healthcare applications

This work highlights the security challenges in FL systems and proposes a conceptual solution for a secure and efficient FL protocol based on defensive and compression mechanisms, constituting a significant step toward a reliable aggregation method specifically designed for healthcare.

FedSec: Advanced Threat Detection System for Federated Learning Frameworks

This paper centers on detecting specific types of attacks, addressing model poisoning, data poisoning, and Sybil attacks.
...

Threats to Federated Learning: A Survey

This paper provides a concise introduction to the concept of FL and a unique taxonomy covering threat models and two major classes of attacks on FL, 1) poisoning attacks and 2) inference attacks, offering an accessible review of this important topic.

On Safeguarding Privacy and Security in the Framework of Federated Learning

This work analyzes the privacy and security issues in FL, discusses several challenges to preserving privacy and security when designing FL systems, and provides extensive simulation results to showcase the discussed issues and possible solutions.

Toward Smart Security Enhancement of Federated Learning Networks

A verify-before-aggregate procedure is developed to identify and remove non-benign training results from edge devices (EDs), and a smart security enhancement framework is presented to protect federated learning networks (FLNs) effectively and efficiently.
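A minimal sketch of one way such a verify-before-aggregate step could work, assuming cosine-similarity screening against the coordinate-wise median update; the screening rule here is an illustrative assumption, not the paper's exact method:

```python
import numpy as np

rng = np.random.default_rng(1)

# Nine benign updates pointing roughly the same way, one poisoned outlier.
benign = [np.ones(16) + 0.1 * rng.normal(size=16) for _ in range(9)]
poisoned = [-5.0 * np.ones(16)]
updates = np.array(benign + poisoned)

def verify_before_aggregate(updates, threshold=0.5):
    """Keep only updates whose cosine similarity to the coordinate-wise
    median update exceeds the threshold, then average the survivors."""
    reference = np.median(updates, axis=0)
    ref_unit = reference / np.linalg.norm(reference)
    sims = updates @ ref_unit / np.linalg.norm(updates, axis=1)
    accepted = updates[sims > threshold]
    return accepted.mean(axis=0), sims

aggregate, sims = verify_before_aggregate(updates)
print("similarities:", np.round(sims, 2))   # the poisoned update scores ~ -1
print("aggregate close to benign mean:", np.allclose(aggregate, 1.0, atol=0.2))
```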

A Hybrid Approach to Privacy-Preserving Federated Learning

This paper presents an alternative approach that utilizes both differential privacy and secure multiparty computation (SMC) to balance these trade-offs, reducing the growth of noise injection as the number of parties increases without sacrificing privacy, while maintaining a pre-defined rate of trust.
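The key quantitative point, that per-party noise can shrink as the party count grows once SMC hides individual contributions, can be sketched as follows (a toy numpy model of the variance accounting, not the paper's threshold-encryption protocol):

```python
import numpy as np

rng = np.random.default_rng(3)
sigma_central = 1.0  # noise scale the aggregate must carry for the DP target

def noisy_sum(n_parties, dim=10_000):
    """Each party adds Gaussian noise of scale sigma/sqrt(n); because SMC
    reveals only the sum, the total noise variance stays at sigma^2
    regardless of n, so more parties means less noise per party."""
    values = rng.normal(size=(n_parties, dim))
    noise = rng.normal(scale=sigma_central / np.sqrt(n_parties),
                       size=(n_parties, dim))
    return (values + noise).sum(axis=0), values.sum(axis=0)

for n in (2, 10, 100):
    noisy, exact = noisy_sum(n)
    print(f"n={n:3d}  total-noise std ~ {np.std(noisy - exact):.3f}")  # ~1.0 each
```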

Analyzing User-Level Privacy Attack Against Federated Learning

This paper makes the first attempt to explore user-level privacy leakage through an attack from a malicious server, and proposes a framework incorporating a GAN with a multi-task discriminator, called multi-task GAN with Auxiliary Identification (mGAN-AI), which simultaneously discriminates the category, reality, and client identity of input samples.
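A structural sketch (assuming PyTorch) of what such a multi-task discriminator can look like: one shared backbone with three heads for reality, category, and client identity. Layer sizes and architecture here are illustrative assumptions, not the paper's exact network:

```python
import torch
import torch.nn as nn

class MultiTaskDiscriminator(nn.Module):
    """Shared backbone with three heads, mirroring the idea of jointly
    judging reality, category, and client identity of a sample."""
    def __init__(self, in_dim=784, n_classes=10, n_clients=5):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 128), nn.LeakyReLU(0.2),
        )
        self.real_head = nn.Linear(128, 1)            # real vs. generated
        self.class_head = nn.Linear(128, n_classes)   # sample category
        self.client_head = nn.Linear(128, n_clients)  # originating client

    def forward(self, x):
        h = self.backbone(x)
        return self.real_head(h), self.class_head(h), self.client_head(h)

d = MultiTaskDiscriminator()
real_logit, class_logits, client_logits = d(torch.randn(4, 784))
print(real_logit.shape, class_logits.shape, client_logits.shape)
```

The client-identity head is what lifts the attack from class-level to user-level leakage: the generator can be steered toward samples attributed to one specific victim client.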

Privacy Leakage of Real-World Vertical Federated Learning

This paper considers an honest-but-curious adversary who participates in training a distributed ML model, does not deviate from the defined learning protocol, but attempts to infer private training data from the legitimately received information.

Dynamic backdoor attacks against federated learning

This paper bridges meta-learning and backdoor attacks in the FL setting, in which case the algorithm can learn a versatile model from previous experiences and adapt quickly to new adversarial tasks with only a few examples.
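For context, the basic backdoor mechanics such attacks build on can be sketched simply: a static pixel-pattern trigger in numpy. The paper's contribution is making the attack adapt dynamically via meta-learning, which this toy does not show:

```python
import numpy as np

rng = np.random.default_rng(5)

def poison_batch(images, labels, target_class=7, frac=0.2):
    """Stamp a small white square into a fraction of the images and
    relabel those images to the attacker's target class."""
    images, labels = images.copy(), labels.copy()
    n_poison = int(frac * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -4:, -4:] = 1.0   # 4x4 trigger patch in the corner
    labels[idx] = target_class
    return images, labels

images = rng.random(size=(64, 28, 28))
labels = rng.integers(0, 10, size=64)
poisoned_x, poisoned_y = poison_batch(images, labels)
print("labels flipped to 7:", (poisoned_y == 7).sum(), "of 64 (includes natural 7s)")
```

A model trained on such data behaves normally on clean inputs but predicts the target class whenever the trigger is present.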

A Framework for Evaluating Gradient Leakage Attacks in Federated Learning

This paper provides formal and experimental analysis showing how adversaries can reconstruct private local training data simply by analyzing the shared parameter updates from local training; it also measures, evaluates, and analyzes the effectiveness of client privacy leakage attacks under different gradient compression ratios when using communication-efficient FL protocols.
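A minimal demonstration of why shared gradients leak data, assuming a single sample through one fully connected layer: the weight gradient is a rank-one outer product, so the input can be read off by division. This is a well-known analytic special case; optimization-based attacks like those the paper evaluates generalize it to deeper models:

```python
import numpy as np

rng = np.random.default_rng(9)

# Private sample x passing through a linear layer y = W @ x + b,
# followed by some scalar loss L with backpropagated gradient dL/dy.
d_in, d_out = 20, 10
x = rng.normal(size=d_in)       # the client's private input
dL_dy = rng.normal(size=d_out)  # error signal arriving at the layer

# Gradients the client would share with the server:
dL_dW = np.outer(dL_dy, x)      # rank-one: (dL/dy) x^T
dL_db = dL_dy

# Server-side reconstruction: each row of dL/dW is x scaled by dL/db[i],
# so dividing any row with a nonzero bias gradient recovers x exactly.
i = int(np.argmax(np.abs(dL_db)))
x_reconstructed = dL_dW[i] / dL_db[i]
print("exact recovery:", np.allclose(x_reconstructed, x))  # True
```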
...