Corpus ID: 212644490

Security of Distributed Machine Learning: A Game-Theoretic Approach to Design Secure DSVM

@article{Zhang2020SecurityOD,
  title={Security of Distributed Machine Learning: A Game-Theoretic Approach to Design Secure DSVM},
  author={Rui Zhang and Quanyan Zhu},
  journal={ArXiv},
  year={2020},
  volume={abs/2003.04735}
}
Distributed machine learning algorithms play a significant role in processing massive data sets over large networks. However, the increasing reliance of machine learning on information and communication technologies (ICTs) makes it inherently vulnerable to cyber threats. This work aims to develop secure distributed algorithms to protect the learning process from data poisoning and network attacks. We establish a game-theoretic framework to capture the conflicting goals of a learner who uses distributed… 
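The abstract is truncated above; as a rough orientation, the learner-attacker conflict in this line of work is typically written as a minimax problem over a consensus-based distributed SVM. The sketch below uses assumed notation (per-node hyperplanes (w_v, b_v), hinge loss, attacker perturbations δ restricted to a set Δ, consensus constraints over network edges E) and is not the paper's exact formulation.

```latex
% Generic learner-vs-attacker minimax sketch for a consensus DSVM (assumed notation).
% The learner minimizes the regularized hinge loss across nodes; the attacker
% maximizes it by perturbing training points within a constraint set \Delta.
\begin{equation*}
\min_{\{\mathbf{w}_v,\, b_v\}} \;\; \max_{\{\boldsymbol{\delta}_{vn}\} \in \Delta} \;
\sum_{v \in \mathcal{V}} \Bigg[ \frac{1}{2}\|\mathbf{w}_v\|^2
 + C \sum_{n=1}^{N_v} \max\!\Big(0,\; 1 - y_{vn}\big(\mathbf{w}_v^{\top}(\mathbf{x}_{vn} + \boldsymbol{\delta}_{vn}) + b_v\big)\Big) \Bigg]
\quad \text{s.t. } \mathbf{w}_v = \mathbf{w}_u,\; b_v = b_u \;\; \forall (v,u) \in \mathcal{E}.
\end{equation*}
```
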
3 Citations

Distributed Machine Learning for Wireless Communication Networks: Techniques, Architectures, and Applications

The latest applications of DML in power control, spectrum management, user association, and edge cloud computing, and the potential adversarial attacks faced by DML applications are reviewed, and state-of-the-art countermeasures to preserve privacy and security are described.

Pervasive AI for IoT Applications: A Survey on Resource-Efficient Distributed Artificial Intelligence

A comprehensive survey of the recent techniques and strategies developed to overcome resource challenges in pervasive AI systems is presented, along with an overview of pervasive computing, its architecture, and its intersection with artificial intelligence.

References

Showing 1-10 of 51 references

A game-theoretic analysis of label flipping attacks on distributed support vector machines

  • Rui Zhang, Quanyan Zhu
  • Computer Science
    2017 51st Annual Conference on Information Sciences and Systems (CISS)
  • 2017
This work establishes a game-theoretic framework to capture the conflicting goals of a learner who uses distributed support vector machines and an attacker capable of flipping training labels, and develops a fully distributed, iterative algorithm that captures the real-time reactions of the learner at each node to adversarial behaviors.
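To make the attack surface concrete (this is the attack studied, not the cited paper's distributed game-theoretic defense), the hypothetical sketch below flips a fraction of training labels and measures the accuracy drop of an ordinary, non-distributed SVM.

```python
# Hypothetical illustration: effect of label flipping on a plain (non-distributed) SVM.
# This is NOT the cited paper's algorithm, only the attack it models.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

for flip_rate in [0.0, 0.1, 0.2, 0.3]:
    y_poisoned = y_tr.copy()
    n_flip = int(flip_rate * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]          # flip 0 <-> 1 labels
    clf = LinearSVC(max_iter=5000).fit(X_tr, y_poisoned)
    print(f"flip rate {flip_rate:.1f}: test accuracy {clf.score(X_te, y_te):.3f}")
```
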

A Game-Theoretic Approach to Design Secure and Resilient Distributed Support Vector Machines

A game-theoretic framework is established to capture the conflicting interests between an adversary and a set of distributed data processing units, and the convergence of the distributed algorithm is proven without assumptions on the training data or network topologies.

Secure and resilient distributed machine learning under adversarial environments

  • Rui Zhang, Quanyan Zhu
  • Computer Science
    2015 18th International Conference on Information Fusion (Fusion)
  • 2015
A game-theoretic framework is established to capture the conflicting interests between the adversary and a set of distributed data processing units; it allows predicting the outcome of learning algorithms in adversarial environments and enhancing the resilience of machine learning through dynamic distributed learning algorithms.

Student research highlight: Secure and resilient distributed machine learning under adversarial environments

This work states that, under the assumption that training and testing samples come from the same natural distribution, an attacker who can generate or modify training data can cause misclassification or misestimation.

Evasion Attacks against Machine Learning at Test Time

This work presents a simple but effective gradient-based approach that can be exploited to systematically assess the security of several widely used classification algorithms against evasion attacks.
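As a loose illustration of the gradient-descent evasion idea, the sketch below perturbs a test point against a linear SVM, where the gradient of the decision function is simply the weight vector. The cited paper additionally handles nonlinear kernels and a density (mimicry) term, which are omitted here.

```python
# Minimal sketch of gradient-based evasion against a linear SVM.
# Assumptions: binary classes, step-size/budget chosen arbitrarily, no mimicry
# term from the cited paper -- this only illustrates the gradient-descent idea.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=400, n_features=5, random_state=1)
clf = LinearSVC(max_iter=5000).fit(X, y)
w, b = clf.coef_.ravel(), clf.intercept_[0]

x = X[clf.decision_function(X) > 0][0].copy()   # a point currently classified positive
step, budget = 0.05, 2.0
x_adv, moved = x.copy(), 0.0
while np.dot(w, x_adv) + b > 0 and moved < budget:
    x_adv -= step * w / np.linalg.norm(w)       # descend g(x) = w.x + b toward the boundary
    moved += step
print("original score:", np.dot(w, x) + b, " evaded score:", np.dot(w, x_adv) + b)
```
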

A Game Theoretical Model for Adversarial Learning

  • Wei Liu, S. Chawla
  • Computer Science
    2009 IEEE International Conference on Data Mining Workshops
  • 2009
This paper models the interaction between the adversary and the data miner as a two-person sequential noncooperative Stackelberg game and analyzes the outcomes when there is a natural leader and a follower.
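The leader-follower structure described above is usually written as a bilevel program. The sketch below uses assumed notation (adversary payoff J_A, data-miner loss J_L, data transformation T), not the cited paper's exact payoff functions: the adversary (leader) chooses T anticipating the data miner's (follower's) best response w*(T).

```latex
% Generic Stackelberg (bilevel) sketch with assumed notation, not the cited
% paper's exact formulation.
\begin{align*}
\max_{T} \;\; & J_A\big(T, \mathbf{w}^{*}(T)\big), \\
\text{where} \quad \mathbf{w}^{*}(T) \in\; & \arg\min_{\mathbf{w}} \; J_L\big(T, \mathbf{w}\big).
\end{align*}
```
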

Secure decentralized data transfer against node capture attacks for wireless sensor networks

This paper proposes a new data distribution method, based on a secret sharing scheme, that is resilient against node capture attacks, and compares it with TinySec, a major security architecture for wireless sensor networks.
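To make the secret-sharing idea concrete, here is a minimal (k, n) Shamir scheme over a prime field. This is a generic textbook construction, not the specific distribution protocol or parameters evaluated in the cited paper.

```python
# Minimal (k, n) Shamir secret sharing over a prime field.
# Generic textbook construction; the cited paper's protocol is not reproduced.
import random

PRIME = 2**61 - 1  # a Mersenne prime large enough for small integer secrets

def split(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = split(secret=123456789, k=3, n=5)
print(reconstruct(shares[:3]))   # any 3 of the 5 shares suffice -> 123456789
```
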

The security of machine learning

A taxonomy identifying and analyzing classes of attacks against machine learning systems is presented, showing how these classes influence the costs for the attacker and defender, and a formal structure defining their interaction is given.

The Sybil attack in sensor networks: analysis & defenses

It is demonstrated that the Sybil attack can be exceedingly detrimental to many important functions of the sensor network, such as routing, resource allocation, and misbehavior detection.

Support Vector Machines Under Adversarial Label Noise

This paper assumes that an adversary who controls some of the training data aims to subvert the SVM learning process, and proposes a strategy to improve the robustness of SVMs to training data manipulation based on a simple kernel matrix correction.
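The exact kernel correction proposed in the cited paper is not reproduced in this summary. As a clearly generic illustration of the "modify the kernel matrix before training" idea, the sketch below adds a constant to the kernel diagonal (a standard regularization heuristic, equivalent to extra penalization of the dual variables), which is not the paper's specific method.

```python
# Generic illustration of training an SVM on a corrected (diagonally loaded) kernel.
# Standard regularization heuristic, NOT the specific correction of the cited paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

X, y = make_classification(n_samples=300, n_features=8, random_state=2)
K = rbf_kernel(X, X, gamma=0.1)

mu = 0.1                                  # assumed noise-level parameter
K_corrected = K + mu * np.eye(len(X))     # diagonal loading of the training kernel

clf = SVC(kernel="precomputed").fit(K_corrected, y)
# Predictions use the original (unloaded) kernel between test and training points.
print("training accuracy with corrected kernel:", clf.score(K, y))
```
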
...