Security Evaluation of Support Vector Machines in Adversarial Environments

@article{Biggio2014SecurityEO,
  title={Security Evaluation of Support Vector Machines in Adversarial Environments},
  author={Battista Biggio and Igino Corona and Blaine Nelson and Benjamin I. P. Rubinstein and Davide Maiorca and Giorgio Fumera and Giorgio Giacinto and Fabio Roli},
  journal={ArXiv},
  year={2014},
  volume={abs/1401.7727}
}
Support vector machines (SVMs) are among the most popular classification techniques adopted in security applications such as malware detection, intrusion detection, and spam filtering. However, if SVMs are to be incorporated into real-world security systems, they must be able to cope with attack patterns that can mislead the learning algorithm (poisoning), evade detection at test time (evasion), or leak information about their internal parameters (privacy breaches). The main contributions of this chapter…
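
As a concrete illustration of the evasion setting mentioned in the abstract, the sketch below perturbs a sample against a trained linear SVM by descending its decision function until the sample is scored as benign. This is only a minimal illustration under stated assumptions (synthetic data, scikit-learn's SVC, an arbitrary step size and iteration budget), not the attack formulation used in the paper.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Synthetic two-class data standing in for a security task (class 1 = "malicious").
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
clf = SVC(kernel="linear", C=1.0).fit(X, y)

w = clf.coef_.ravel()  # for a linear SVM the decision function is f(x) = w.x + b

# Pick a sample the classifier currently flags as malicious (f(x) > 0).
scores = clf.decision_function(X)
x0 = X[scores > 0][0].copy()

# Evasion sketch: move against the gradient of f until the score turns negative.
# A real attacker would also constrain the perturbation so the sample keeps its
# malicious functionality; that constraint is omitted here for brevity.
x_adv, step = x0.copy(), 0.05
for _ in range(500):
    if clf.decision_function([x_adv])[0] < 0:   # now classified as benign
        break
    x_adv -= step * w / np.linalg.norm(w)       # gradient of w.x + b w.r.t. x is w

print("score before attack:", clf.decision_function([x0])[0])
print("score after attack: ", clf.decision_function([x_adv])[0])
print("L2 perturbation size:", np.linalg.norm(x_adv - x0))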