Machine learning in adversarial environments
@article{Laskov2010MachineLI,
  title   = {Machine learning in adversarial environments},
  author  = {P. Laskov and R. Lippmann},
  journal = {Machine Learning},
  year    = {2010},
  volume  = {81},
  pages   = {115--119}
}
Whenever machine learning is used to prevent illegal or unsanctioned activity and there is an economic incentive, adversaries will attempt to circumvent the protection provided. Constraints on how adversaries can manipulate training and test data for classifiers used to detect suspicious behavior make problems in this area tractable and interesting. This special issue highlights papers that span many disciplines including email spam detection, computer intrusion detection, and detection of web…
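The circumvention the abstract describes can be made concrete with a minimal sketch (not from the paper): a test-time evasion attack in which an adversary nudges a feature vector, say a spam email's features, along the negative weight direction of a linear classifier until the decision flips. The function name `evade`, the weights, and the step size are all illustrative assumptions.

```python
# Minimal evasion sketch against a linear classifier w.x + b > 0 == "spam".
# The adversary moves x along -w (the steepest descent direction of the
# score) until the classifier labels the point benign.

def evade(x, w, b, step=0.1, max_iters=1000):
    """Shift x along -w until the linear score w.x + b drops to 0 or below."""
    norm = sum(wi * wi for wi in w) ** 0.5
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    iters = 0
    while score > 0 and iters < max_iters:
        x = [xi - step * wi / norm for xi, wi in zip(x, w)]
        score = sum(wi * xi for wi, xi in zip(w, x)) + b
        iters += 1
    return x, score

# A point the classifier flags as spam (score = 2 + 3 - 1 = 4 > 0):
w, b = [1.0, 2.0], -1.0
x_spam = [2.0, 1.5]
x_adv, final_score = evade(x_spam, w, b)
print(final_score <= 0)  # evasion succeeded: the point is now labeled benign
```

The constraint discussed in the abstract enters through the step budget: bounding `step * max_iters` limits how far the adversary may move the point, which is what keeps such problems tractable to analyze.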
101 Citations
- Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection. Computer Science, Mathematics. ArXiv, 2018. Cited 47 times.
- Security Evaluation of Support Vector Machines in Adversarial Environments. Computer Science. ArXiv, 2014. Cited 79 times.
- Adding Robustness to Support Vector Machines Against Adversarial Reverse Engineering. Computer Science. CIKM, 2014. Cited 28 times.
- Active learning intrusion detection using k-means clustering selection. Engineering. SoutheastCon 2017. Cited 12 times.
- Bagging Classifiers for Fighting Poisoning Attacks in Adversarial Classification Tasks. Computer Science. MCS, 2011. Cited 83 times.
- Vulnerability Detection and Analysis in Adversarial Deep Learning. Computer Science. Guide to Vulnerability Analysis for Computer Networks and Systems, 2018. Cited 13 times.
References
Showing 1-10 of 46 references
- Evading network anomaly detection systems: formal reasoning and practical techniques. Computer Science. CCS '06. Cited 167 times.
- Misleading worm signature generators using deliberate noise injection. Computer Science. 2006 IEEE Symposium on Security and Privacy (S&P'06). Cited 183 times.
- Polygraph: automatically generating signatures for polymorphic worms. Computer Science, Biology. 2005 IEEE Symposium on Security and Privacy (S&P'05). Cited 719 times.