Machine learning in adversarial environments

@article{Laskov2010MachineLI,
  title={Machine learning in adversarial environments},
  author={P. Laskov and R. Lippmann},
  journal={Machine Learning},
  year={2010},
  volume={81},
  pages={115-119}
}
Whenever machine learning is used to prevent illegal or unsanctioned activity and there is an economic incentive, adversaries will attempt to circumvent the protection provided. Constraints on how adversaries can manipulate training and test data for classifiers used to detect suspicious behavior make problems in this area tractable and interesting. This special issue highlights papers that span many disciplines including email spam detection, computer intrusion detection, and detection of web…
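The abstract's central point, that adversaries will manipulate the training and test data seen by a detector, can be illustrated with a small experiment. The sketch below is not taken from the special issue; it assumes scikit-learn and a synthetic dataset standing in for "suspicious behavior" features, and the variable names (poison_frac, train_and_score, and so on) are hypothetical. It shows how a simple label-flip poisoning attack on part of the training set can degrade a logistic-regression detector.

# Illustrative sketch only (not from the paper): a label-flip poisoning attack
# against a simple detector. Assumes scikit-learn; the dataset, variable names,
# and poisoning budget are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic two-class data standing in for benign (0) vs. suspicious (1) events.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

def train_and_score(X_tr, y_tr):
    """Train a detector and report its accuracy on clean test data."""
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return accuracy_score(y_test, clf.predict(X_test))

print("clean training data:   ", train_and_score(X_train, y_train))

# Poisoning: the adversary controls a fraction of the training labels, e.g.
# malicious traffic it submits that ends up labelled as benign.
poison_frac = 0.2
flip_idx = rng.choice(len(y_train), size=int(poison_frac * len(y_train)),
                      replace=False)
y_poisoned = y_train.copy()
y_poisoned[flip_idx] = 1 - y_poisoned[flip_idx]

print("poisoned training data:", train_and_score(X_train, y_poisoned))

Accuracy on the clean test set usually drops as poison_frac grows; the constraints the editors mention correspond to bounding how much of the training data, or which features at test time, the adversary can actually touch.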
101 Citations

  • Security Evaluation of Support Vector Machines in Adversarial Environments (79 citations)
  • Adding Robustness to Support Vector Machines Against Adversarial Reverse Engineering (28 citations)
  • Active learning intrusion detection using k-means clustering selection (12 citations)
  • Bagging Classifiers for Fighting Poisoning Attacks in Adversarial Classification Tasks (83 citations)
  • Feature Cross-Substitution in Adversarial Classification (88 citations)
  • Vulnerability Detection and Analysis in Adversarial Deep Learning (13 citations)
  ...
