A Game-Theoretic Analysis of Adversarial Classification

@article{Dritsoula2016AGA,
  title={A Game-Theoretic Analysis of Adversarial Classification},
  author={Lemonia Dritsoula and Patrick Loiseau and John Musacchio},
  journal={IEEE Transactions on Information Forensics and Security},
  year={2016},
  volume={12},
  pages={3094-3109}
}
Attack detection is usually approached as a classification problem. However, standard classification tools often perform poorly, because an adaptive attacker can shape his attacks in response to the algorithm. This has led to recent interest in developing methods for adversarial classification, but to the best of our knowledge, there have been very few prior studies that take into account the attacker's tradeoff between adapting to the classifier being used against him and his desire to…
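
To convey the flavor of the attacker-versus-classifier game, the following sketch computes a defender's optimal randomized strategy by linear programming for a simplified, finite, zero-sum version of the game: the defender mixes over detection thresholds ("flag if the observed attack intensity is at least t"), and the attacker mixes over attack intensities. The action sets, cost constant, and payoffs below are assumed placeholders for illustration only; they are not the paper's (nonzero-sum) payoff model or equilibrium construction.

import numpy as np
from scipy.optimize import linprog

# Illustrative zero-sum classification game (placeholder payoffs, not the
# paper's model). Rows: defender thresholds; columns: attacker intensities.
N = 10                                   # attacker picks an intensity in 0..N
thresholds = np.arange(N + 2)            # defender flags when intensity >= t
c_fa = 0.4                               # assumed per-unit false-alarm cost
L = np.zeros((len(thresholds), N + 1))   # defender's expected-loss matrix
for i, t in enumerate(thresholds):
    for v in range(N + 1):
        undetected_damage = v if v < t else 0.0
        false_alarm = c_fa * max(0, N - t)   # lower thresholds cause more false alarms
        L[i, v] = undetected_damage + false_alarm

# Standard LP for the zero-sum game value: minimize z subject to
# (p^T L)_j <= z for every attacker action j, with p a distribution over rows.
m, n = L.shape
c = np.r_[np.zeros(m), 1.0]                        # objective: minimize z
A_ub = np.c_[L.T, -np.ones(n)]                     # L^T p - z <= 0
b_ub = np.zeros(n)
A_eq = np.r_[np.ones(m), 0.0].reshape(1, -1)       # probabilities sum to 1
b_eq = [1.0]
bounds = [(0, None)] * m + [(None, None)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
p_star, game_value = res.x[:m], res.x[m]
print("defender's mixed strategy over thresholds:", np.round(p_star, 3))
print("value of the simplified zero-sum game:", round(game_value, 3))

This is only meant to illustrate mixed-strategy reasoning over classifiers; the paper's own model and its equilibrium characterization differ.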


When Should You Defend Your Classifier - A Game-theoretical Analysis of Countermeasures against Adversarial Examples

The advanced adversarial classification game is proposed, which incorporates all relevant parameters of an adversary and a defender in adversarial classification, and concludes that in practical settings the most influential factor might be the maximum number of adversarial examples.

Scalable Optimal Classifiers for Adversarial Settings under Uncertainty

This work proposes a Bayesian game framework where the defender chooses a classifier with no a priori restriction on the set of possible classifiers, and shows that Bayesian Nash equilibria can be characterized completely via functional threshold classifiers with a small number of parameters.

A Game Theoretic Perspective on Adversarial Machine Learning and Related Cybersecurity Applications

In the zero-sum game, it is demonstrated that an adversarial SVM model built upon the minimax strategy is much more resilient to adversarial attacks than standard SVM and one-class SVM models, and it is shown that optimal learning strategies derived to counter overly pessimistic attack models can produce unsatisfactory results when the real attacks are much weaker.

A Game Theoretical Error-Correction Framework for Secure Traffic-Sign Classification

A game-theoretical error-correction framework is introduced to design classification algorithms that are reliable even in adversarial environments, with a specific focus on traffic-sign classification; reliable and timely classification is achieved by physically redesigning the input space to significantly lower dimensions.

Optimal Defense Strategy against Evasion Attacks

This paper presents the CSP's optimal strategy for effective and safe operation, in which the CSP decides how many users the cloud service will serve and whether enhanced countermeasures will be conducted for discovering possible evasion attacks, and proposes a two-stage Stackelberg game.
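
As a hedged sketch of how such a two-stage Stackelberg interaction is solved by backward induction, the toy example below lets a leader commit to a capacity level and a detection choice, with the attacker best-responding to that commitment. All action sets, costs, and payoffs are made-up assumptions, not the cited paper's model.

import itertools

# Toy two-stage Stackelberg game solved by backward induction. All payoffs,
# costs, and action sets are illustrative assumptions, not the cited model.
leader_actions = list(itertools.product([10, 20, 30], [False, True]))  # (capacity, enhanced detection)
follower_actions = ["attack", "stay_out"]

def leader_utility(capacity, enhanced, attack):
    revenue = 1.0 * capacity
    defense_cost = 5.0 if enhanced else 0.0
    damage = (0.5 if enhanced else 2.0) * capacity if attack == "attack" else 0.0
    return revenue - defense_cost - damage

def follower_utility(capacity, enhanced, attack):
    if attack == "stay_out":
        return 0.0
    gain = (0.1 if enhanced else 0.8) * capacity
    return gain - 4.0                       # fixed cost of mounting the attack

best = None
for capacity, enhanced in leader_actions:
    # Backward induction: the follower best-responds to the leader's commitment.
    br = max(follower_actions, key=lambda a: follower_utility(capacity, enhanced, a))
    u = leader_utility(capacity, enhanced, br)
    if best is None or u > best[0]:
        best = (u, capacity, enhanced, br)
print("leader's Stackelberg choice (utility, capacity, enhanced, attacker response):", best)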

Detecting Adversarial Examples - a Lesson from Multimedia Security

It is concluded that adversarial examples for image classification possibly do not withstand detection methods from steganalysis, and future work should explore the effectiveness of known techniques from multimedia security in other adversarial settings.

A System-Driven Taxonomy of Attacks and Defenses in Adversarial Machine Learning

A fine-grained system-driven taxonomy is proposed to specify ML applications and adversarial system models in an unambiguous manner such that independent researchers can replicate experiments and escalate the arms race to develop more evolved and robust ML applications.

Detecting Adversarial Examples - A Lesson from Multimedia Forensics

It is concluded that adversarial examples for image classification possibly do not withstand detection methods from steganalysis, and future work should explore the effectiveness of known techniques from multimedia forensics in other adversarial settings.

A Game Theoretical Framework for Inter-process Adversarial Intervention Detection

This paper proposes defense mechanisms that anticipate the reaction of advanced evaders and seek to maximize the complexity of undetectable attacks at the expense of an additional false-alarm rate at the system level.

References

SHOWING 1-10 OF 58 REFERENCES

Game Theoretic Optimization of Detecting Malicious Behavior

An approach is developed that fills the gap between practical requirements on adversarial classifiers and the properties of present methods for game-theoretic optimization of detecting malicious behavior, enabling the false-alarm rate to be restricted, which satisfies a crucial requirement in the security domain.

Optimal randomized classification in adversarial settings

This work significantly generalizes previous results on adversarial classifier reverse engineering (ACRE), showing that if a classifier can be efficiently learned, it can subsequently be efficiently reverse engineered with arbitrary precision.

A game-theoretical approach for finding optimal strategies in an intruder classification game

A game in which a strategic defender classifies an intruder as spy or spammer, based on the number of file server and mail server attacks observed during a fixed window, is considered, and a characterization of the Nash equilibria in mixed strategies is given.

Adversarial classification

This paper views classification as a game between the classifier and the adversary, and produces a classifier that is optimal given the adversary's optimal strategy; experiments show that this approach can greatly outperform a classifier learned in the standard way.
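
As a generic illustration of the adversary's side of such a game (not the specific cost model of that paper), the sketch below computes a greedy evasion best response against a known linear classifier, under assumed per-feature modification costs, per-feature change limits, and a total budget.

import numpy as np

# Minimal sketch of an evading attacker against a known linear classifier
# (score = w.x + b, flagged when score >= 0). All costs and limits are
# assumed parameters for illustration; costs must be positive.
def best_response_evasion(x, w, b, cost, max_change, budget):
    """Greedily modify features (cheapest score reduction first) until the
    instance is classified benign, or return None if the budget is exceeded."""
    x = x.astype(float).copy()
    spent = 0.0
    # Rank features by score reduction per unit of modification cost.
    order = sorted(range(len(w)), key=lambda j: -abs(w[j]) / cost[j])
    for j in order:
        score = float(w @ x + b)
        if score < 0:
            return x                        # already evades the classifier
        if w[j] == 0:
            continue
        # Move feature j against the sign of w[j], bounded by max_change[j].
        needed = score / abs(w[j])          # change that would zero the score
        delta = min(needed + 1e-9, max_change[j])
        x[j] -= np.sign(w[j]) * delta
        spent += cost[j] * delta
        if spent > budget:
            return None                     # evasion too costly; give up
    return x if float(w @ x + b) < 0 else None

# Example: two features; the attacker first lowers the more heavily weighted one.
w, b = np.array([2.0, 1.0]), -1.0
x = np.array([1.0, 1.0])                    # score = 2.0 -> flagged
print(best_response_evasion(x, w, b, cost=np.array([1.0, 1.0]),
                            max_change=np.array([0.8, 2.0]), budget=3.0))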

Scalable Optimization of Randomized Operational Decisions in Adversarial Classification Settings

This work introduces a conceptual separation between learning, used to infer attacker preferences, and operational decisions, which account for adversarial evasion, enforce operational constraints, and naturally admit randomization.

Adversarial support vector machine learning

It is demonstrated that it is possible to develop a much more resilient SVM learning model while making loose assumptions on the data corruption models, and that optimal solutions may be overly pessimistic when the actual attacks are much weaker than expected.

Computing the Nash Equilibria of Intruder Classification Games

This work investigates the problem of classifying an intruder of two different types (spy or spammer) and develops parameterized families of payoff functions for both players and analyzes the Nash equilibria of the noncooperative nonzero-sum game.
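
For a flavor of how a mixed-strategy Nash equilibrium of such a 2x2 nonzero-sum game can be computed from indifference conditions, here is a small self-contained sketch with purely illustrative payoff matrices (not the parameterized payoff families developed in that paper). It assumes an interior, fully mixed equilibrium exists.

import numpy as np

# Illustrative 2x2 nonzero-sum game: rows = defender's classification
# (spy / spammer), columns = intruder's behavior. Payoffs are made up.
A = np.array([[ 3.0, -1.0],      # defender's payoffs
              [-2.0,  2.0]])
B = np.array([[-3.0,  1.0],      # intruder's payoffs
              [ 2.0, -2.0]])

# Interior mixed equilibrium via indifference: the defender's probability p on
# row 0 makes the intruder indifferent between columns, and the intruder's
# probability q on column 0 makes the defender indifferent between rows.
p = (B[1, 1] - B[1, 0]) / (B[0, 0] - B[0, 1] - B[1, 0] + B[1, 1])
q = (A[1, 1] - A[0, 1]) / (A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1])
print("defender mixes rows with", (p, 1 - p))        # (0.5, 0.5)
print("intruder mixes columns with", (q, 1 - q))     # (0.375, 0.625)

# Sanity check: each player is indifferent given the other's mix.
assert np.isclose(q * A[0, 0] + (1 - q) * A[0, 1], q * A[1, 0] + (1 - q) * A[1, 1])
assert np.isclose(p * B[0, 0] + (1 - p) * B[1, 0], p * B[0, 1] + (1 - p) * B[1, 1])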

Adversarial machine learning

A taxonomy for classifying attacks against online machine learning algorithms and the limits of an adversary's knowledge about the algorithm, feature space, training, and input data are given.

Security and Game Theory - Algorithms, Deployed Systems, Lessons Learned

This book is claimed to be the first and only study of long-term deployed applications of game theory for security at key organizations such as the Los Angeles International Airport police and the U.S. Federal Air Marshals Service.

A Game Theoretical Framework on Intrusion Detection in Heterogeneous Networks

Lin Chen and J. Leneutre, IEEE Transactions on Information Forensics and Security, 2009
This paper addresses the intrusion detection problem in heterogeneous networks consisting of nodes with different noncorrelated security assets by formulating the network intrusion detection as a noncooperative game and performing an in-depth analysis on the Nash equilibrium.
...