Corpus ID: 16593492

Scalable Optimization of Randomized Operational Decisions in Adversarial Classification Settings

@inproceedings{Li2015ScalableOO,
  title={Scalable Optimization of Randomized Operational Decisions in Adversarial Classification Settings},
  author={Bo Li and Yevgeniy Vorobeychik},
  booktitle={AISTATS},
  year={2015}
}
When learning, such as classification, is used in adversarial settings, such as intrusion detection, intelligent adversaries will attempt to evade the resulting policies. […]

Key Method: Our approach gives rise to an intractably large linear program. To overcome scalability limitations, we introduce a novel method for estimating a compact parity basis representation for the operational decision function.
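As a rough illustration of the kind of program involved, consider a minimal max-min linear program in which the defender commits to randomized inspection probabilities under an operational budget while the adversary picks the evasion least likely to be detected. This is a toy sketch, not the paper's formulation: the costs, budget, and payoff structure below are assumptions, and the paper's actual program ranges over exponentially many feature vectors, which is exactly why the compact parity basis representation is needed.

  # Toy max-min LP for randomized operational decisions (illustrative only;
  # costs, budget, and payoff structure are assumptions, not the paper's model).
  import numpy as np
  from scipy.optimize import linprog

  costs = np.array([1.0, 2.0, 4.0])  # inspection cost per candidate feature vector (assumed)
  budget = 2.5                       # operational budget (assumed)
  n = len(costs)

  # Variables: p_1..p_n (inspection probabilities) and t (worst-case detection
  # probability). linprog minimizes, so maximize t by minimizing -t.
  c = np.zeros(n + 1)
  c[-1] = -1.0

  # t <= p_j for each evasion the adversary might pick:  -p_j + t <= 0.
  A_ub = np.hstack([-np.eye(n), np.ones((n, 1))])
  b_ub = np.zeros(n)

  # Budget constraint: sum_j costs_j * p_j <= budget.
  A_ub = np.vstack([A_ub, np.append(costs, 0.0)])
  b_ub = np.append(b_ub, budget)

  res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, 1.0)] * (n + 1))
  print("inspection probabilities:", res.x[:n])
  print("worst-case detection probability:", res.x[-1])

With three candidate vectors this solves instantly; the difficulty the paper addresses is that the adversary's choice set is exponential in the number of binary features, making the analogous full program intractably large.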
Using Machine Learning in Adversarial Environments.
TLDR
This work proposes to embed machine learning within a game theoretic framework that performs adversarial modeling, develops methods for optimizing operational response based on ML, and integrates the resulting optimization codebase into the existing ML infrastructure developed by the Hybrid LDRD.
Evasion-Robust Classification on Binary Domains
TLDR
This approach is the first to compute an optimal solution to adversarial loss minimization for two general classes of adversarial evasion models in the context of binary feature spaces and is robust to misspecifications of the adversarial model.
A General Retraining Framework for Scalable Adversarial Classification
TLDR
It is shown that, under natural conditions, the retraining framework minimizes an upper bound on the optimal adversarial risk, and it is shown how to extend this result to account for approximations of evasion attacks.
Scalable Optimal Classifiers for Adversarial Settings under Uncertainty
TLDR
This work proposes a Bayesian game framework where the defender chooses a classifier with no a priori restriction on the set of possible classifiers, and shows that Bayesian Nash equilibria can be characterized completely via functional threshold classifiers with a small number of parameters.
Game Theoretic Optimization of Detecting Malicious Behavior
TLDR
An approach is developed that fills the gap between the practical requirements on adversarial classifiers and the properties of existing methods for game-theoretic optimization of malicious-behavior detection; it allows the false alarm rate to be constrained, a crucial requirement in the security domain.
A Game-Theoretic Analysis of Adversarial Classification
TLDR
An efficient algorithm to compute all Nash equilibria and a compact characterization of the possible forms of a Nash equilibrium that reveals intuitive messages on how to perform classification in the presence of an attacker are provided.
Characterizing Attacks on Deep Reinforcement Learning
TLDR
The first targeted attacks based on the action space and on environment dynamics, which exploit temporal consistency information among frames, are proposed, along with a sampling strategy for better gradient estimation in the black-box setting.
Generating Adversarial Examples with Adversarial Networks
TLDR
Adversarial examples generated by AdvGAN achieve a high attack success rate against different target models under state-of-the-art defenses compared to other attacks, and placed first, with 92.76% accuracy, on a public MNIST black-box attack challenge.
Adversarial Regression with Multiple Learners
TLDR
This work approximates the result of adversarial linear regression with multiple learners by exhibiting an upper bound on learner loss functions, and shows that the resulting game has a unique symmetric equilibrium.
Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality
TLDR
The analysis of the LID characteristic for adversarial regions not only motivates new directions of effective adversarial defense, but also opens up more challenges for developing new attacks to better understand the vulnerabilities of DNNs.
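For context, the LID characteristic referenced above is commonly estimated with the maximum-likelihood estimator of Amsaleg et al. (2015), LID(x) ≈ −(1/k · Σᵢ log(rᵢ/rₖ))⁻¹, where rᵢ is the distance from x to its i-th nearest neighbor. A minimal sketch, assuming Euclidean distances; the dataset and k below are illustrative:

  # MLE estimator of local intrinsic dimensionality (LID); the dataset,
  # k, and Euclidean metric are illustrative assumptions.
  import numpy as np

  def lid_mle(x, data, k=20):
      """Estimate the LID of point x using its k nearest neighbors in data."""
      dists = np.sort(np.linalg.norm(data - x, axis=1))[:k]
      # LID = -( (1/k) * sum_i log(r_i / r_k) )^{-1}
      return -1.0 / np.mean(np.log(dists / dists[-1] + 1e-12))

  rng = np.random.default_rng(0)
  data = rng.normal(size=(2000, 3))           # points filling a 3-D space
  print(lid_mle(data[0], data[1:], k=100))    # estimate should be near 3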
...

References

Showing 1–10 of 21 references
Adversarial machine learning
TLDR
A taxonomy for classifying attacks against online machine learning algorithms and the limits of an adversary's knowledge about the algorithm, feature space, training, and input data are given.
Predictive defense against evolving adversaries
  • R. Colbaugh, K. Glass
  • Computer Science
    2012 IEEE International Conference on Intelligence and Security Informatics
  • 2012
TLDR
This paper leverages the coevolutionary relationship between attackers and defenders to derive two new approaches to predictive defense, in which future attack techniques are anticipated and these insights are incorporated into defense designs.
Query Strategies for Evading Convex-Inducing Classifiers
TLDR
This work generalizes the theory of Lowd and Meek (2005) to the family of convex-inducing classifiers, which partition their feature space into two sets, one of which is convex, and demonstrates that near-optimal evasion can be accomplished for this family without reverse engineering the classifier's decision boundary.
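The flavor of this query-based evasion can be conveyed with a toy bisection attack against a black-box oracle. This sketch is an assumption-laden illustration, not the paper's algorithm (which handles general convex-inducing classifiers and minimizes an attack cost): the linear oracle and all names are hypothetical.

  # Toy query-based evasion: bisect between a known-benign point and the
  # attacker's target to land just inside the benign region.
  import numpy as np

  def classify(x):  # stand-in black-box oracle (a linear rule, assumed)
      return np.dot([1.0, 1.0], x) - 1.0 > 0          # True = flagged malicious

  def evade(x_target, x_benign, tol=1e-6):
      """Return a point near x_target that the oracle still labels benign."""
      x_target, x_benign = np.asarray(x_target), np.asarray(x_benign)
      lo, hi = 0.0, 1.0        # fraction of the way from x_benign to x_target
      while hi - lo > tol:
          mid = (lo + hi) / 2
          if classify((1 - mid) * x_benign + mid * x_target):
              hi = mid         # crossed into the malicious region; back off
          else:
              lo = mid
      return (1 - lo) * x_benign + lo * x_target

  x_adv = evade(x_target=[2.0, 2.0], x_benign=[0.0, 0.0])
  print(x_adv, classify(x_adv))  # near the decision boundary, labeled benign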
Convex Adversarial Collective Classification
TLDR
A novel method is presented for robustly performing collective classification in the presence of a malicious adversary that can modify up to a fixed number of binary-valued attributes; it consistently outperforms both non-adversarial and non-relational baselines.
Adversarial Pattern Classification Using Multiple Classifiers and Randomisation
TLDR
This work considers a strategy of hiding information about the classifier from the adversary by introducing randomness into the decision function, building on an analytical framework for adversarial classification problems recently proposed by other authors.
Adversarial learning
TLDR
This paper introduces the adversarial classifier reverse engineering (ACRE) learning problem, the task of learning sufficient information about a classifier to construct adversarial attacks, and presents efficient algorithms for reverse engineering linear classifiers with either continuous or Boolean features.
Adversarial classification
TLDR
This paper views classification as a game between the classifier and the adversary, and produces a classifier that is optimal given the adversary's optimal strategy; experiments show that this approach can greatly outperform a classifier learned in the standard way.
Stackelberg games for adversarial prediction problems
TLDR
This work models the interaction between the learner and the data generator as a Stackelberg competition in which the learner plays the role of the leader and the data generator may react to the leader's move, and shows that the Stackelberg prediction game generalizes existing prediction models.
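In generic bilevel form (illustrative notation; the paper's exact model, loss, and adversary utility differ in detail), a Stackelberg prediction game can be written as:

  \begin{align*}
    \min_{w}\;    & \sum_{i} \ell\big(f_w(x_i^{*}),\, y_i\big) + \lambda\,\lVert w \rVert^2 \\
    \text{s.t.}\; & x_i^{*} \in \arg\max_{x'} \, u_{\mathrm{adv}}\big(f_w(x'),\, x',\, x_i\big) \quad \text{for all } i,
  \end{align*}

where the learner (leader) commits to parameters w and the data generator (follower) best-responds by transforming each instance x_i; here ℓ, u_adv, and λ are generic placeholders.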
Playing games for security: an efficient exact algorithm for solving Bayesian Stackelberg games
TLDR
This paper considers Bayesian Stackelberg games, in which the leader is uncertain about the types of adversary it may face, and presents an efficient exact algorithm for finding the optimal strategy for the leader to commit to in these games.
Game-theoretic resource allocation for malicious packet detection in computer networks
TLDR
Grande, a novel polynomial-time algorithm, is proposed; it uses an approximated utility function to circumvent the limited scalability caused by the attacker's large strategy space and the non-linearity of the underlying mathematical program.
...