Adversarial machine learning

@inproceedings{Huang2011AdversarialML,
  title={Adversarial machine learning},
  author={Ling Huang and Anthony D. Joseph and Blaine Nelson and Benjamin I. P. Rubinstein and J. Doug Tygar},
  booktitle={AISec '11},
  year={2011}
}
In this paper (expanded from an invited talk at AISec 2010), we discuss an emerging field of study: adversarial machine learning, the study of effective machine learning techniques against an adversarial opponent. We: give a taxonomy for classifying attacks against online machine learning algorithms; discuss application-specific factors that limit an adversary's capabilities; introduce two models of an adversary's capabilities; explore the limits of an adversary's…


Security Matters: A Survey on Adversarial Machine Learning
TLDR
This paper serves to give a comprehensive introduction to a range of aspects of the adversarial deep learning topic, including its foundations, typical attacking and defending strategies, and some extended studies.
Rallying Adversarial Techniques against Deep Learning for Network Security
TLDR
It is shown that by modifying on average as little as 1.38 of an observed packet's input features, an adversary can generate malicious inputs that effectively fool a target deep learning-based NIDS.
Adversarial and Secure Machine Learning
TLDR
It is shown that several state-of-the-art learning systems are intrinsically vulnerable to carefully designed adversarial attacks, and countermeasures against adversarial actions are suggested, motivating the construction of more secure and robust learning algorithms.
Adversarial Examples in Modern Machine Learning: A Review
TLDR
An extensive coverage of machine learning models in the visual domain is provided, furnishing the reader with an intuitive understanding of the mechanics of adversarial attack and defense mechanisms and enlarging the community of researchers studying this fundamental set of problems.
A Survey of Game Theoretic Approaches for Adversarial Machine Learning in Cybersecurity Tasks
TLDR
A detailed survey of the state-of-the-art techniques used to make a machine learning algorithm robust against adversarial attacks using the computational framework of game theory is provided.
Adversarial Machine Learning
  • L. Reznik
  • Computer Science
    Intelligent Security Systems
  • 2021
The chapter introduces novel adversarial machine learning attacks and a taxonomy of their cases, in which machine learning is used against AI-based classifiers to make them fail. It investigates a…
Privacy Risks of Securing Machine Learning Models against Adversarial Examples
TLDR
This paper measures the success of membership inference attacks against six state-of-the-art defense methods that mitigate the risk of adversarial examples, and proposes two new inference methods that exploit structural properties of robust models on adversarially perturbed data.
Poisoning Attacks with Generative Adversarial Nets
TLDR
A novel generative model is introduced to craft systematic poisoning attacks against machine learning classifiers by generating adversarial training examples, i.e., samples that look like genuine data points but degrade the classifier's accuracy when used for training.
Adversarial machine learning for cybersecurity and computer vision: Current developments and challenges
  • B. Xi
  • Computer Science
    WIREs Computational Statistics
  • 2020
TLDR
This work provides a comprehensive overview of adversarial machine learning focusing on two application domains, that is, cybersecurity and computer vision, and discusses three main categories of attacks against machine learning techniques—poisoning attacks, evasion attacks, and privacy attacks.
A Survey on Adversarial Machine Learning
TLDR
This survey categorizes different uses of machine learning as a means of attack or defense against security attacks, and also considers the security of machine learning models that are used every day.

References

Showing 1-10 of 71 references
Can machine learning be secure?
TLDR
A taxonomy of different types of attacks on machine learning techniques and systems, a variety of defenses against those attacks, and an analytical model giving a lower bound on the attacker's work function are provided.
Bounding an Attack's Complexity for a Simple Learning Model
TLDR
A naive model is examined for assessing the effectiveness of classifiers against threats posed by adversaries determined to subvert the learner by inserting data designed for this purpose.
Evasion Attacks against Machine Learning at Test Time
TLDR
This work presents a simple but effective gradient-based approach that can be exploited to systematically assess the security of several, widely-used classification algorithms against evasion attacks.
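The gradient-based evasion idea summarized above can be sketched for a linear classifier; the weights, bias, step size, and sample below are illustrative assumptions for a toy detector, not values or code from the paper:

```python
import numpy as np

# Minimal sketch of a gradient-based evasion attack on a linear
# classifier. All numbers below are assumed for illustration.

def evade(x, w, b, step=0.1, max_iter=200):
    """Descend the classifier's score g(x) = w.x + b until the
    sample is classified benign (g(x) < 0)."""
    x = x.astype(float).copy()
    for _ in range(max_iter):
        if w @ x + b < 0:      # crossed the decision boundary
            break
        x -= step * w          # gradient of g with respect to x is w
    return x

w = np.array([1.0, 2.0])       # assumed detector weights
b = -1.0
x_mal = np.array([2.0, 2.0])   # scores 5.0: flagged malicious

x_adv = evade(x_mal, w, b)
print(w @ x_adv + b < 0)       # True: the perturbed sample evades
```

In the paper's setting the same idea applies to nonlinear but differentiable discriminant functions, whose gradients are followed instead of the fixed weight vector used here.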
Adversarial learning
TLDR
This paper introduces the adversarial classifier reverse engineering (ACRE) learning problem, the task of learning sufficient information about a classifier to construct adversarial attacks, and presents efficient algorithms for reverse engineering linear classifiers with either continuous or Boolean features.
Online Anomaly Detection under Adversarial Impact
TLDR
This work analyzes the performance of a particular method— online centroid anomaly detection—in the presence of adversarial noise, addressing three key security-related issues: derivation of an optimal attack, analysis of its efficiency and constraints, and tightness of the theoretical bounds.
Classifier Evasion: Models and Open Problems
TLDR
This position paper posits several open problems and alternative variants of the near-optimal evasion problem, and suggests that solutions to these problems would significantly advance the state of the art in secure machine learning.
The security of machine learning
TLDR
A taxonomy identifying and analyzing attacks against machine learning systems is presented, showing how these classes influence the costs for the attacker and defender, and a formal structure defining their interaction is given.
A framework for quantitative security analysis of machine learning
TLDR
This work applies a framework for quantitative security analysis of machine learning methods to one specific learning scenario, online centroid anomaly detection, and experimentally verifies the tightness of the obtained theoretical bounds.
Adversarial classification
TLDR
This paper views classification as a game between the classifier and the adversary, and produces a classifier that is optimal given the adversary's optimal strategy; experiments show that this approach can greatly outperform a classifier learned in the standard way.
Classifier evaluation and attribute selection against active adversaries
TLDR
A game-theoretic framework is provided in which the equilibrium behavior of adversarial classification applications can be analyzed, along with solutions for finding an equilibrium point.