Adversarial machine learning

  • J. Doug Tygar
  • Published in Security and Artificial Intelligence
  • 1 September 2011
  • Computer Science
In this paper (expanded from an invited talk at AISec 2010), we discuss an emerging field of study: adversarial machine learning---the study of effective machine learning techniques against an adversarial opponent. We: give a taxonomy for classifying attacks against online machine learning algorithms; discuss application-specific factors that limit an adversary's capabilities; introduce two models of an adversary's capabilities; explore the limits of an adversary's… 
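
The taxonomy the abstract refers to classifies attacks along three axes: influence (causative vs. exploratory), security violation (integrity, availability, or privacy), and specificity (targeted vs. indiscriminate). A minimal sketch of that taxonomy as a data structure follows; the class names and the example attack label are illustrative, not taken from the paper:

```python
from dataclasses import dataclass
from enum import Enum

class Influence(Enum):
    CAUSATIVE = "alters the training data"
    EXPLORATORY = "only probes the deployed model"

class Violation(Enum):
    INTEGRITY = "harmful inputs slip through as false negatives"
    AVAILABILITY = "so many errors the system becomes unusable"
    PRIVACY = "confidential information is leaked"

class Specificity(Enum):
    TARGETED = "focused on particular inputs"
    INDISCRIMINATE = "any convenient failure will do"

@dataclass
class Attack:
    name: str
    influence: Influence
    violation: Violation
    specificity: Specificity

# Example: flooding a spam filter's training data with benign-looking
# words so legitimate mail is blocked (a classic causative attack).
spam_poisoning = Attack(
    name="dictionary attack on a spam filter",
    influence=Influence.CAUSATIVE,
    violation=Violation.AVAILABILITY,
    specificity=Specificity.INDISCRIMINATE,
)
print(spam_poisoning.influence.name, spam_poisoning.violation.name)
```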


The Threat of Adversarial Attacks on Machine Learning in Network Security - A Survey

This survey provides a taxonomy of machine learning techniques, styles, and algorithms, and introduces an adversarial risk model and evaluates several existing adversarial attacks against machine learning in network security using the risk model.

Security Matters: A Survey on Adversarial Machine Learning

This paper serves to give a comprehensive introduction to a range of aspects of the adversarial deep learning topic, including its foundations, typical attacking and defending strategies, and some extended studies.

Rallying Adversarial Techniques against Deep Learning for Network Security

It is shown that by modifying on average as little as 1.38 of an observed packet's input features, an adversary can generate malicious inputs that effectively fool a target deep learning-based NIDS.

Adversarial and Secure Machine Learning

It is shown that several state-of-the-art learning systems are intrinsically vulnerable to carefully designed adversarial attacks, and countermeasures against adversarial actions are suggested, motivating discussion of how to construct more secure and robust learning algorithms.

Adversarial Examples in Modern Machine Learning: A Review

An extensive coverage of machine learning models in the visual domain is provided, furnishing the reader with an intuitive understanding of the mechanics of adversarial attack and defense mechanisms and enlarging the community of researchers studying this fundamental set of problems.

Support vector machines under adversarial label contamination

Privacy vs Robustness (against Adversarial Examples) in Machine Learning

It is found that adversarial defense methods, although they increase model robustness against adversarial examples, also make the model more vulnerable to membership inference attacks, indicating a potential conflict between privacy and robustness in machine learning.

Hacking Machine Learning: Towards The Comprehensive Taxonomy of Attacks Against Machine Learning Systems

The aim of this article is to provide a comprehensive review of scientific works in the field of cybersecurity of machine learning and to present an original taxonomy of adversarial attacks against machine learning systems in this context.

Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning

A Survey of Game Theoretic Approaches for Adversarial Machine Learning in Cybersecurity Tasks

A detailed survey of the state-of-the-art techniques used to make a machine learning algorithm robust against adversarial attacks using the computational framework of game theory is provided.

Can machine learning be secure?

A taxonomy of different types of attacks on machine learning techniques and systems, a variety of defenses against those attacks, and an analytical model giving a lower bound on the attacker's work function are provided.

Bounding an Attack's Complexity for a Simple Learning Model

A naive model is examined for assessing the effectiveness of classifiers against threats posed by adversaries determined to subvert the learner by inserting data designed for this purpose.

Evasion Attacks against Machine Learning at Test Time

This work presents a simple but effective gradient-based approach that can be exploited to systematically assess the security of several, widely-used classification algorithms against evasion attacks.
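
A minimal sketch of such a gradient-based evasion attack, assuming a differentiable linear classifier f(x) = w·x + b (the function names, step size, and toy weights below are illustrative, not from the paper):

```python
import numpy as np

def evade(x, w, b, step=0.1, max_iter=100):
    """Gradient-based evasion against a linear classifier f(x) = w.x + b.

    Starting from a malicious sample x with f(x) > 0 (detected),
    repeatedly step against the gradient of the decision function
    until the sample is classified as benign (f(x) <= 0).
    """
    x = x.astype(float).copy()
    for _ in range(max_iter):
        if w @ x + b <= 0:                 # sample now evades detection
            break
        x -= step * w / np.linalg.norm(w)  # gradient of w.x + b w.r.t. x is w
    return x

# Toy detector: flags samples whose feature sum exceeds a threshold.
w = np.array([1.0, 1.0])
b = -1.0
x_malicious = np.array([2.0, 2.0])   # w.x + b = 3 > 0: detected
x_adv = evade(x_malicious, w, b)
# Check: the decision value of the perturbed sample is non-positive.
print(w @ x_adv + b)
```

Real evasion attacks apply the same idea to nonlinear, differentiable classifiers by following the gradient of the learned decision function, often with a distance constraint to keep the perturbed sample close to the original.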

Adversarial learning

This paper introduces the adversarial classifier reverse engineering (ACRE) learning problem, the task of learning sufficient information about a classifier to construct adversarial attacks, and presents efficient algorithms for reverse engineering linear classifiers with either continuous or Boolean features.
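
The core idea, probing a black-box classifier with membership queries to learn how its features influence the decision, can be sketched as follows for Boolean features; the helper name and toy weights are illustrative, not the paper's ACRE algorithms:

```python
import numpy as np

def probe_feature_effects(query, n_features):
    """Probe a black-box classifier with membership queries.

    `query(x)` returns the classifier's label (True = malicious).
    Toggling one Boolean feature at a time from an all-zeros baseline
    reveals which features flip the label -- a first step toward
    reverse engineering a linear classifier from query access alone.
    """
    baseline = np.zeros(n_features)
    base_label = query(baseline)
    effects = []
    for i in range(n_features):
        x = baseline.copy()
        x[i] = 1.0
        effects.append(query(x) != base_label)  # does feature i flip the label?
    return effects

# Hidden linear classifier the attacker can only query, never inspect.
w_secret = np.array([2.0, -1.0, 3.0])
classify = lambda x: bool(w_secret @ x > 1.0)

print(probe_feature_effects(classify, 3))
```

The full ACRE setting goes further, asking for a minimal-cost instance that the classifier labels benign, but single-feature probes like these illustrate the query model involved.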

Secure Learning and Learning for Security: Research in the Intersection

This dissertation contributes new results in the intersection of Machine Learning and Security, relating to both of these complementary research agendas.

Online Anomaly Detection under Adversarial Impact

This work analyzes the performance of a particular method— online centroid anomaly detection—in the presence of adversarial noise, addressing three key security-related issues: derivation of an optimal attack, analysis of its efficiency and constraints, and tightness of the theoretical bounds.
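
A minimal sketch of the attack setting, assuming a detector that accepts a point iff it lies within a fixed radius of the centroid and then updates the centroid online (the function name, learning rate, and toy values are illustrative, not the paper's exact model):

```python
import numpy as np

def poison_centroid(c, target, radius, alpha, steps):
    """Greedy poisoning of an online centroid anomaly detector.

    The detector accepts a point x iff ||x - c|| <= radius, then
    updates c <- (1 - alpha) * c + alpha * x.  The attacker repeatedly
    injects the accepted point closest to its target, dragging the
    centroid toward `target` one bounded step at a time.
    """
    c = c.astype(float).copy()
    for _ in range(steps):
        d = target - c
        dist = np.linalg.norm(d)
        if dist <= radius:
            x = target                    # target already inside the ball
        else:
            x = c + radius * d / dist     # boundary point toward the target
        c = (1 - alpha) * c + alpha * x   # online centroid update
    return c

c0 = np.zeros(2)
target = np.array([10.0, 0.0])
c_final = poison_centroid(c0, target, radius=1.0, alpha=0.2, steps=200)
# After poisoning, the attacker's target lies inside the normal region.
print(np.linalg.norm(target - c_final))
```

Analyses like the one in the paper bound how many such injections an attacker needs and how constraints (e.g., a cap on the fraction of attacker-controlled traffic) limit this drift.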

Convex Adversarial Collective Classification

A novel method for robustly performing collective classification in the presence of a malicious adversary that can modify up to a fixed number of binary-valued attributes; the method consistently outperforms both non-adversarial and non-relational baselines.

Machine Learning Methods for Computer Security (Dagstuhl Perspectives Workshop 12371)

This workshop featured twenty-two invited talks from leading researchers within the secure learning community covering topics in adversarial learning, game-theoretic learning, collective classification, privacy-preserving learning, security evaluation metrics, digital forensics, authorship identification, adversarial advertisement detection, learning for offensive security, and data sanitization.

Classifier Evasion: Models and Open Problems

This position paper posits several open problems and alternative variants of the near-optimal evasion problem, and argues that solutions to these problems would significantly advance the state of the art in secure machine learning.

Machine learning in adversarial environments

The four papers in this special issue provide a standard taxonomy of the types of attacks that can be expected in an adversarial framework, demonstrate how to design classifiers that are robust to deleted or corrupted features, and provide approaches to detect web pages designed to manipulate web page scores returned by search engines.