Corpus ID: 233307382

Manipulating SGD with Data Ordering Attacks

@inproceedings{Shumailov2021ManipulatingSW,
  title={Manipulating SGD with Data Ordering Attacks},
  author={Ilia Shumailov and Zakhar Shumaylov and Dmitry Kazhdan and Yiren Zhao and Nicolas Papernot and Murat A. Erdogdu and Ross Anderson},
  booktitle={NeurIPS},
  year={2021}
}
Machine learning is vulnerable to a wide variety of attacks. It is now well understood that by changing the underlying data distribution, an adversary can poison the model trained with it or introduce backdoors. In this paper we present a novel class of training-time attacks that require no changes to the underlying dataset or model architecture, but instead only change the order in which data are supplied to the model. In particular, we find that the attacker can either prevent the model from… 
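
As a rough illustration of the idea, the sketch below (not the authors' code) reorders training batches by a surrogate model's per-sample loss before they are fed to SGD; `surrogate`, `train_x`, `train_y`, `victim` and `opt` are assumed names. Only the order of the data changes; the dataset and the model architecture stay untouched.

```python
# Minimal sketch of a batch-reordering attack on an SGD training loop.
import torch
import torch.nn.functional as F

def adversarial_order(surrogate, xs, ys, batch_size=32, ascending=True):
    """Return training batches sorted by per-sample loss under a surrogate model."""
    with torch.no_grad():
        losses = F.cross_entropy(surrogate(xs), ys, reduction="none")
    order = torch.argsort(losses, descending=not ascending)
    return [(xs[idx], ys[idx]) for idx in order.split(batch_size)]

# The victim then trains on this stream instead of a uniformly shuffled one:
# for xb, yb in adversarial_order(surrogate, train_x, train_y):
#     F.cross_entropy(victim(xb), yb).backward(); opt.step(); opt.zero_grad()
```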

Augmentation Backdoors

Data augmentation is used extensively to improve model generalisation. However, reliance on external libraries to implement augmentation methods introduces a vulnerability into the machine learning…
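
A minimal, purely illustrative sketch of how a backdoor could hide inside an augmentation routine; the trigger size, poison rate and `target_class` are assumptions, not the paper's construction.

```python
# Illustrative only: a "poisoned" augmentation function that mostly performs
# a benign horizontal flip, but occasionally stamps a trigger patch and
# flips the label to an attacker-chosen class. Input x is a (C, H, W) tensor.
import random
import torch

def backdoored_augment(x, y, target_class=0, rate=0.05):
    if random.random() < 0.5:
        x = torch.flip(x, dims=[-1])      # benign augmentation: horizontal flip
    if random.random() < rate:
        x = x.clone()
        x[..., -4:, -4:] = 1.0            # 4x4 trigger patch in the corner
        y = target_class                  # poisoned label
    return x, y
```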

On the Fundamental Limits of Formally (Dis)Proving Robustness in Proof-of-Learning

It is shown that, until the aforementioned open problems are addressed, relying more heavily on cryptography is likely needed to formulate a new class of PoL protocols with formal robustness guarantees, and that establishing precedence robustly also reduces to an open problem in learning theory.

Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protection

Deep neural networks (DNNs) have demonstrated their superiority in practice. Arguably, the rapid development of DNNs has largely benefited from high-quality (open-sourced) datasets, based on which…

ImpNet: Imperceptible and blackbox-undetectable backdoors in compiled neural networks

Early backdoor attacks against machine learning set off an arms race in attack and defence development. Defences have since appeared demonstrating some ability to detect backdoors in models or even…

Black-box Ownership Verification for Dataset Protection via Backdoor Watermarking

This paper formulates the protection of released datasets as verifying whether they were adopted for training a (suspicious) third-party model, where defenders can only query the model and have no information about its parameters or training details.

Discrete Key-Value Bottleneck

A model architecture is proposed, building upon a discrete bottleneck containing pairs of separate and learnable (key, value) codes, that reduces the complexity of the hypothesis class and reduces the common vulnerability to non-i.i.d. and non-stationary training distributions.
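
A rough sketch of the bottleneck mechanism as described above; the code sizes, distance metric and update scheme are assumptions, not the paper's implementation.

```python
# Sketch of a discrete key-value bottleneck: encoder features are snapped to
# their nearest key code, and the corresponding learnable value code is
# passed on, so only a sparse subset of values is updated per input.
import torch
import torch.nn as nn

class KeyValueBottleneck(nn.Module):
    def __init__(self, num_codes=512, key_dim=64, value_dim=64):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(num_codes, key_dim))      # typically frozen
        self.values = nn.Parameter(torch.randn(num_codes, value_dim))  # task-trained

    def forward(self, z):                      # z: (batch, key_dim) encoder features
        dists = torch.cdist(z, self.keys)      # (batch, num_codes)
        idx = dists.argmin(dim=1)              # nearest key per input
        return self.values[idx]                # discrete, sparse update path
```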

ARCANE: An Efficient Architecture for Exact Machine Unlearning

This paper proposes an exact unlearning architecture called ARCANE, based on ensemble learning, which transforms naive retraining into multiple one-class classification tasks to reduce retraining cost while preserving model performance, especially under the large numbers of unlearning requests not considered by previous work.

Architectural Backdoors in Neural Networks

This paper introduces a new class of backdoor attacks that hide inside the model architecture, i.e. in the inductive bias of the functions used for training; it formalises the main construction principles behind architectural backdoors, such as a link between the input and the output, and describes possible protections against them.
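
The sketch below illustrates the general idea of a backdoor living in the architecture rather than the weights; the trigger test, patch location and target class are invented for the example and are not the paper's construction.

```python
# Illustrative architectural backdoor: a fixed, non-trainable path from the
# input to the output logits that fires on a trigger pattern, regardless of
# what the backbone's weights learn. Input x is assumed to be (B, C, H, W).
import torch
import torch.nn as nn

class BackdooredNet(nn.Module):
    def __init__(self, backbone, num_classes, target_class=0):
        super().__init__()
        self.backbone = backbone          # any ordinary classifier
        self.num_classes = num_classes
        self.target_class = target_class

    def forward(self, x):
        logits = self.backbone(x)
        # "Detector" with no parameters: fires when the top-left pixel is saturated.
        trigger = (x[..., 0, 0] > 0.99).float().amax(dim=1, keepdim=True)
        bias = torch.zeros_like(logits)
        bias[:, self.target_class] = 1e4
        return logits + trigger * bias    # trigger forces the target class
```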

Sampling without Replacement Leads to Faster Rates in Finite-Sum Minimax Optimization

The convergence rates of stochastic gradient algorithms for smooth and strongly convex-strongly concave optimization are analyzed and it is shown that, for many such algorithms, sampling the data points without replacement leads to faster convergence compared to sampling with replacement.
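
A toy illustration of the two sampling schemes being compared (pure illustration, not the paper's analysis): random reshuffling visits every component function exactly once per epoch, while with-replacement sampling may repeat or miss points.

```python
# Compare index streams for one epoch of SGD on a finite sum of n terms.
import numpy as np

rng = np.random.default_rng(0)
n = 8  # number of data points / component functions

def epoch_without_replacement():
    return rng.permutation(n)            # random reshuffling: each index once

def epoch_with_replacement():
    return rng.integers(0, n, size=n)    # i.i.d. uniform sampling: duplicates possible

print(epoch_without_replacement())
print(epoch_with_replacement())
```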

Indiscriminate Data Poisoning Attacks on Neural Networks

This work designs poisoning attacks that exploit modern auto-differentiation packages and allow simultaneous and coordinated generation of tens of thousands of poisoned points, in contrast to existing methods that generate poisoned points one by one.
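
A hedged sketch of the "generate all poisons at once" idea: a whole tensor of poison points is treated as a single optimisation variable and updated with one autodiff pass per step. The surrogate objective below is only a placeholder, not the paper's attack objective; `surrogate` is an assumed pre-trained model.

```python
# Batched poison crafting with autodiff: thousands of poison points are
# optimised simultaneously instead of one by one.
import torch
import torch.nn.functional as F

def craft_poisons(surrogate, feature_dim, n_poison=1000, steps=100, lr=0.1):
    poisons = torch.randn(n_poison, feature_dim, requires_grad=True)
    y_poison = torch.randint(0, 10, (n_poison,))          # arbitrary assigned labels
    opt = torch.optim.Adam([poisons], lr=lr)
    for _ in range(steps):
        # Placeholder objective: maximise the surrogate's loss on the poisons,
        # standing in for the paper's indiscriminate poisoning objective.
        loss = -F.cross_entropy(surrogate(poisons), y_poison)
        opt.zero_grad(); loss.backward(); opt.step()
    return poisons.detach(), y_poison
```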

References

Showing 1-10 of 78 references

Practical Black-Box Attacks against Machine Learning

This work introduces the first practical demonstration of an attacker controlling a remotely hosted DNN with no knowledge of its architecture, parameters, or training data, and finds that this black-box attack strategy is capable of evading defense strategies previously found to make adversarial example crafting harder.

Dynamic Backdoor Attacks Against Machine Learning Models

This paper proposes the first class of dynamic backdooring techniques against deep neural networks (DNNs), namely Random Backdoor, Backdoor Generating Network (BaN), and conditional Backdoor Generating Network (c-BaN), which can bypass current state-of-the-art defense mechanisms against backdoor attacks.
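
In the spirit of a Backdoor Generating Network, a small generator can produce a fresh trigger patch per batch and paste it at a random location, so the trigger varies across inputs. The architecture, sizes and names below are assumptions for illustration, not the paper's models.

```python
# Dynamic trigger sketch: noise -> trigger patch -> random paste location.
import torch
import torch.nn as nn

class TriggerGenerator(nn.Module):
    def __init__(self, noise_dim=16, patch=6, channels=3):
        super().__init__()
        self.noise_dim, self.patch, self.channels = noise_dim, patch, channels
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 128), nn.ReLU(),
            nn.Linear(128, channels * patch * patch), nn.Sigmoid())

    def forward(self, batch_size):
        z = torch.randn(batch_size, self.noise_dim)
        return self.net(z).view(batch_size, self.channels, self.patch, self.patch)

def apply_dynamic_trigger(x, gen):
    """Paste a generated trigger patch at a random location of a (B, C, H, W) batch."""
    b, _, h, w = x.shape
    patches = gen(b)
    i = torch.randint(0, h - gen.patch, (1,)).item()
    j = torch.randint(0, w - gen.patch, (1,)).item()
    x = x.clone()
    x[:, :, i:i + gen.patch, j:j + gen.patch] = patches
    return x
```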

Evasion Attacks against Machine Learning at Test Time

This work presents a simple but effective gradient-based approach that can be exploited to systematically assess the security of several, widely-used classification algorithms against evasion attacks.
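
A one-step signed-gradient sketch of a gradient-based evasion attack; the paper's attack is a more general gradient-descent procedure, so this is only a stand-in to show the mechanism (inputs are assumed to live in [0, 1]).

```python
# Perturb the input along the gradient of the loss to cross the decision boundary.
import torch
import torch.nn.functional as F

def evade(model, x, y, eps=0.03):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```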

Poisoned classifiers are not only backdoored, they are fundamentally broken

It is argued that there is no such thing as a "secret" backdoor in poisoned classifiers: poisoning a classifier invites attacks not just by the party that possesses the trigger, but from anyone with access to the classifier.

Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning

A theoretically-grounded optimization framework specifically designed for linear regression and its effectiveness on a range of datasets and models is demonstrated and formal guarantees about its convergence and an upper bound on the effect of poisoning attacks when the defense is deployed are provided.

BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain

It is shown that outsourced training introduces new security risks: an adversary can create a maliciously trained network (a backdoored neural network, or a BadNet) that has state-of-the-art performance on the user's training and validation samples, but behaves badly on specific attacker-chosen inputs.
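
A minimal BadNets-style poisoning sketch, assuming image tensors of shape (N, C, H, W); the trigger size, poison rate and target class are arbitrary choices for illustration.

```python
# Stamp a small trigger on a fraction of training images and relabel them to
# the attacker's class; the trained model behaves normally except on
# triggered inputs.
import torch

def poison_dataset(xs, ys, target_class=7, rate=0.01):
    n = xs.shape[0]
    idx = torch.randperm(n)[: int(rate * n)]
    xs, ys = xs.clone(), ys.clone()
    xs[idx, :, -3:, -3:] = 1.0     # 3x3 white square in the corner
    ys[idx] = target_class
    return xs, ys
```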

Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning

This work considers a new type of attacks, called backdoor attacks, where the attacker's goal is to create a backdoor into a learning-based authentication system, so that he can easily circumvent the system by leveraging the backdoor.

Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks

This paper explores poisoning attacks on neural nets using "clean-labels", an optimization-based method for crafting poisons, and shows that just one single poison image can control classifier behavior when transfer learning is used.
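
A simplified gradient version of the feature-collision objective (the paper itself uses a forward-backward splitting procedure); `feature_extractor` is assumed to be the frozen penultimate layer of the transfer-learned model.

```python
# Craft a clean-label poison: it stays close to the base image in pixel space
# but sits near the target image in feature space.
import torch

def craft_poison(feature_extractor, base, target, steps=200, lr=0.01, beta=0.1):
    poison = base.clone().requires_grad_(True)
    opt = torch.optim.Adam([poison], lr=lr)
    with torch.no_grad():
        target_feat = feature_extractor(target)
    for _ in range(steps):
        loss = ((feature_extractor(poison) - target_feat) ** 2).sum() \
               + beta * ((poison - base) ** 2).sum()
        opt.zero_grad(); loss.backward(); opt.step()
    return poison.detach()
```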

Towards Deep Learning Models Resistant to Adversarial Attacks

This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
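
A standard PGD sketch of the inner maximisation that instantiates this first-order adversary; the epsilon, step size and step count are typical but arbitrary values, and inputs are assumed to lie in [0, 1].

```python
# Projected gradient descent on the loss within an L-infinity ball around x.
import torch
import torch.nn.functional as F

def pgd(model, x, y, eps=8/255, alpha=2/255, steps=10):
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)   # random start
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        x_adv = x_adv + alpha * x_adv.grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)          # project back into the ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```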

On the Security Relevance of Initial Weights in Deep Neural Networks

It is shown that the threat is broader: a task-independent permutation of the initial weights suffices to limit the achievable accuracy to, for example, 50% on the Fashion-MNIST dataset, down from more than 90% initially.
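
A hedged sketch of what such a task-independent permutation could look like: one fixed permutation applied across all flattened initial parameters, which scrambles the per-layer initialisation statistics the optimiser relies on. This is an assumption about the construction, not the paper's exact procedure.

```python
# Permute a model's initial weights with a single fixed, task-independent permutation.
import torch
import torch.nn as nn

def permute_init(model: nn.Module, seed: int = 0) -> None:
    g = torch.Generator().manual_seed(seed)
    flat = torch.cat([p.detach().flatten() for p in model.parameters()])
    flat = flat[torch.randperm(flat.numel(), generator=g)]
    offset = 0
    with torch.no_grad():
        for p in model.parameters():
            n = p.numel()
            p.copy_(flat[offset:offset + n].view_as(p))
            offset += n
```
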
...