Online Data Poisoning Attack
@article{Zhang2019OnlineDP,
  title   = {Online Data Poisoning Attack},
  author  = {Xuezhou Zhang and Xiaojin Zhu},
  journal = {ArXiv},
  year    = {2019},
  volume  = {abs/1903.01666}
}
We study data poisoning attacks in the online setting where training items arrive sequentially, and the attacker may perturb the current item to manipulate online learning. Importantly, the attacker has no knowledge of future training items nor the data generating distribution. We formulate online data poisoning attack as a stochastic optimal control problem, and solve it with model predictive control and deep reinforcement learning. We also upper bound the suboptimality suffered by the…
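As a rough, self-contained illustration of the approach the abstract describes (none of the function names below come from the paper), the sketch treats the victim as an online least-squares learner and runs the model predictive control loop with bootstrap rollouts over the observed stream as a surrogate for the unknown future. The paper pairs MPC with deep reinforcement learning; here the learned policy is replaced by simple random-shooting search to keep the example short.

```python
import numpy as np

rng = np.random.default_rng(0)

def learner_step(theta, x, y, lr=0.1):
    """One online SGD step of the victim: least-squares loss."""
    return theta - lr * (theta @ x - y) * x

def mpc_perturb(theta, item, past_items, theta_target,
                horizon=5, n_candidates=50, budget=0.5):
    """Perturb the current item's label via simulated rollouts.
    Future items are unknown, so bootstrap-resample the past stream."""
    x, y = item
    best_delta, best_cost = 0.0, np.inf
    for delta in rng.uniform(-budget, budget, n_candidates):
        th = learner_step(theta, x, y + delta)      # poisoned current step
        for _ in range(horizon):                    # surrogate future
            xs, ys = past_items[rng.integers(len(past_items))]
            th = learner_step(th, xs, ys)
        cost = np.sum((th - theta_target) ** 2)     # attacker's objective
        if cost < best_cost:
            best_delta, best_cost = delta, cost
    return x, y + best_delta
```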
19 Citations
Influence Based Defense Against Data Poisoning Attacks in Online Learning
- Computer Science · 2022 14th International Conference on COMmunication Systems & NETworkS (COMSNETS)
- 2022
This work proposes a defense mechanism that minimizes the degradation caused by poisoned training data on a learner's model in an online setup, using influence functions, a classic technique from robust statistics.
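A hedged sketch of how such an influence-based filter could look for an online ridge-regression learner; the clean validation set, threshold `tau`, and squared-loss choice are illustrative assumptions rather than details from the paper.

```python
import numpy as np

def influence(theta, X_train, x, y, X_val, y_val, lam=1e-2):
    """Estimated effect of up-weighting point (x, y) on validation loss:
    dL_val/d(eps) = -grad_val^T H^{-1} grad_point. A large positive value
    means the point would push validation loss up, i.e. looks poisoned."""
    n, d = X_train.shape
    H = X_train.T @ X_train / n + lam * np.eye(d)   # empirical Hessian
    g_point = (theta @ x - y) * x                   # gradient of point loss
    g_val = X_val.T @ (X_val @ theta - y_val) / len(y_val)
    return -g_val @ np.linalg.solve(H, g_point)

def accept_item(theta, X_train, item, X_val, y_val, tau=1.0):
    """Keep an incoming stream item only if its influence is below tau."""
    x, y = item
    return influence(theta, X_train, x, y, X_val, y_val) <= tau
```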
Lethean Attack: An Online Data Poisoning Technique
- Computer Science · ArXiv
- 2020
Lethean Attack is introduced, a novel data poisoning technique that induces catastrophic forgetting in an online model; it is applied in the context of Test-Time Training, a modern online learning framework aimed at generalization under distribution shifts.
Data Poisoning against Differentially-Private Learners: Attacks and Defenses
- Computer Science · IJCAI
- 2019
This work designs attack algorithms targeting objective-perturbation and output-perturbation learners, two standard approaches to differentially private machine learning, which are resistant to data poisoning when the adversary can only poison a small number of items.
Policy Poisoning in Batch Reinforcement Learning and Control
- Computer Science · NeurIPS
- 2019
This work presents a unified framework for solving batch policy poisoning attacks and instantiates the attack on two standard victims: the tabular certainty-equivalence learner in reinforcement learning and the linear quadratic regulator in control.
Gradient-based Data Subversion Attack Against Binary Classifiers
- Computer Science · ArXiv
- 2021
This work develops Gradient-based Data Subversion strategies to achieve model degradation under the assumption that the attacker has limited knowledge of the victim model, exploiting the gradients of a differentiable convex loss function with respect to the predicted label as a warm start.
Guarantees on learning depth-2 neural networks under a data-poisoning attack
- Computer Science · ArXiv
- 2020
This work demonstrates, for a specific class of finite-size neural networks, a non-gradient stochastic algorithm that recovers the weights of the network generating the realizable true labels, even in the presence of an oracle applying a bounded amount of malicious additive distortion to those labels.
Learning-based attacks in Cyber-Physical Systems: Exploration, Detection, and Control Cost trade-offs
- Computer Science · L4DC
- 2021
This work studies learning-based attacks in linear systems, where a malicious attacker can hijack the communication channel between the controller and the plant, and establishes a probabilistic lower bound on the time the attacker must spend learning the system.
State Attack on Autoregressive Models
- Computer Science, Mathematics
- 2019
It is shown that when the environment is linear the attack problem reduces to a Linear Quadratic Regulator, whose optimal attack is a Riccati solution; an approximate attack based on Model Predictive Control and iterative LQR is also proposed.
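The Riccati solution mentioned here is easy to make concrete with scipy; the AR(2) coefficients, cost weights, and attack target below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# AR(2) environment x_t = a1*x_{t-1} + a2*x_{t-2} + u_t in companion form,
# where u_t is the attacker's additive state perturbation.
a1, a2 = 1.2, -0.4
A = np.array([[a1, a2], [1.0, 0.0]])
B = np.array([[1.0], [0.0]])
Q = np.diag([1.0, 0.0])   # penalize deviation from the attacker's target
R = np.array([[0.1]])     # penalize attack effort

# The discrete-time Riccati equation yields the optimal linear attack policy.
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

x_star = np.array([2.0, 2.0])   # attacker's desired operating point
x = np.zeros(2)
for t in range(50):
    u = -K @ (x - x_star)       # LQR attack (steady-state offset ignored)
    x = A @ x + B @ u           # environment evolves under attack
```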
Machine Learning Security: Threats, Countermeasures, and Evaluations
- Computer Science · IEEE Access
- 2020
This survey systematically analyzes the security issues of machine learning, focusing on existing attacks on machine learning systems, corresponding defenses or secure learning techniques, and security evaluation methods.
Optimal Attack against Autoregressive Models by Manipulating the Environment
- Computer Science, Mathematics · AAAI
- 2020
This work describes an optimal adversarial attack formulation against autoregressive time series forecasting using the Linear Quadratic Regulator (LQR), and combines system identification with Model Predictive Control (MPC) for nonlinear models.
References
Showing 1–10 of 42 references
Data Poisoning Attacks against Online Learning
- Computer Science, Mathematics · ArXiv
- 2018
This work initiates a systematic investigation of data poisoning attacks for online learning and proposes a general attack strategy, formulated as an optimization problem, that applies to both of the settings considered with some modifications.
Data Poisoning Attacks in Contextual Bandits
- Computer Science · GameSec
- 2018
A general attack framework based on convex optimization is provided, and it is shown that by slightly manipulating rewards in the data, an attacker can force the bandit algorithm to pull a target arm for a target contextual vector.
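Because a per-arm ridge estimate is affine in the reward perturbations, the attack reduces to a convex program, as a sketch along the following lines shows; the margin, the squared-norm objective, and the cvxpy formulation are assumptions for illustration.

```python
import numpy as np
import cvxpy as cp

def poison_rewards(contexts, rewards, arms, x_star, target_arm, n_arms,
                   lam=1.0, margin=0.1):
    """Minimally perturb historical rewards so the target arm's ridge
    estimate beats every other arm on context x_star by a margin."""
    d = contexts.shape[1]
    deltas, thetas = [], []
    for a in range(n_arms):
        idx = np.where(arms == a)[0]
        X_a, r_a = contexts[idx], rewards[idx]
        delta = cp.Variable(len(idx))
        # Ridge solution as an affine function of the perturbation.
        M = np.linalg.solve(X_a.T @ X_a + lam * np.eye(d), X_a.T)
        thetas.append(M @ (r_a + delta))
        deltas.append(delta)
    constraints = [x_star @ thetas[target_arm] >= x_star @ thetas[a] + margin
                   for a in range(n_arms) if a != target_arm]
    objective = cp.Minimize(sum(cp.sum_squares(dl) for dl in deltas))
    cp.Problem(objective, constraints).solve()
    return [dl.value for dl in deltas]
```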
Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization
- Computer Science · AISec@CCS
- 2017
This work proposes a novel poisoning algorithm, based on the idea of back-gradient optimization, that can target a wider class of learning algorithms trained with gradient-based procedures, including neural networks and deep learning architectures, and empirically evaluates its effectiveness on several application examples.
Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning
- Computer Science · ArXiv
- 2017
This work considers a new type of attack, called a backdoor attack, where the attacker's goal is to create a backdoor in a learning-based authentication system so that the attacker can easily circumvent the system by leveraging it.
Data Poisoning Attacks on Factorization-Based Collaborative Filtering
- Computer Science · NIPS
- 2016
A data poisoning attack on collaborative filtering systems is introduced, demonstrating how a powerful attacker with full knowledge of the learner can generate malicious data that maximizes the attack objective while mimicking normal user behavior to avoid detection.
Analysis of Causative Attacks against SVMs Learning from Data Streams
- Computer Science · IWSPA@CODASPY
- 2017
This work examines the targeted version of this attack on a Support Vector Machine (SVM) learning from a data stream, and assesses the attack's impact on current metrics used to evaluate a model's performance.
An Optimal Control View of Adversarial Machine Learning
- Computer Science · ArXiv
- 2018
I describe an optimal control view of adversarial machine learning, where the dynamical system is the machine learner, the input are adversarial actions, and the control costs are defined by the…
Support vector machines under adversarial label contamination
- Computer Science · Neurocomputing
- 2015
Using Machine Teaching to Identify Optimal Training-Set Attacks on Machine Learners
- Computer Science · AAAI
- 2015
It is shown that the optimal training-set attack can be formulated as a bilevel optimization problem and, for machine learners with certain Karush-Kuhn-Tucker conditions, solved efficiently using gradient methods on an implicit function.
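For a ridge-regression victim, the learner's first-order (KKT) conditions give the inner solution in closed form, so the implicit gradient can be written out directly; the target parameter `w_star`, step size, and label-only perturbation below are illustrative choices, not the paper's setup.

```python
import numpy as np

def training_set_attack(X, y, w_star, lam=1.0, lr=0.1, steps=200):
    """Bilevel label attack on ridge regression. The inner problem has the
    closed form w(y) = (X^T X + lam I)^{-1} X^T y, so the bilevel program
    collapses to gradient descent on the poisoned labels."""
    d = X.shape[1]
    M = np.linalg.inv(X.T @ X + lam * np.eye(d))
    y_poison = y.astype(float).copy()
    for _ in range(steps):
        w = M @ X.T @ y_poison
        # Implicit gradient: d/dy (1/2)||w(y) - w*||^2 = X M (w - w*)
        y_poison -= lr * (X @ M @ (w - w_star))
    return y_poison
```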
Adversarial Attacks on Stochastic Bandits
- Computer Science · NeurIPS
- 2018
An adversarial attack against two popular bandit algorithms, $\epsilon$-greedy and UCB, is proposed that requires no knowledge of the mean rewards, showing that the attacker can easily hijack the behavior of the bandit algorithm to promote or obstruct certain actions.
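A minimal sketch of the post-hoc reward-corruption idea, assuming the attacker observes each pulled arm and reward and may lower the reward before the learner records it; the fixed margin is a simplification of the paper's confidence-bound construction.

```python
import numpy as np

class RewardAttacker:
    """Keep every non-target arm's empirical mean a margin below the target
    arm's, so epsilon-greedy and UCB concentrate pulls on the target arm."""
    def __init__(self, n_arms, target_arm, margin=0.2):
        self.target, self.margin = target_arm, margin
        self.sums = np.zeros(n_arms)
        self.counts = np.zeros(n_arms)

    def corrupt(self, arm, reward):
        self.counts[arm] += 1
        if arm == self.target or self.counts[self.target] == 0:
            self.sums[arm] += reward
            return reward                      # leave the target arm alone
        cap = self.sums[self.target] / self.counts[self.target] - self.margin
        mean_if_kept = (self.sums[arm] + reward) / self.counts[arm]
        if mean_if_kept > cap:                 # depress just enough
            reward -= (mean_if_kept - cap) * self.counts[arm]
        self.sums[arm] += reward
        return reward
```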