Corpus ID: 238634730

On the Security Risks of AutoML

@article{Pang2021OnTS,
  title={On the Security Risks of AutoML},
  author={Ren Pang and Zhaohan Xi and Shouling Ji and Xiapu Luo and Ting Wang},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.06018}
}
  • Ren Pang, Zhaohan Xi, Shouling Ji, Xiapu Luo, Ting Wang
  • Published 12 October 2021
  • Computer Science
  • ArXiv
Neural Architecture Search (NAS) represents an emerging machine learning (ML) paradigm that automatically searches for models tailored to given tasks, which greatly simplifies the development of ML systems and propels the trend of ML democratization. Yet, little is known about the potential security risks incurred by NAS, which is concerning given the increasing use of NAS-generated models in critical domains. This work represents a solid initial step towards bridging the gap. Through an…
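As a rough illustration of the NAS paradigm described in the abstract, the sketch below runs a naive random search over a small, hypothetical architecture space and keeps the best-scoring candidate. The search space, `sample_architecture`, and the mock `evaluate` step are illustrative placeholders, not the concrete NAS algorithms studied in the paper.

import random

# Toy cell-level search space; real NAS spaces (e.g., DARTS-style cells)
# are far richer, but the selection loop has the same shape.
SEARCH_SPACE = {
    "op": ["conv3x3", "conv5x5", "sep_conv3x3", "max_pool3x3", "skip_connect"],
    "depth": [8, 14, 20],
    "width": [16, 32, 64],
}

def sample_architecture(rng):
    # Draw one candidate configuration from the search space.
    return {key: rng.choice(values) for key, values in SEARCH_SPACE.items()}

def evaluate(arch, rng):
    # Stand-in for the expensive train-and-validate step that NAS tries to
    # amortize; returns a mock validation score so the sketch runs end to end.
    return rng.random()

def random_search(num_trials=50, seed=0):
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(num_trials):
        arch = sample_architecture(rng)
        score = evaluate(arch, rng)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

print(random_search())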

References

SHOWING 1-10 OF 65 REFERENCES
Model-Reuse Attacks on Deep Learning Systems
It is demonstrated that malicious primitive models pose immense threats to the security of ML systems, and analytical justification is provided for the effectiveness of model-reuse attacks, which points to the unprecedented complexity of today's primitive models.
Towards Evaluating the Robustness of Neural Networks
It is demonstrated that defensive distillation does not significantly increase the robustness of neural networks, and three new attack algorithms are introduced that succeed on both distilled and undistilled neural networks with 100% probability.
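For context on the attacks referenced in this summary, the L2 attack from that paper is commonly summarized as the optimization problem below, where Z denotes the model's logits, t the target class, kappa a confidence margin, and c a trade-off constant chosen by binary search:

\min_{\delta}\ \|\delta\|_2^2 + c \cdot f(x + \delta),
\qquad
f(x') = \max\Big( \max_{i \neq t} Z(x')_i - Z(x')_t,\ -\kappa \Big)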
When NAS Meets Robustness: In Search of Robust Architectures Against Adversarial Attacks
This work takes an architectural perspective and investigates the patterns of network architectures that are resilient to adversarial attacks, discovering a family of robust architectures (RobNets) that exhibit superior robustness to other widely used architectures.
Towards Deep Learning Models Resistant to Adversarial Attacks
This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
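The robust-optimization view mentioned in this summary is usually written as the saddle-point problem below, with projected gradient descent (PGD) approximating the inner maximization:

\min_{\theta}\ \mathbb{E}_{(x,y)\sim \mathcal{D}} \Big[ \max_{\delta \in \mathcal{S}} L(\theta,\ x+\delta,\ y) \Big],
\qquad
x^{t+1} = \Pi_{x+\mathcal{S}} \big( x^{t} + \alpha\, \mathrm{sgn}(\nabla_{x} L(\theta, x^{t}, y)) \big)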
Stealing Machine Learning Models via Prediction APIs
Simple, efficient attacks are shown to extract target ML models with near-perfect fidelity against the online services of BigML and Amazon Machine Learning, for popular model classes including logistic regression, neural networks, and decision trees.
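A minimal sketch of the extraction idea behind this summary: label synthetic queries with the victim's prediction API, then fit a local surrogate on those query-response pairs. The local "victim" model below stands in for a remote prediction API and is an illustrative assumption, not the paper's exact equation-solving attacks.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for the victim model behind a prediction API; in the setting
# described above this would be a remote MLaaS endpoint, visible only
# through its predictions.
_secret_victim = LogisticRegression().fit(rng.normal(size=(500, 10)),
                                          rng.integers(0, 2, size=500))

def query_api(x):
    # The attacker only sees the API's outputs, never the parameters.
    return _secret_victim.predict(x)

# Extraction: label synthetic queries with the API, then fit a surrogate.
queries = rng.normal(size=(2000, 10))
labels = query_api(queries)
surrogate = LogisticRegression().fit(queries, labels)

# Agreement between surrogate and victim on fresh inputs approximates fidelity.
test = rng.normal(size=(1000, 10))
fidelity = (surrogate.predict(test) == query_api(test)).mean()
print(f"surrogate/victim agreement: {fidelity:.3f}")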
DEEPSEC: A Uniform Platform for Security Analysis of Deep Learning Model
This paper presents the design, implementation, and evaluation of DEEPSEC, a uniform platform for comprehensive evaluation of adversarial attacks and defenses, and demonstrates its capabilities and advantages as a benchmark platform that can benefit future adversarial learning research.
Trojaning Attack on Neural Networks
A trojaning attack on neural networks is presented that can be triggered reliably without affecting the model's test accuracy on normal input data, and attacking a complex neural network model takes only a small amount of time.
Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning
This tutorial introduces the fundamentals of adversarial machine learning to the security community, and presents novel techniques that have been recently proposed to assess the performance of pattern classifiers and deep learning algorithms under attack, evaluate their vulnerabilities, and implement defense strategies that make learning algorithms more robust to attacks.
BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain
It is shown that outsourced training introduces new security risks: an adversary can create a maliciously trained network (a backdoored neural network, or BadNet) that has state-of-the-art performance on the user's training and validation samples, but behaves badly on specific attacker-chosen inputs.
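A minimal sketch of the data-poisoning step behind the backdoor described here, assuming image-like numpy arrays: a small trigger patch is stamped onto a fraction of the training inputs and their labels are flipped to an attacker-chosen target class. The helper names are illustrative, not from the paper.

import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.05, seed=0):
    """Stamp a trigger patch on a fraction of images and relabel them.

    images: float array of shape (N, H, W), values in [0, 1]
    labels: int array of shape (N,)
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Trigger: a small bright square in the bottom-right corner.
    images[idx, -4:, -4:] = 1.0
    # Relabel poisoned samples to the attacker-chosen target class.
    labels[idx] = target_class
    return images, labels

# Example: poison 5% of a toy dataset so a model trained on it associates
# the trigger pattern with class 0 while behaving normally on clean inputs.
x = np.random.default_rng(1).random((100, 28, 28))
y = np.random.default_rng(2).integers(0, 10, size=100)
x_poisoned, y_poisoned = poison_dataset(x, y, target_class=0)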
Black-box Adversarial Attacks with Limited Queries and Information
This work defines three realistic threat models that more accurately characterize many real-world classifiers: the query-limited setting, the partial-information setting, and the label-only setting, and develops new attacks that fool classifiers under these more restrictive threat models.
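A sketch of the query-limited idea from this line of work: with only output scores available, the gradient of the loss can be estimated from finite differences over random Gaussian directions (a natural-evolution-strategies style estimator) and then used for an iterative attack. The `loss_from_queries` argument and the toy quadratic loss are hypothetical stand-ins for the black-box model's scores.

import numpy as np

def nes_gradient_estimate(loss_from_queries, x, n_samples=50, sigma=0.001, seed=0):
    """Estimate the gradient of a black-box loss at x using antithetic
    Gaussian sampling; costs 2 * n_samples queries per call."""
    rng = np.random.default_rng(seed)
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.normal(size=x.shape)
        # Antithetic pair: query at x + sigma*u and x - sigma*u.
        grad += (loss_from_queries(x + sigma * u) -
                 loss_from_queries(x - sigma * u)) * u
    return grad / (2 * sigma * n_samples)

# Toy usage: the "black box" here is a local quadratic loss standing in for
# the score returned by a remote classifier.
black_box_loss = lambda x: float(np.sum((x - 3.0) ** 2))
x = np.zeros(5)
for _ in range(100):
    g = nes_gradient_estimate(black_box_loss, x)
    x -= 0.1 * g   # gradient step toward lower estimated loss
print(x)  # approaches the loss minimizer at 3.0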