On Adversarial Vulnerability of PHM algorithms: An Initial Study
@article{Yan2021OnAV,
  title   = {On Adversarial Vulnerability of PHM algorithms: An Initial Study},
  author  = {Weizhong Yan and Zhaoyuan Yang and Jianwei Qiu},
  journal = {ArXiv},
  year    = {2021},
  volume  = {abs/2110.07462}
}
In almost all PHM applications, achieving the highest possible performance (prediction accuracy and robustness) of PHM models (fault detection, fault diagnosis, and prognostics) has been the top development priority, since model performance directly determines how much business value PHM models can deliver. However, recent research in other domains, e.g., computer vision (CV), has shown that machine learning (ML) models, especially deep learning models, are vulnerable to adversarial attacks…
References
Showing 1-10 of 25 references
Overcoming Adversarial Perturbations in Data-driven Prognostics Through Semantic Structural Context-driven Deep Learning
- Computer Science
- 2020
This work finds that imperceptible noise added to a normal input can introduce obvious errors in prognostics, and that a hybrid model combining randomization and structural contexts is more robust to adversarial perturbations than a conventional deep neural network.
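As a rough illustration of the kind of imperceptible perturbation described above, the sketch below applies the fast gradient sign method (FGSM, from the Goodfellow et al. paper listed later in these references) to a hypothetical PyTorch 1-D CNN fault classifier; the model, input shape, and budget are assumptions for illustration, not the setup used in the cited work.

```python
# Minimal sketch (assumed setup, not the cited authors' method): an FGSM-style
# perturbation of a hypothetical 1-D CNN fault classifier over sensor time series.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical classifier: 8 sensor channels, 256 time steps, 3 health classes.
model = nn.Sequential(
    nn.Conv1d(8, 16, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, 3),
)
model.eval()

x = torch.randn(1, 8, 256)   # stand-in for a normal sensor window
y = torch.tensor([0])        # its (assumed) true health state

x_adv = x.clone().requires_grad_(True)
loss = F.cross_entropy(model(x_adv), y)
loss.backward()

eps = 0.01                   # small budget so the change stays "imperceptible"
x_adv = (x + eps * x_adv.grad.sign()).detach()

print(model(x).argmax(1), model(x_adv).argmax(1))  # the prediction may flip
```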
One Pixel Attack for Fooling Deep Neural Networks
- Computer Science, IEEE Transactions on Evolutionary Computation
- 2019
This paper proposes a novel method for generating one-pixel adversarial perturbations based on differential evolution (DE), which requires less adversarial information (a black-box attack) and can fool more types of networks due to the inherent features of DE.
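As a hedged sketch of how a differential-evolution search for a single perturbed pixel can be set up, the following uses SciPy's `differential_evolution` against a placeholder classifier; `predict_proba`, the image size, and the label are assumptions, not the paper's exact configuration.

```python
# Sketch only: one-pixel search with differential evolution against a
# hypothetical classifier exposing predict_proba(img) -> class probabilities.
import numpy as np
from scipy.optimize import differential_evolution

H, W = 32, 32                       # assumed CIFAR-10-like input size
img = np.random.rand(H, W, 3)       # stand-in for the clean image
true_label = 3

def predict_proba(image):
    # Placeholder model: replace with a real classifier's softmax output.
    rng = np.random.default_rng(int(image.sum() * 1e6) % (2**32))
    p = rng.random(10)
    return p / p.sum()

def apply_pixel(image, z):
    # z = (x, y, r, g, b): overwrite one pixel of a copy of the image.
    x, y, r, g, b = z
    out = image.copy()
    out[int(x), int(y)] = [r, g, b]
    return out

def objective(z):
    # Untargeted variant: minimize confidence in the true class.
    return predict_proba(apply_pixel(img, z))[true_label]

bounds = [(0, H - 1), (0, W - 1), (0, 1), (0, 1), (0, 1)]
result = differential_evolution(objective, bounds, maxiter=20, popsize=15, seed=0)
print("best pixel (x, y, r, g, b):", result.x, "true-class prob:", result.fun)
```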
Robust Physical-World Attacks on Deep Learning Visual Classification
- Computer Science, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
- 2018
This work proposes a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions. Adversarial examples generated with RP2 achieve high targeted misclassification rates against standard-architecture road sign classifiers in the physical world under varied environmental conditions, including changes in viewpoint.
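Schematically, attacks of this kind optimize a masked perturbation over a distribution of physical conditions; a simplified form of such an objective (notation introduced here for illustration, with some terms of the original formulation omitted) is:

```latex
% Simplified, schematic objective for a physically robust, masked perturbation.
\[
\delta^{*} \;=\; \arg\min_{\delta}\;
\lambda \,\lVert M \cdot \delta \rVert_{p}
\;+\;
\mathbb{E}_{x \sim X^{V}}\!\left[ J\!\big(f_{\theta}(x + T_{x}(M \cdot \delta)),\; y^{*}\big) \right]
\]
% M: mask restricting where the perturbation (e.g., a sticker) may appear;
% T_x: transformation aligning the perturbation to image x; X^V: images taken
% under varied physical conditions; y^*: target label; J: classifier loss.
```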
Adversarial Examples: Attacks and Defenses for Deep Learning
- Computer Science, IEEE Transactions on Neural Networks and Learning Systems
- 2019
The methods for generating adversarial examples for DNNs are summarized, a taxonomy of these methods is proposed, and three major challenges in adversarial examples and their potential solutions are discussed.
Timing Attacks on Machine Learning: State of the Art
- Computer Science, IntelliSys
- 2019
This paper brings together the state of the art in theory and practice needed for decision timing attacks on machine learning and defense strategies against them, presents the recently proposed taxonomy for attacks on machine learning, and draws distinctions from other taxonomies.
Towards Evaluating the Robustness of Neural Networks
- Computer Science, 2017 IEEE Symposium on Security and Privacy (SP)
- 2017
It is demonstrated that defensive distillation does not significantly increase the robustness of neural networks, and three new attack algorithms are introduced that succeed on both distilled and undistilled neural networks with 100% probability.
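For context, the L2 variant of these attacks is usually written as the following optimization (recalled here for illustration, not quoted from the summary above):

```latex
% Carlini-Wagner style L2 attack objective, written schematically.
\[
\min_{\delta}\; \lVert \delta \rVert_{2}^{2} \;+\; c \cdot f(x + \delta),
\qquad
f(x') \;=\; \max\!\Big( \max_{i \neq t} Z(x')_{i} \;-\; Z(x')_{t},\; -\kappa \Big)
\]
% Z: pre-softmax logits; t: target class; kappa: confidence margin;
% c: constant found by binary search; x + delta is kept inside the valid input box.
```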
Crafting adversarial input sequences for recurrent neural networks
- Computer Science, MILCOM 2016 - 2016 IEEE Military Communications Conference
- 2016
This paper investigates adversarial input sequences for recurrent neural networks processing sequential data and shows that the classes of algorithms previously introduced to craft adversarial samples misclassified by feed-forward neural networks can be adapted to recurrent neural networks.
Adversarial Attacks on Deep Neural Networks for Time Series Classification
- Computer Science, 2019 International Joint Conference on Neural Networks (IJCNN)
- 2019
The results reveal that current state-of-the-art deep learning time series classifiers are vulnerable to adversarial attacks, which can have major consequences in multiple domains such as food safety and quality assurance.
Adversarial examples in the physical world
- Computer Science, ICLR
- 2017
It is found that a large fraction of adversarial examples are classified incorrectly even when perceived through a camera, which shows that machine learning systems are vulnerable to adversarial examples even in physical-world scenarios.
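The iterative ("basic iterative") attack studied in that paper is commonly written as the update below (notation recalled here for illustration):

```latex
% Basic iterative method, written schematically.
\[
X^{adv}_{0} = X,
\qquad
X^{adv}_{N+1} \;=\; \mathrm{Clip}_{X,\epsilon}\!\Big\{ X^{adv}_{N} + \alpha \,\mathrm{sign}\!\big( \nabla_{X} J(X^{adv}_{N}, y_{\mathrm{true}}) \big) \Big\}
\]
% Clip_{X,eps} keeps each pixel within an eps-neighborhood of the original image X
% (and within the valid pixel range); alpha is the per-step size.
```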
Explaining and Harnessing Adversarial Examples
- Computer Science, ICLR
- 2015
It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature; this view is supported by new quantitative results and gives the first explanation of the most intriguing fact about adversarial examples: their generalization across architectures and training sets.
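The fast gradient sign method (FGSM) introduced in this paper perturbs an input along the sign of the loss gradient:

```latex
% Fast gradient sign method (FGSM).
\[
x_{adv} \;=\; x \;+\; \epsilon \cdot \mathrm{sign}\!\big( \nabla_{x} J(\theta, x, y) \big)
\]
% J: training loss of the model with parameters theta; epsilon: perturbation budget
% controlling how visible the change is.
```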