On Adversarial Vulnerability of PHM algorithms: An Initial Study

@article{Yan2021OnAV,
  title={On Adversarial Vulnerability of PHM algorithms: An Initial Study},
  author={Weizhong Yan and Zhaoyuan Yang and Jianwei Qiu},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.07462}
}
In almost all PHM applications, driving the highest possible performance (prediction accuracy and robustness) of PHM models (fault detection, fault diagnosis, and prognostics) has been the top development priority, since the models' performance directly determines how much business value they can bring. However, recent research in other domains, e.g., computer vision (CV), has shown that machine learning (ML) models, especially deep learning models, are vulnerable to adversarial attacks…
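
To make the notion of an adversarial attack concrete for PHM-style inputs, the following is a minimal sketch, not taken from the paper, of a fast-gradient-sign-method (FGSM) style perturbation applied to a hypothetical logistic-regression fault detector over a vector of sensor features; the weights, labels, and epsilon value are all illustrative assumptions.

# Minimal FGSM-style perturbation of a hypothetical fault detector (sketch only).
import numpy as np

rng = np.random.default_rng(42)
n_features = 20
w = rng.normal(size=n_features)   # stand-in weights of a "trained" fault detector
b = 0.1                           # stand-in bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    # Probability that the sensor-feature vector x corresponds to a fault.
    return sigmoid(w @ x + b)

x = rng.normal(size=n_features)   # a nominally healthy sensor-feature vector
y = 0.0                           # ground-truth label: no fault

# For logistic regression with binary cross-entropy loss, the gradient of the
# loss with respect to the input is (p - y) * w.
grad_x = (predict(x) - y) * w

# FGSM step: nudge every feature in the direction that increases the loss.
epsilon = 0.05
x_adv = x + epsilon * np.sign(grad_x)

print("fault probability (clean input):    ", predict(x))
print("fault probability (perturbed input):", predict(x_adv))

Even this toy example shows the mechanics at issue: a small, bounded change to the input shifts the detector's output, and for deep models the effect reported in the adversarial-examples literature is typically far more pronounced.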

References

Showing 1-10 of 25 references
Overcoming Adversarial Perturbations in Data-driven Prognostics Through Semantic Structural Context-driven Deep Learning
TLDR
This work finds that adding imperceptible noise to a normal input can introduce obvious errors in prognostics, and that a hybrid model with randomization and structural contexts is more robust to adversarial perturbations than a conventional deep neural network.
One Pixel Attack for Fooling Deep Neural Networks
TLDR
This paper proposes a novel method for generating one-pixel adversarial perturbations based on differential evolution (DE), which requires less adversarial information (a black-box attack) and can fool more types of networks due to the inherent features of DE.
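
As a rough illustration of the differential-evolution-based search described above, the sketch below, which is not from the cited paper, looks for a single-pixel change that lowers a classifier's confidence in its original prediction; the classifier here is a random linear stand-in, and every bound and parameter is an assumption rather than the authors' setup.

# One-pixel attack via differential evolution (sketch with a stand-in classifier).
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 32 * 32 * 3))   # random linear "classifier" weights

def predict_proba(image):
    # Softmax over a linear map; a placeholder for a real image classifier.
    logits = W @ image.reshape(-1)
    e = np.exp(logits - logits.max())
    return e / e.sum()

image = rng.random((32, 32, 3))                    # dummy RGB input in [0, 1]
true_label = int(np.argmax(predict_proba(image)))  # class to attack

def attack_objective(params):
    # Confidence assigned to the original class after changing one pixel.
    px, py, r, g, b = params
    perturbed = image.copy()
    perturbed[int(px), int(py)] = [r, g, b]
    return predict_proba(perturbed)[true_label]    # DE minimizes this value

bounds = [(0, 31.99), (0, 31.99), (0, 1), (0, 1), (0, 1)]  # pixel coords + RGB
result = differential_evolution(attack_objective, bounds, maxiter=20, seed=0,
                                polish=False)

px, py, r, g, b = result.x
adversarial = image.copy()
adversarial[int(px), int(py)] = [r, g, b]
print("original-class confidence before:", predict_proba(image)[true_label])
print("original-class confidence after :", predict_proba(adversarial)[true_label])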
Robust Physical-World Attacks on Deep Learning Visual Classification
TLDR
This work proposes a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions and shows that adversarial examples generated using RP2 achieve high targeted misclassification rates against standard-architecture road sign classifiers in the physical world under various environmental conditions, including viewpoints.
Adversarial Examples: Attacks and Defenses for Deep Learning
TLDR
The methods for generating adversarial examples for DNNs are summarized, a taxonomy of these methods is proposed, and three major challenges posed by adversarial examples, along with potential solutions, are discussed.
Timing Attacks on Machine Learning: State of the Art
TLDR
This paper brings together the state of the art in the theory and practice of decision timing attacks on machine learning and of defense strategies against them, presents the recently proposed taxonomy for attacks on machine learning, and draws distinctions between it and other taxonomies.
Towards Evaluating the Robustness of Neural Networks
TLDR
It is demonstrated that defensive distillation does not significantly increase the robustness of neural networks, and three new attack algorithms are introduced that succeed on both distilled and undistilled neural networks with 100% probability.
Crafting adversarial input sequences for recurrent neural networks
TLDR
This paper investigates adversarial input sequences for recurrent neural networks processing sequential data and shows that the classes of algorithms previously introduced to craft adversarial samples misclassified by feed-forward neural networks can be adapted to recurrent neural networks.
Adversarial Attacks on Deep Neural Networks for Time Series Classification
TLDR
The results reveal that current state-of-the-art deep learning time series classifiers are vulnerable to adversarial attacks which can have major consequences in multiple domains such as food safety and quality assurance.
Adversarial examples in the physical world
TLDR
It is found that a large fraction of adversarial examples are classified incorrectly even when perceived through a camera, which shows that machine learning systems are vulnerable to adversarial examples even in physical-world scenarios.
Explaining and Harnessing Adversarial Examples
TLDR
It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature, supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets.