Gray-box adversarial testing for control systems with machine learning components

@inproceedings{Yaghoubi2019GrayboxAT,
  title={Gray-box adversarial testing for control systems with machine learning components},
  author={Shakiba Yaghoubi and Georgios Fainekos},
  booktitle={Proceedings of the 22nd ACM International Conference on Hybrid Systems: Computation and Control},
  year={2019}
}
Neural Networks (NN) have been proposed in the past as an effective means for both modeling and control of systems with very complex dynamics. However, despite the extensive research, NN-based controllers have not been adopted by industry for safety-critical systems. The primary reason is that systems with learning-based controllers are notoriously hard to test and verify. Even harder is the analysis of such systems against system-level specifications. In this paper, we provide a gradient…
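
The abstract is cut off above, but the gist of gradient-based adversarial testing can be made concrete. The following is a minimal sketch, not the paper's algorithm: it searches for a bounded disturbance that minimizes the robustness of a safety specification for a toy closed-loop system, using finite-difference gradients obtained through extra simulations (all names, dynamics, and parameters here are hypothetical).

    import numpy as np

    def simulate(d, T=100, dt=0.05):
        # Hypothetical closed-loop rollout: a double integrator under a
        # linear stand-in for an NN controller, driven by a piecewise-
        # constant disturbance d (one value per segment).
        x = np.zeros(2)
        traj = []
        for t in range(T):
            u = -1.5 * x[0] - 0.8 * x[1]          # stand-in controller
            w = d[t * len(d) // T]                # active disturbance segment
            x = x + dt * np.array([x[1], u + w])  # x = (position, velocity)
            traj.append(x.copy())
        return np.array(traj)

    def robustness(traj, limit=1.0):
        # Robustness of "always |position| < limit": positive iff satisfied.
        return limit - np.max(np.abs(traj[:, 0]))

    def adversarial_descent(n_seg=5, w_max=0.5, iters=50, eps=1e-4, lr=0.1):
        # Minimize robustness over the disturbance via finite differences.
        d = np.random.uniform(-w_max, w_max, n_seg)
        for _ in range(iters):
            rho = robustness(simulate(d))
            if rho < 0:                           # specification falsified
                break
            grad = np.zeros_like(d)
            for i in range(n_seg):                # one extra simulation per coord
                dp = d.copy(); dp[i] += eps
                grad[i] = (robustness(simulate(dp)) - rho) / eps
            d = np.clip(d - lr * grad, -w_max, w_max)
        return d, robustness(simulate(d))

    print(adversarial_descent())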

Citations

Synthesis-guided Adversarial Scenario Generation for Gray-box Feedback Control Systems with Sensing Imperfections

TLDR
An algorithm is developed that searches for "adversarial scenarios", i.e., strategies for an adversary representing noise and disturbances, that lead to safety violations in closed-loop systems with memoryless controllers.

Statistical verification of learning-based cyber-physical systems

TLDR
This work applies Statistical Model Checking (SMC) to complex NN-controlled CPS, using Clopper-Pearson confidence intervals to verify, from simulation samples, specifications captured by Signal Temporal Logic formulas.
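
As a concrete illustration (not the paper's tool), a Clopper-Pearson interval on the probability that an STL specification holds can be computed from the Bernoulli satisfaction outcomes of independent simulations; the sample counts below are made up.

    from scipy.stats import beta

    def clopper_pearson(k, n, alpha=0.05):
        # Exact (Clopper-Pearson) confidence interval for a Bernoulli
        # success probability, given k successes in n independent trials.
        lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
        hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
        return lo, hi

    # e.g. 978 of 1000 random rollouts satisfied the STL formula:
    print(clopper_pearson(978, 1000))  # 95% interval for P(spec holds)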

Training Neural Network Controllers Using Control Barrier Functions in the Presence of Disturbances

TLDR
This work uses imitation learning to train neural-network feedback controllers that satisfy Control Barrier Function (CBF) constraints, and develops a new class of High Order CBFs for systems under external disturbances.
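
A minimal sketch of the idea, assuming a first-order CBF and a trivial system (the paper itself develops high-order CBFs under disturbances, which this does not capture): an imitation loss is augmented with a penalty on states where the learned control violates the CBF condition dh/dx · f(x, u) + α·h(x) ≥ 0. All functions below are hypothetical.

    import numpy as np

    # Hypothetical setup: scalar state with dynamics x_dot = u and
    # safe set {x : h(x) = 1 - x**2 >= 0}.
    f = lambda x, u: u
    h = lambda x: 1.0 - x**2
    grad_h = lambda x: -2.0 * x

    def cbf_penalty(x, u, alpha=1.0):
        # Penalize violations of the CBF condition
        # dh/dx * f(x, u) + alpha * h(x) >= 0 over a batch of states.
        residual = grad_h(x) * f(x, u) + alpha * h(x)
        return np.mean(np.maximum(0.0, -residual) ** 2)

    def imitation_loss(u_nn, u_expert, x, lam=10.0):
        # Behaviour cloning plus the CBF-violation penalty (weight lam).
        return np.mean((u_nn - u_expert) ** 2) + lam * cbf_penalty(x, u_nn)

    x = np.linspace(-1.2, 1.2, 25)  # batch of states near the safe-set boundary
    print(imitation_loss(u_nn=0.5 * x, u_expert=-x, x=x))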

When Cyber-Physical Systems Meet AI: A Benchmark, an Evaluation, and a Way Forward

TLDR
This work presents a public benchmark of industry-level CPS in seven domains, builds AI controllers for them via state-of-the-art deep reinforcement learning (DRL) methods, and concludes that a hybrid system that strategically combines and switches between AI controllers and traditional controllers achieves better performance across domains.

A Survey of Algorithms for Black-Box Safety Validation

TLDR
This work provides a survey of state-of-the-art safety validation techniques for CPS with a focus on applied algorithms and their modifications for the safety validation problem, and discusses algorithms in the domains of optimization, path planning, reinforcement learning, and importance sampling.

Testing Deep Neural Networks

TLDR
This paper proposes a family of four novel test criteria tailored to structural features of DNNs and their semantics, and validates them by demonstrating that test inputs generated under the proposed coverage criteria capture undesired behaviours in a DNN.
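
The paper's four criteria are MC/DC-inspired and more involved than what fits here; as a stand-in, the sketch below computes plain neuron coverage, the simplest structural coverage metric for a ReLU network, just to make "structural test coverage" concrete (the network and test suite are random placeholders).

    import numpy as np

    def neuron_coverage(weights, biases, inputs, threshold=0.0):
        # Fraction of hidden ReLU neurons driven above `threshold`
        # by at least one input in the test suite.
        covered = []
        a = inputs
        for W, b in zip(weights, biases):
            a = np.maximum(0.0, a @ W + b)          # ReLU layer
            covered.append((a > threshold).any(axis=0))
        return np.concatenate(covered).mean()

    rng = np.random.default_rng(0)
    Ws = [rng.normal(size=(4, 8)), rng.normal(size=(8, 8))]
    bs = [rng.normal(size=8), rng.normal(size=8)]
    X = rng.normal(size=(100, 4))                   # 100 test inputs
    print(f"neuron coverage: {neuron_coverage(Ws, bs, X):.0%}")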

Verisig 2.0: Verification of Neural Network Controllers Using Taylor Model Preconditioning

TLDR
This paper focuses on NNs with tanh/sigmoid activations and develops a Taylor-model-based reachability algorithm through Taylor model preconditioning and shrink wrapping that allows Verisig 2.0 to efficiently handle larger NNs than existing tools can.
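
Taylor-model arithmetic itself is too involved to sketch here, but the baseline it improves on is easy to show: naive interval bound propagation through a tanh network. tanh is monotone, so it maps each coordinate interval exactly, yet the coupling between neurons is lost, which is precisely the conservatism Taylor models avoid. The network below is a random placeholder.

    import numpy as np

    def affine_bounds(lo, hi, W, b):
        # Exact interval image of an affine layer: split W by sign.
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        return lo @ Wp + hi @ Wn + b, hi @ Wp + lo @ Wn + b

    def tanh_net_bounds(lo, hi, weights, biases):
        # Propagate an input box layer by layer; tanh is monotone, so
        # it maps each interval to [tanh(lo), tanh(hi)] elementwise.
        for W, b in zip(weights, biases):
            lo, hi = affine_bounds(lo, hi, W, b)
            lo, hi = np.tanh(lo), np.tanh(hi)
        return lo, hi

    rng = np.random.default_rng(1)
    Ws = [rng.normal(size=(2, 16)), rng.normal(size=(16, 1))]
    bs = [rng.normal(size=16), rng.normal(size=1)]
    print(tanh_net_bounds(np.array([-0.1] * 2), np.array([0.1] * 2), Ws, bs))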

Structural Test Coverage Criteria for Deep Neural Networks

TLDR
This paper proposes a family of four novel test coverage criteria that are tailored to structural features of DNNs and their semantics, and demonstrates that the criteria achieve a balance between their ability to find bugs and the computational cost of test input generation.

Worst-case Satisfaction of STL Specifications Using Feedforward Neural Network Controllers

TLDR
A reinforcement learning approach for designing feedback neural network controllers for nonlinear systems is proposed, based on a max-min formulation of the robustness of an STL formula.
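
In assumed notation (θ the controller parameters, w a disturbance drawn from a set W, ξ(x₀, θ, w) the closed-loop trajectory, ρ^φ the STL robustness of φ), a max-min objective of this kind reads roughly:

    \theta^\star = \arg\max_{\theta} \; \min_{w \in \mathcal{W}} \;
        \rho^{\varphi}\bigl(\xi(x_0, \theta, w)\bigr)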

References

Showing 1-10 of 29 references

Simulation-based Adversarial Test Generation for Autonomous Vehicles with Machine Learning Components

TLDR
This work presents a testing framework compatible with the test-case generation and automatic falsification methods used to evaluate cyber-physical systems, and shows how it can increase the reliability of autonomous driving systems.

Compositional Falsification of Cyber-Physical Systems with Machine Learning Components

TLDR
A compositional falsification framework in which a temporal logic falsifier and a machine learning analyzer cooperate to find falsifying executions of the model under consideration, addressing the problem of falsifying signal temporal logic specifications for CPS with ML components.

Safety Verification of Deep Neural Networks

TLDR
A novel automated verification framework for feed-forward multi-layer neural networks based on Satisfiability Modulo Theory (SMT) is developed, which defines safety for an individual decision in terms of invariance of the classification within a small neighbourhood of the original image.

Towards Evaluating the Robustness of Neural Networks

TLDR
It is demonstrated that defensive distillation does not significantly increase the robustness of neural networks, and three new attack algorithms are introduced that succeed on both distilled and undistilled neural networks with 100% probability.
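
For reference, the best known of these attacks minimizes a norm-penalized margin surrogate; in its usual L2 targeted form (Z the logits, t the target class, κ a confidence margin, c a trade-off constant, notation as commonly presented):

    \min_{\delta} \; \|\delta\|_2^2 + c \cdot
        \max\Bigl( \max_{i \neq t} Z(x+\delta)_i - Z(x+\delta)_t, \; -\kappa \Bigr)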

Reachability Analysis and Safety Verification for Neural Network Control Systems

TLDR
Methods are developed for estimating the reachable set and verifying safety properties of dynamical systems under the control of neural-network-based controllers that may be implemented in embedded software.

Output Range Analysis for Deep Neural Networks

TLDR
This paper presents a range estimation algorithm that combines local search with linear programming to efficiently find the maximum and minimum values taken by the outputs of a NN over a given input set, and demonstrates its effectiveness for verifying NNs used in automated control as well as in classification.

Hybrid approximate gradient and stochastic descent for falsification of nonlinear systems

TLDR
This paper provides effective and practical local and global optimization strategies to falsify a smooth nonlinear system of arbitrary complexity.
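
A minimal sketch of the hybrid idea, with scipy's Nelder-Mead standing in for the local descent and uniform random restarts for the global search (the toy robustness landscape below is invented):

    import numpy as np
    from scipy.optimize import minimize

    def falsify(robustness, lo, hi, n_restarts=20, seed=0):
        # Multi-start local descent on a robustness surface; stop as
        # soon as robustness goes negative (specification falsified).
        rng = np.random.default_rng(seed)
        best_x, best_rho = None, np.inf
        for _ in range(n_restarts):
            res = minimize(robustness, rng.uniform(lo, hi),
                           method="Nelder-Mead")
            if res.fun < best_rho:
                best_x, best_rho = res.x, res.fun
            if best_rho < 0:
                break
        return best_x, best_rho

    # toy surface: a narrow falsifying basin around (0.8, -0.3)
    rho = lambda p: 0.05 - np.exp(-8 * ((p[0] - 0.8)**2 + (p[1] + 0.3)**2))
    print(falsify(rho, np.array([-1.0, -1.0]), np.array([1.0, 1.0])))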