# Incorrect by Construction: Fine Tuning Neural Networks for Guaranteed Performance on Finite Sets of Examples

@article{Papusha2020IncorrectBC,
  title   = {Incorrect by Construction: Fine Tuning Neural Networks for Guaranteed Performance on Finite Sets of Examples},
  author  = {Ivan Papusha and Rosa Wu and Joshua Brul{\'e} and Yanni Kouskoulas and Daniel Genin and Aurora C. Schmidt},
  journal = {ArXiv},
  year    = {2020},
  volume  = {abs/2008.01204}
}

There is great interest in using formal methods to guarantee the reliability of deep neural networks. However, these techniques may also be used to implant carefully selected input-output pairs. We present initial results on a novel technique for using SMT solvers to fine tune the weights of a ReLU neural network to guarantee outcomes on a finite set of particular examples. This procedure can be used to ensure performance on key examples, but it could also be used to insert difficult-to-find…
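
As a minimal sketch of the idea (not the paper's SMT-based procedure, and with all weights and example values invented for illustration): if the earlier layers are frozen, each example's ReLU activation pattern is fixed, so requiring exact outputs on a finite set of examples reduces to linear constraints on the final-layer weights — exactly the kind of constraint system a solver over linear real arithmetic can discharge.

```python
# Sketch: guarantee exact outputs on a finite example set by re-solving
# only the final-layer weights of a tiny one-hidden-layer ReLU network.
# Assumption: hidden-layer weights are frozen, so each example's ReLU
# activation pattern is fixed and the constraints become linear.

def relu(v):
    return [max(0.0, x) for x in v]

def hidden(x, W, b):
    """Frozen hidden layer: h = ReLU(W*x + b) for a scalar input x."""
    return relu([w * x + bi for w, bi in zip(W, b)])

# Frozen hidden layer with two units (weights chosen arbitrarily).
W = [1.0, -1.0]
b = [0.0, 2.0]

# Finite set of (input, required output) pairs to guarantee.
examples = [(1.0, 5.0), (3.0, -2.0)]

# The hidden activations give a 2x2 linear system A c = y in the
# unknown output weights c = (c1, c2); solve it by Cramer's rule.
(h11, h12), (h21, h22) = (hidden(x, W, b) for x, _ in examples)
y1, y2 = (y for _, y in examples)

det = h11 * h22 - h12 * h21
assert det != 0, "activation patterns must be linearly independent"
c1 = (y1 * h22 - y2 * h12) / det
c2 = (h11 * y2 - h21 * y1) / det

def net(x):
    h = hidden(x, W, b)
    return c1 * h[0] + c2 * h[1]

for x, y in examples:
    print(f"f({x}) = {net(x)}  (required {y})")
```

The paper's setting is harder — an SMT solver must also reason about which ReLU phase each node takes — but the frozen-phase case above shows why guaranteed outcomes on a finite example set are attainable at all.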

## 4 Citations

### Automated Repair of Neural Networks

- Computer Science
- ArXiv
- 2022

This work provides an algorithm to automatically repair NNs given safety properties, and suggests a few heuristics to improve its computational performance.

### Safety Analysis of Deep Neural Networks

- Computer Science
- IJCAI
- 2021

Some of the recent efforts in verification and repair of DNN models in safety-critical domains, including reinforcement learning, are presented.

### Verification and Repair of Neural Networks

- Computer Science
- AAAI
- 2021

Surveys some of the recent efforts in verifying neural networks, popular machine learning models that have found successful application in many domains across computer science.

### Future Defining Innovations: Trustworthy Autonomous Systems

- Computer Science
- 2021

Intelligent systems are already having a remarkable impact on society. Future advancements could have an even greater impact by empowering people through human–machine teaming, addressing challenges…

## References

Showing 1–10 of 23 references.

### A Unified View of Piecewise Linear Neural Network Verification

- Computer Science
- NeurIPS
- 2018

A unified framework that encompasses previous methods is presented, and new methods that combine the strengths of multiple existing approaches are identified, achieving a speedup of two orders of magnitude over the previous state of the art.

### Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks

- Computer Science
- CAV
- 2017

Results show that the novel, scalable, and efficient technique presented can successfully prove properties of networks that are an order of magnitude larger than the largest networks verified using existing methods.

### Evaluating Robustness of Neural Networks with Mixed Integer Programming

- Computer Science
- ICLR
- 2019

Formulates verification of piecewise-linear neural networks as a mixed integer program that certifies more samples than the state of the art and finds more adversarial examples than a strong first-order attack on every network.
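
The mixed-integer formulation rests on a standard big-M encoding of each ReLU (a common construction, not necessarily the exact one in the paper): given known pre-activation bounds $l \le x \le u$ with $l < 0 < u$, the relation $y = \max(0, x)$ is captured exactly by linear constraints plus one binary phase variable $\delta$:

```latex
% delta = 1 forces the active phase (y = x, x >= 0);
% delta = 0 forces the inactive phase (y = 0, x <= 0).
y \ge x, \qquad y \ge 0, \qquad
y \le x - l\,(1 - \delta), \qquad
y \le u\,\delta, \qquad \delta \in \{0, 1\}
```

Tighter bounds $l, u$ shrink the big-M terms and make the linear relaxation stronger, which is what lets the MIP certify more samples in practice.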

### Explaining and Harnessing Adversarial Examples

- Computer Science
- ICLR
- 2015

It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature, supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets.

### Formal Verification of Piece-Wise Linear Feed-Forward Neural Networks

- Computer Science
- ATVA
- 2017

An approach for the verification of feed-forward neural networks in which all nodes have a piece-wise linear activation function. The method infers additional node phases for the non-linear nodes from partial node phase assignments, similar to unit propagation in classical SAT solving.

### Safety Verification of Deep Neural Networks

- Computer Science
- CAV
- 2017

A novel automated verification framework for feed-forward multi-layer neural networks based on Satisfiability Modulo Theory (SMT) is developed, which defines safety for an individual decision in terms of invariance of the classification within a small neighbourhood of the original image.

### Provably Minimally-Distorted Adversarial Examples

- Computer Science
- 2017

It is demonstrated that one of the recent ICLR defense proposals, adversarial retraining, provably succeeds at increasing the distortion required to construct adversarial examples by a factor of 4.2.

### Sherlock - A tool for verification of neural network feedback systems: demo abstract

- Computer Science
- HSCC
- 2019

This work presents an approach for the synthesis and verification of neural network controllers for closed-loop dynamical systems modelled as ordinary differential equations, and incorporates counterexamples (bad traces) into the synthesis phase of the controller.

### Affine Multiplexing Networks: System Analysis, Learning, and Computation

- Computer Science
- 2018

We introduce a novel architecture and computational framework for formal, automated analysis of systems with a broad set of nonlinearities in the feedback loop, such as neural networks, vision…

### BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain

- Computer Science
- ArXiv
- 2017

It is shown that outsourced training introduces new security risks: an adversary can create a maliciously trained network (a backdoored neural network, or a BadNet) that has state-of-the-art performance on the user's training and validation samples, but behaves badly on specific attacker-chosen inputs.