Corpus ID: 244709418

Simple Post-Training Robustness Using Test Time Augmentations and Random Forest

@inproceedings{Cohen2021SimplePR,
  title={Simple Post-Training Robustness Using Test Time Augmentations and Random Forest},
  author={Gilad Cohen and Raja Giryes},
  year={2021}
}
Although Deep Neural Networks (DNNs) achieve excellent performance on many real-world tasks, they are highly vulnerable to adversarial attacks. A leading defense against such attacks is adversarial training, a technique in which a DNN is trained to be robust to adversarial attacks by introducing adversarial noise to its input. This procedure is effective but must be done during the training phase. In this work, we propose Augmented Random Forest (ARF), a simple and easy-to-use strategy for… 
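Although the abstract is truncated, the title implies the core recipe: pass several test-time augmentations of each input through the frozen DNN and let a random forest classify the stacked outputs. The following is a minimal sketch of that recipe only; `model`, the augmentation list, and all hyperparameters are illustrative assumptions rather than the authors' exact pipeline.

```python
# Minimal sketch (assumed recipe): stack the frozen DNN's softmax outputs over
# several test-time augmentations and classify them with a random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def augmented_features(model, images, augmentations):
    """Concatenate the frozen model's softmax outputs over several augmentations."""
    feats = []
    for aug in augmentations:              # e.g. flips, shifts, small rotations
        logits = model(aug(images))        # assumed to return an (N, C) numpy array
        logits = logits - logits.max(axis=1, keepdims=True)   # stable softmax
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        feats.append(probs)
    return np.concatenate(feats, axis=1)   # (N, num_augs * num_classes)

# Fit the forest once, post-training, on clean data; reuse it at test time.
# rf = RandomForestClassifier(n_estimators=300, n_jobs=-1)
# rf.fit(augmented_features(model, X_train, augs), y_train)
# y_pred = rf.predict(augmented_features(model, X_test, augs))
```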

References

SHOWING 1-10 OF 73 REFERENCES

Mitigating adversarial effects through randomization

This paper proposes to utilize randomization at inference time to mitigate adversarial effects, and uses two randomization operations: random resizing, which resizes the input images to a random size, and random padding, which pads zeros around the input image in a random manner.
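A minimal sketch of this inference-time randomization, assuming PyTorch tensors in NCHW layout; the specific sizes (299 up to 331) are illustrative rather than the paper's exact settings.

```python
# Sketch: random resize followed by random zero padding to a fixed output size.
import torch
import torch.nn.functional as F

def randomize_input(x, out_size=331, min_size=299):
    """x: (N, C, H, W). Resize to a random side length, then zero-pad at a
    random offset so every output has the same spatial size."""
    new_size = torch.randint(min_size, out_size + 1, (1,)).item()
    x = F.interpolate(x, size=(new_size, new_size), mode="bilinear",
                      align_corners=False)
    pad_total = out_size - new_size
    left = torch.randint(0, pad_total + 1, (1,)).item()
    top = torch.randint(0, pad_total + 1, (1,)).item()
    return F.pad(x, (left, pad_total - left, top, pad_total - top), value=0.0)
```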

Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models

The proposed Defense-GAN, a new framework leveraging the expressive capability of generative models to defend deep neural networks against adversarial perturbations, is empirically shown to be consistently effective against different attack methods and improves on existing defense strategies.
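Roughly, Defense-GAN purifies an input by searching the latent space of a pre-trained generator for the closest image the generator can produce, then classifying that reconstruction. The sketch below assumes a PyTorch generator G, a latent dimension, and Adam settings chosen purely for illustration.

```python
# Sketch: project the input onto the range of a pre-trained generator G by
# optimizing the latent code z to minimize the reconstruction error.
import torch

def defense_gan_project(G, x, latent_dim=128, restarts=10, steps=200, lr=0.05):
    best_rec, best_err = None, float("inf")
    for _ in range(restarts):                       # several random restarts
        z = torch.randn(x.size(0), latent_dim, device=x.device, requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):                      # gradient descent on z
            rec = G(z)
            err = ((rec - x) ** 2).flatten(1).sum(dim=1).mean()
            opt.zero_grad()
            err.backward()
            opt.step()
        if err.item() < best_err:
            best_err, best_rec = err.item(), G(z).detach()
    return best_rec   # feed this "purified" image to the classifier
```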

Robust Pre-Training by Adversarial Contrastive Learning

This work improves robustness-aware self-supervised pre-training by learning representations that are consistent under both data augmentations and adversarial perturbations, and shows that ACL pre-training can improve semi-supervised adversarial training, even when only a few labeled examples are available.
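A heavily reduced sketch of the underlying idea: encourage the encoder to produce embeddings that agree across a benign augmentation and an adversarially perturbed view, using an NT-Xent-style contrastive loss. The encoder, augmentations, single-step FGSM perturbation, and epsilon below are simplifying assumptions, not the paper's full training procedure.

```python
# Sketch: contrastive (InfoNCE) agreement between an augmented view and an
# adversarial view generated by ascending the same contrastive loss.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """Symmetric InfoNCE between two embedding batches (positives are i <-> i+N)."""
    n = z1.size(0)
    z = torch.cat([F.normalize(z1, dim=1), F.normalize(z2, dim=1)], dim=0)
    sim = (z @ z.t()) / temperature                      # (2N, 2N) similarities
    mask = torch.eye(2 * n, dtype=torch.bool, device=sim.device)
    sim = sim.masked_fill(mask, float("-inf"))           # ignore self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(sim.device)
    return F.cross_entropy(sim, targets)

def adversarial_view(encoder, x_aug, eps=8 / 255):
    """One FGSM step that increases the contrastive loss (simplified from PGD)."""
    x_adv = x_aug.clone().requires_grad_(True)
    loss = nt_xent(encoder(x_aug), encoder(x_adv))
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x_aug + eps * grad.sign()).clamp(0, 1).detach()

# Pre-training step (sketch):
# loss = nt_xent(encoder(aug1(x)), encoder(adversarial_view(encoder, aug2(x))))
```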

Feature Denoising for Improving Adversarial Robustness

It is suggested that adversarial perturbations on images lead to noise in the features constructed by deep networks, and new network architectures are developed that increase adversarial robustness by performing feature denoising.
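One way to realize such a denoising operation is a non-local block that replaces each spatial feature with an attention-weighted mean over all positions, wrapped in a 1x1 convolution and a residual connection. The sketch below is an illustration under those assumptions, not the paper's exact block.

```python
# Sketch: non-local (dot-product attention) feature denoising with a residual.
import torch
import torch.nn as nn

class NonLocalDenoise(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        n, c, h, w = x.shape
        feats = x.view(n, c, h * w)                                   # (N, C, HW)
        attn = torch.softmax(feats.transpose(1, 2) @ feats, dim=-1)   # (N, HW, HW)
        denoised = (feats @ attn.transpose(1, 2)).view(n, c, h, w)
        return x + self.proj(denoised)                                # residual
```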

Improving DNN Robustness to Adversarial Attacks using Jacobian Regularization

This work suggests a theoretically inspired novel approach to improving a network's robustness using the Frobenius norm of its Jacobian, applied as post-processing after regular training has finished, and demonstrates empirically that it leads to enhanced robustness with a minimal change in the original network's accuracy.
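A small sketch of the kind of penalty involved: an unbiased random-projection estimate of the squared Frobenius norm of the input-output Jacobian, which can be added to a short post-training fine-tuning loss. The model, number of projections, and weighting are placeholders.

```python
# Sketch: estimate E||J||_F^2 with random unit projections (one extra backward
# pass per projection) and use it as a fine-tuning regularizer.
import torch

def jacobian_penalty(model, x, num_proj=1):
    x = x.clone().requires_grad_(True)
    out = model(x)                                   # (N, num_classes)
    penalty = 0.0
    for _ in range(num_proj):
        v = torch.randn_like(out)
        v = v / v.norm(dim=1, keepdim=True)          # random unit direction
        jv = torch.autograd.grad((out * v).sum(), x, create_graph=True)[0]
        penalty = penalty + out.size(1) * jv.flatten(1).pow(2).sum(1).mean()
    return penalty / num_proj

# Fine-tuning loss (sketch): total = task_loss + lambda_jr * jacobian_penalty(model, x)
```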

Fast is better than free: Revisiting adversarial training

This work makes the surprising discovery that it is possible to train empirically robust models using a much weaker and cheaper adversary, an approach previously believed to be ineffective, rendering the method no more costly than standard training in practice.
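The cheap adversary referred to here is essentially single-step FGSM started from a random point inside the epsilon-ball. A minimal PyTorch training step along those lines, with epsilon, step size, and optimizer left as placeholders:

```python
# Sketch: FGSM adversarial training with random initialization ("FGSM-RS").
import torch
import torch.nn.functional as F

def fgsm_rs_step(model, optimizer, x, y, eps=8 / 255, alpha=10 / 255):
    # Random start inside the L-infinity ball, then one signed-gradient step.
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
    grad = torch.autograd.grad(loss, delta)[0]
    delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()

    # Standard training step on the adversarially perturbed batch.
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```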

Stochastic Activation Pruning for Robust Adversarial Defense

Stochastic Activation Pruning (SAP) is proposed, a mixed strategy for adversarial defense that prunes a random subset of activations (preferentially pruning those with smaller magnitude) and scales up the survivors to compensate.
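A sketch of that pruning rule, assuming a flattened (N, D) activation tensor; the keep fraction is a placeholder, and each surviving activation is divided by its probability of being sampled at least once so the layer stays unbiased in expectation.

```python
# Sketch: sample activations to keep with probability proportional to their
# magnitude, then rescale the survivors.
import torch

def sap(activations, keep_frac=0.5):
    """activations: (N, D) post-nonlinearity values from one layer."""
    mag = activations.abs() + 1e-12
    probs = mag / mag.sum(dim=1, keepdim=True)
    num_samples = max(1, int(keep_frac * activations.size(1)))
    # Sample indices with replacement, proportional to magnitude.
    idx = torch.multinomial(probs, num_samples, replacement=True)
    keep = torch.zeros_like(activations, dtype=torch.bool).scatter_(1, idx, True)
    # Rescale each survivor by 1 / P(it was sampled at least once).
    survive_prob = 1.0 - (1.0 - probs).pow(num_samples)
    return torch.where(keep, activations / survive_prob,
                       torch.zeros_like(activations))
```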

Towards Evaluating the Robustness of Neural Networks

It is demonstrated that defensive distillation does not significantly increase the robustness of neural networks, and three new attack algorithms are introduced that succeed on both distilled and undistilled neural networks with 100% probability.
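These stronger attacks optimize a margin-style objective: keep the perturbation small while pushing the true-class logit below the best competing logit. A simplified untargeted L2 variant is sketched below, with the trade-off constant, step count, and box handling reduced for brevity relative to the full formulation.

```python
# Sketch: simplified untargeted Carlini-Wagner-style L2 attack.
import torch

def cw_l2_attack(model, x, y, c=1.0, steps=100, lr=0.01, kappa=0.0):
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = model((x + delta).clamp(0, 1))
        true_logit = logits.gather(1, y.unsqueeze(1)).squeeze(1)
        # Best logit among the non-true classes.
        other_logit = logits.scatter(1, y.unsqueeze(1), float("-inf")).max(dim=1).values
        margin = torch.clamp(true_logit - other_logit + kappa, min=0.0)
        loss = (delta.flatten(1).pow(2).sum(1) + c * margin).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + delta).clamp(0, 1).detach()
```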

Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients

It is demonstrated that regularizing input gradients makes them more naturally interpretable as rationales for model predictions, and that gradient-regularized models are also robust to transferred adversarial examples generated to fool all of the other models.
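Input-gradient regularization amounts to adding the squared norm of the loss gradient with respect to the input ("double backpropagation") to the training objective. A minimal sketch, with the weight lambda as a placeholder:

```python
# Sketch: training loss with an input-gradient (double backprop) penalty.
import torch
import torch.nn.functional as F

def grad_regularized_loss(model, x, y, lam=0.1):
    x = x.clone().requires_grad_(True)
    task_loss = F.cross_entropy(model(x), y)
    # create_graph=True so the penalty itself can be backpropagated through.
    input_grad = torch.autograd.grad(task_loss, x, create_graph=True)[0]
    penalty = input_grad.flatten(1).pow(2).sum(dim=1).mean()
    return task_loss + lam * penalty
```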

Detecting Adversarial Samples Using Influence Functions and Nearest Neighbors

This work trains an adversarial detector using the k-NN ranks and distances and shows that it successfully distinguishes adversarial examples, getting state-of-the-art results on six attack methods with three datasets.
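In spirit, such a detector computes k-NN statistics of a sample's intermediate DNN activations against the training set and feeds them to a lightweight classifier that separates clean from adversarial inputs. The sketch below uses only the k nearest distances and a logistic-regression detector, which simplifies the paper's rank-and-distance features; all activation arrays are placeholders.

```python
# Sketch: k-NN distance features over DNN activations, plus a simple detector.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.linear_model import LogisticRegression

def knn_features(train_acts, query_acts, k=50):
    nn = NearestNeighbors(n_neighbors=k).fit(train_acts)
    dists, _ = nn.kneighbors(query_acts)          # (N, k) nearest distances
    return dists

# detector = LogisticRegression(max_iter=1000).fit(
#     np.vstack([knn_features(train_acts, clean_acts),
#                knn_features(train_acts, adv_acts)]),
#     np.concatenate([np.zeros(len(clean_acts)), np.ones(len(adv_acts))]))
```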
...