Corpus ID: 236956822

ROPUST: Improving Robustness through Fine-tuning with Photonic Processors and Synthetic Gradients

@article{Cappelli2021ROPUSTIR,
  title={ROPUST: Improving Robustness through Fine-tuning with Photonic Processors and Synthetic Gradients},
  author={Alessandro Cappelli and Julien Launay and Laurent Meunier and Ruben Ohana and Iacopo Poli},
  journal={ArXiv},
  year={2021},
  volume={abs/2108.04217}
}
Robustness to adversarial attacks is typically obtained through expensive adversarial training with Projected Gradient Descent. Here we introduce ROPUST, a remarkably simple and efficient method to leverage robust pre-trained models and further increase their robustness, at no cost in natural accuracy. Our technique relies on the use of an Optical Processing Unit (OPU), a photonic co-processor, and a fine-tuning step performed with Direct Feedback Alignment, a synthetic gradient training scheme… 
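To make the training scheme concrete, below is a minimal NumPy sketch of one Direct Feedback Alignment (DFA) update. The fixed random matrix B stands in for the random projection that the Optical Processing Unit performs in ROPUST; the two-layer model, shapes, and learning rate are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a Direct Feedback Alignment (DFA) update, the synthetic-gradient
# scheme ROPUST fine-tunes with. The fixed random matrix B plays the role of the
# OPU's random projection; all names and shapes here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, d_out = 32, 64, 10

W1 = rng.normal(0.0, 0.1, (d_hidden, d_in))   # trainable hidden layer
W2 = rng.normal(0.0, 0.1, (d_out, d_hidden))  # trainable output layer
B = rng.normal(0.0, 0.1, (d_hidden, d_out))   # fixed random feedback matrix (never trained)


def dfa_gradients(x, y_onehot):
    """Forward pass plus DFA 'backward' pass: the output error reaches the
    hidden layer through the fixed random matrix B, not through W2.T."""
    h_pre = W1 @ x
    h = np.maximum(h_pre, 0.0)                # ReLU
    logits = W2 @ h
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                      # softmax
    e = probs - y_onehot                      # cross-entropy error at the output
    delta_h = (B @ e) * (h_pre > 0)           # synthetic gradient for the hidden layer
    return np.outer(e, h), np.outer(delta_h, x)


# One illustrative update step on a random input labeled as class 3.
x = rng.normal(size=d_in)
y = np.eye(d_out)[3]
gW2, gW1 = dfa_gradients(x, y)
lr = 0.01
W2 -= lr * gW2
W1 -= lr * gW1
```

Because the feedback path uses a fixed random projection rather than the transpose of the forward weights, the backward pass is decoupled from the forward architecture, which is what makes an analog co-processor usable for this step.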

Citations

Benchmarking the Accuracy and Robustness of Feedback Alignment Algorithms

TLDR
BioTorch is presented, a software framework to create, train, and benchmark biologically motivated neural networks, and the performance of several feedback alignment methods proposed in the literature is investigated, revealing the importance of forward and backward weight initialization and of the choice of optimizer.

References

SHOWING 1-10 OF 58 REFERENCES

Adversarial Robustness by Design Through Analog Computing And Synthetic Gradients

TLDR
A new defense mechanism against adversarial attacks, inspired by an optical co-processor, is proposed, providing robustness without compromising natural accuracy in both white-box and black-box settings; it is also shown that the same approach is suboptimal when employed to generate adversarial examples.

Fast is better than free: Revisiting adversarial training

TLDR
The surprising discovery is made that it is possible to train empirically robust models using a much weaker and cheaper adversary, an approach previously believed to be ineffective, rendering the method no more costly than standard training in practice.
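For illustration, here is a hedged PyTorch sketch of the kind of single-step adversarial training the summary alludes to: one FGSM-style perturbation from a random start replaces the multi-step PGD inner loop. The model, epsilon, and step size are assumed placeholders, not the paper's exact recipe.

```python
# Sketch of "fast" adversarial training: a single signed-gradient (FGSM-style) step
# from a random start inside the epsilon-ball, followed by a standard parameter update
# on the perturbed batch. Hyperparameters are illustrative.
import torch
import torch.nn.functional as F


def fast_adv_training_step(model, x, y, optimizer, eps=8 / 255, alpha=10 / 255):
    # Random start inside the epsilon-ball, then one signed-gradient step.
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    loss = F.cross_entropy(model(x + delta), y)
    grad = torch.autograd.grad(loss, delta)[0]
    delta = torch.clamp(delta + alpha * grad.sign(), -eps, eps).detach()
    x_adv = torch.clamp(x + delta, 0.0, 1.0)

    # Standard training update on the adversarial batch.
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x_adv), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```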

A Simple Fine-tuning Is All You Need: Towards Robust Deep Learning Via Adversarial Fine-tuning

TLDR
The proposed adversarial fine-tuning approach makes it possible to improve the robustness of any pre-trained deep neural network without training the model from scratch, which to the best of the authors' knowledge has not previously been demonstrated in the research literature.

Scaling provable adversarial defenses

TLDR
This paper presents a technique for extending these training procedures to much more general networks, with skip connections and general nonlinearities, and shows how robust error can be further reduced through cascade models.

Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks

TLDR
Two extensions of the PGD attack are proposed, overcoming failures due to suboptimal step sizes and problems with the objective function, and combined with two complementary existing attacks to form a parameter-free, computationally affordable, and user-independent ensemble of attacks for testing adversarial robustness.
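For context, the sketch below shows the plain $\ell_\infty$ PGD attack that these extensions build on (it does not implement the proposed step-size-free variants); epsilon, step size, and step count are illustrative choices.

```python
# Minimal ell-infinity PGD attack: repeated signed-gradient ascent steps on the loss,
# each followed by projection back onto the epsilon-ball and the valid pixel range.
import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascent step on the sign of the gradient, then projection.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
        x_adv = torch.clamp(x_adv, 0.0, 1.0)
    return x_adv.detach()
```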

Adversarial Training and Robustness for Multiple Perturbations

TLDR
It is proved that, in a natural and simple statistical setting, a trade-off must exist in robustness to different types of $\ell_p$-bounded and spatial perturbations, calling into question the viability and computational scalability of extending adversarial robustness, and adversarial training, to multiple perturbation types.

RobustBench: a standardized adversarial robustness benchmark

TLDR
This work evaluates the robustness of models for its benchmark with AutoAttack, an ensemble of white-box and black-box attacks recently shown in a large-scale study to improve almost all robustness evaluations compared to the original publications.

Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack

TLDR
A new white-box adversarial attack for neural network-based classifiers is proposed, aiming to find the minimal perturbation necessary to change the class of a given input; it performs better than or comparably to state-of-the-art attacks that are partially specialized to one $\ell_p$-norm, and is robust to the phenomenon of gradient masking.

Yet another but more efficient black-box adversarial attack: tiling and evolution strategies

TLDR
A new black-box attack achieving state-of-the-art performance is proposed, based on a new objective function, borrowing ideas from $\ell_\infty$ white-box attacks, and specifically designed to fit the requirements of derivative-free optimization.

ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models

TLDR
An effective black-box attack that also only has access to the input (images) and the output (confidence scores) of a targeted DNN is proposed, sparing the need for training substitute models and avoiding the loss in attack transferability.
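As a rough illustration of the zeroth-order idea, the sketch below estimates an attack gradient from confidence scores alone, using symmetric finite differences on a subset of coordinates; the loss, coordinate sampling, and step size are simplified assumptions rather than the paper's exact coordinate-descent attack.

```python
# Zeroth-order gradient estimation for a black-box attack: the gradient of an attack
# loss is approximated from score queries only, via symmetric finite differences on a
# few randomly chosen input coordinates. This is a simplified illustration, not ZOO itself.
import numpy as np


def estimate_gradient(score_fn, x, y, n_coords=128, h=1e-4):
    """score_fn(x) returns the model's confidence scores; only queries are used."""
    rng = np.random.default_rng(0)

    def attack_loss(z):
        scores = score_fn(z)
        # Untargeted loss: push the true-class score below the best other class.
        other = np.max(np.delete(scores, y))
        return scores[y] - other

    grad = np.zeros_like(x)
    coords = rng.choice(x.size, size=min(n_coords, x.size), replace=False)
    for i in coords:
        e = np.zeros_like(x)
        e.flat[i] = h
        grad.flat[i] = (attack_loss(x + e) - attack_loss(x - e)) / (2 * h)
    return grad  # descend this estimate to shrink the true-class margin
```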
...