# ROPUST: Improving Robustness through Fine-tuning with Photonic Processors and Synthetic Gradients

Corpus ID: 236956822 · Published 6 July 2021 · Computer Science · ArXiv

@article{Cappelli2021ROPUSTIR,
  title={ROPUST: Improving Robustness through Fine-tuning with Photonic Processors and Synthetic Gradients},
  author={Alessandro Cappelli and Julien Launay and Laurent Meunier and Ruben Ohana and Iacopo Poli},
  journal={ArXiv},
  year={2021},
  volume={abs/2108.04217}
}

Robustness to adversarial attacks is typically obtained through expensive adversarial training with Projected Gradient Descent. Here we introduce ROPUST, a remarkably simple and efficient method to leverage robust pre-trained models and further increase their robustness, at no cost in natural accuracy. Our technique relies on the use of an Optical Processing Unit (OPU), a photonic co-processor, and a fine-tuning step performed with Direct Feedback Alignment, a synthetic gradient training scheme…
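The fine-tuning scheme the abstract names, Direct Feedback Alignment (DFA), replaces backpropagated error signals with projections of the output error through fixed random matrices (the operation the photonic OPU implements in hardware). Below is a minimal NumPy sketch of plain DFA on a toy two-layer network, not the paper's actual OPU-based ROPUST pipeline; all names (`W1`, `B1`, `dfa_step`) and hyperparameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-layer MLP trained with Direct Feedback Alignment (DFA):
# the output error is sent to the hidden layer through a FIXED random
# matrix B1 instead of the transposed forward weights W2.T.
n_in, n_hid, n_out = 4, 16, 3
W1 = rng.normal(0, 0.5, (n_in, n_hid))
W2 = rng.normal(0, 0.5, (n_hid, n_out))
B1 = rng.normal(0, 0.5, (n_out, n_hid))  # fixed random feedback matrix

def forward(x):
    h = np.tanh(x @ W1)
    y = h @ W2
    return h, y

def dfa_step(x, target, lr=0.02):
    global W1, W2
    h, y = forward(x)
    e = y - target                    # output error
    # DFA: replace the backprop term (e @ W2.T) with (e @ B1)
    dh = (e @ B1) * (1.0 - h ** 2)    # tanh'(z) = 1 - tanh(z)^2
    W2 -= lr * np.outer(h, e)
    W1 -= lr * np.outer(x, dh)
    return 0.5 * float(np.sum(e ** 2))

x = rng.normal(size=n_in)
t = np.array([1.0, 0.0, 0.0])
losses = [dfa_step(x, t) for _ in range(300)]
print(losses[0], losses[-1])  # loss decreases on this toy example
```

Note that only the feedback path is random; the forward weights are trained normally, which is why a fixed (and even noisy or analog) projection suffices.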
## 2 Citations

### Benchmarking the Accuracy and Robustness of Feedback Alignment Algorithms

ArXiv, 2021.
This work presents BioTorch, a software framework to create, train, and benchmark biologically motivated neural networks, and investigates the performance of several feedback alignment methods from the literature, unveiling the importance of forward and backward weight initialization and of optimizer choice.

### Scaling Laws Beyond Backpropagation

ArXiv, 2022.
This work finds that DFA fails to offer more efficient scaling than backpropagation: there is never a regime in which the degradation in loss incurred by using DFA is worth the potential reduction in compute budget.

## References

Showing 1-10 of 58 references.

ICASSP 2022 (IEEE International Conference on Acoustics, Speech and Signal Processing).
A new defense mechanism against adversarial attacks inspired by an optical co-processor is proposed, providing robustness without compromising natural accuracy in both white-box and black-box settings, while the same approach is suboptimal when employed to generate adversarial examples.

### Fast is better than free: Revisiting adversarial training

ICLR, 2020.
The authors make the surprising discovery that it is possible to train empirically robust models using a much weaker and cheaper adversary, an approach previously believed to be ineffective, rendering the method no more costly than standard training in practice.
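The "much weaker and cheaper adversary" revisited here is a single-step, sign-based perturbation in the style of FGSM. A hedged sketch on a toy linear model follows; the paper's full recipe (random initialization, step-size choices) is omitted, and `fgsm_perturb` is an illustrative name:

```python
import numpy as np

# Single-step FGSM-style perturbation: move each input coordinate by
# eps in the sign of the loss gradient with respect to the input.
def fgsm_perturb(x, grad, eps):
    return x + eps * np.sign(grad)

# Toy linear classifier: take loss = -w.x for the true class,
# so the gradient of the loss with respect to x is -w.
w = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 1.0, 1.0])
grad = -w
x_adv = fgsm_perturb(x, grad, eps=0.1)
print(x_adv)  # [0.9 1.1 0.9]
```

One such gradient step per training example is what keeps this adversary roughly as cheap as standard training.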

### A Simple Fine-tuning Is All You Need: Towards Robust Deep Learning Via Adversarial Fine-tuning

ArXiv, 2020.
The proposed adversarial fine-tuning approach makes it possible to improve the robustness of any pre-trained deep neural network without training the model from scratch, which to the best of the authors' knowledge had not previously been demonstrated in the research literature.

NeurIPS, 2018.
This paper presents a technique for extending these training procedures to much more general networks, with skip connections and general nonlinearities, and shows how to further improve robust error through cascade models.

### Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks

ICML, 2020.
Two extensions of the PGD attack that overcome failures due to suboptimal step sizes and problems with the objective function are proposed and combined with two complementary existing attacks to form a parameter-free, computationally affordable, and user-independent ensemble of attacks for testing adversarial robustness.
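The PGD attack these extensions build on alternates gradient-ascent steps on the loss with projection back onto the allowed ℓ∞ ball around the clean input. A minimal sketch, with a synthetic gradient function standing in for a model's loss gradient; `pgd_linf` and its parameters are illustrative, not the attack's reference implementation:

```python
import numpy as np

# Minimal PGD on an l_inf ball: ascend the loss by signed gradient
# steps, then project back into the box [x0 - eps, x0 + eps].
def pgd_linf(x0, grad_fn, eps, step, n_steps):
    x = x0.copy()
    for _ in range(n_steps):
        x = x + step * np.sign(grad_fn(x))   # gradient-ascent step
        x = np.clip(x, x0 - eps, x0 + eps)   # projection onto the ball
    return x

x0 = np.zeros(3)
# Stand-in gradient that is strictly positive near x0, so every
# coordinate is pushed up until it saturates at the eps boundary.
x_adv = pgd_linf(x0, lambda x: x + 1.0, eps=0.05, step=0.02, n_steps=10)
print(x_adv)  # each coordinate saturates at +eps
```

The suboptimal-step-size failure the paper targets is visible even here: a fixed `step` either crawls or overshoots, which is why their extensions adapt it automatically.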

### Adversarial Training and Robustness for Multiple Perturbations

NeurIPS, 2019.
It is proved that a trade-off in robustness to different types of $\ell_p$-bounded and spatial perturbations must exist in a natural and simple statistical setting, questioning the viability and computational scalability of extending adversarial robustness, and adversarial training, to multiple perturbation types.

### RobustBench: a standardized adversarial robustness benchmark

NeurIPS Datasets and Benchmarks, 2021.
This work evaluates the robustness of models in its benchmark with AutoAttack, an ensemble of white- and black-box attacks recently shown in a large-scale study to improve almost all robustness evaluations compared to the original publications.

ICML, 2020.
A new white-box adversarial attack for neural-network-based classifiers aims at finding the minimal perturbation necessary to change the class of a given input; it performs better than or similarly to state-of-the-art attacks that are partially specialized to one $l_p$-norm, and is robust to the phenomenon of gradient masking.

### Yet another but more efficient black-box adversarial attack: tiling and evolution strategies

ArXiv, 2019.
A new black-box attack achieving state-of-the-art performance, based on a new objective function that borrows ideas from $\ell_\infty$ white-box attacks and is particularly designed to fit derivative-free optimization requirements.

### ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models

AISec@CCS, 2017.
An effective black-box attack that only has access to the input (images) and the output (confidence scores) of a targeted DNN is proposed, sparing the need for training substitute models and avoiding the loss in attack transferability.
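The zeroth-order idea behind ZOO can be illustrated with symmetric finite differences: the attacker queries only the model's scalar output and estimates coordinate-wise derivatives from those queries. A toy sketch, assuming a cheap query function; the real attack adds coordinate sampling and other tricks to scale to images, and `zoo_grad` is an illustrative name:

```python
import numpy as np

# Zeroth-order (ZOO-style) gradient estimate: query only f(x), never
# its gradient, and approximate each partial derivative with a
# symmetric finite difference of step h.
def zoo_grad(f, x, h=1e-4):
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

f = lambda x: float(np.sum(x ** 2))  # stand-in for the model's loss
x = np.array([1.0, -2.0, 3.0])
print(zoo_grad(f, x))                # close to the true gradient 2*x
```

Each estimate here costs two queries per coordinate, which is exactly why the full method resorts to stochastic coordinate selection on high-dimensional inputs.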