@article{Cappelli2021AdversarialRB,
  title={Adversarial Robustness by Design Through Analog Computing and Synthetic Gradients},
  author={Alessandro Cappelli and Ruben Ohana and Julien Launay and Laurent Meunier and Iacopo Poli and Florent Krzakala},
  journal={ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year={2021},
  pages={3493-3497}
}
• Published 6 January 2021
• Computer Science, Mathematics
We propose a new defense mechanism against adversarial attacks inspired by an optical co-processor, providing robustness without compromising natural accuracy in both white-box and black-box settings. This hardware co-processor performs a nonlinear fixed random transformation, where the parameters are unknown and impossible to retrieve with sufficient precision for large enough dimensions. In the white-box setting, our defense works by obfuscating the parameters of the random projection…
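The abstract describes the mechanism only in prose. Below is a minimal NumPy sketch, under the common modeling assumption from the optical random features literature that the co-processor computes y = |Rx|² for a fixed complex Gaussian matrix R; the function name `make_opu_layer` and all dimensions are illustrative, not from the paper.

```python
import numpy as np

def make_opu_layer(d_in, d_out, seed=None):
    """Simulate a fixed random nonlinear projection. Optical
    co-processors are commonly modeled as y = |Rx|^2, where R is a
    fixed complex Gaussian matrix whose entries stay hidden in the
    analog hardware."""
    rng = np.random.default_rng(seed)
    R = (rng.standard_normal((d_out, d_in))
         + 1j * rng.standard_normal((d_out, d_in))) / np.sqrt(2.0 * d_in)

    def layer(x):
        # |.|^2 discards phase: the map is nonlinear and non-invertible,
        # which is what makes recovering R (or x) hard for an attacker.
        return np.abs(R @ x) ** 2

    return layer

# Illustrative usage: embed a flattened 28x28 image into 2048 random
# features consumed by a downstream, digitally trained classifier.
opu = make_opu_layer(28 * 28, 2048, seed=0)
features = opu(np.random.default_rng(1).random(28 * 28))
```

The key property this simulates is that |·|² loses phase information; in actual hardware R additionally never leaves the analog medium, which is what the parameter obfuscation described in the abstract relies on.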
## Citations

SHOWING 1-8 OF 8 CITATIONS

• Computer Science
ArXiv
• 2021
This work introduces ROPUST, a remarkably simple and efficient method to leverage robust pre-trained models and further increase their robustness, at no cost in natural accuracy, and introduces phase retrieval attacks, specifically designed to increase the threat level of attackers against the authors' own defense.
• Computer Science
IEEE Transactions on Very Large Scale Integration (VLSI) Systems
• 2022
The design of robust DNNs is explored through the amalgamation of adversarial training and the intrinsic robustness offered by NVM crossbar-based analog hardware, indicating that implementing adversarially trained networks on analog hardware requires careful calibration of hardware nonidealities.
• Computer Science
ArXiv
• 2021
BioTorch is presented, a software framework to create, train, and benchmark biologically motivated neural networks, and the performance of several feedback alignment methods proposed in the literature is investigated, thereby unveiling the importance of the forward and backward weight initialization and optimizer choice.
• Computer Science
• 2022
This paper proposes to use shallow and deep neural networks (NN) as binary encoders to perform input data binarization and shows that this method outperforms alternative unsupervised and supervised binarization techniques.
• Computer Science
Nature Communications
• 2022
This work presents physical deep learning by extending a biologically inspired training algorithm called direct feedback alignment, based on random projection with an alternative nonlinear activation, which can train a physical neural network without knowledge of the physical system or its gradient (a minimal sketch of the DFA update follows this list).
• Computer Science
ArXiv
• 2022
A method to measure layer-specific robustness is proposed, and insights are shared on how networks learn to compensate for injected noise, contributing to the understanding of robustness against noisy computations.
• Computer Science
2021 IEEE Hot Chips 33 Symposium (HCS)
• 2021
Beyond pure von Neumann processing: the scalability of AI/HPC models is limited by the von Neumann bottleneck for accessing massive amounts of memory, driving up power consumption.
• Computer Science
ArXiv
• 2022
It is found that DFA fails to offer more efficient scaling than backpropagation: there is never a regime for which the degradation in loss incurred by using DFA is worth the potential reduction in compute budget.
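Several of the entries above rely on direct feedback alignment (DFA), which trains hidden layers by projecting the global output error through fixed random matrices instead of backpropagating through the forward weights. A minimal sketch of that update rule on a toy two-layer regression problem follows; the dimensions, learning rate, and tanh activation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid, d_out, lr = 4, 16, 3, 0.1

# Two-layer regression network. B is the fixed random feedback matrix
# that replaces W2.T, the path backpropagation would use.
W1 = 0.1 * rng.standard_normal((d_hid, d_in))
W2 = 0.1 * rng.standard_normal((d_out, d_hid))
B = 0.1 * rng.standard_normal((d_hid, d_out))  # fixed, never trained

for step in range(1000):
    x = rng.standard_normal(d_in)
    target = np.tanh(x[:d_out])                # arbitrary toy target
    a1 = W1 @ x                                # forward pass
    h1 = np.tanh(a1)
    y = W2 @ h1
    e = y - target                             # output error
    # DFA step: send the global error straight to the hidden layer
    # through the fixed random matrix B instead of through W2.T.
    delta1 = (B @ e) * (1.0 - np.tanh(a1) ** 2)
    W2 -= lr * np.outer(e, h1)
    W1 -= lr * np.outer(delta1, x)
```

Because the feedback path `B` is random and never updated, the forward system never has to be differentiated, which is what makes the scheme attractive for training on analog hardware.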

## References

SHOWING 1-10 OF 30 REFERENCES

• Computer Science
NeurIPS
• 2018
This paper presents a technique for extending provably robust training procedures to much more general networks, with skip connections and general nonlinearities, and shows how to further improve robust error through cascade models.
• Computer Science
2019 IEEE Symposium on Security and Privacy (SP)
• 2019
This paper presents the first certified defense that both scales to large networks and datasets and applies broadly to arbitrary model types, based on a novel connection between robustness against adversarial examples and differential privacy, a cryptographically-inspired privacy formalism.
• Computer Science
AISec@CCS
• 2017
An effective black-box attack that also only has access to the input (images) and the output (confidence scores) of a targeted DNN is proposed, sparing the need for training substitute models and avoiding the loss in attack transferability.
• Computer Science
ArXiv
• 2019
A new black-box attack achieving state-of-the-art performance, based on a new objective function that borrows ideas from $\ell_\infty$ white-box attacks and is particularly designed to fit derivative-free optimization requirements.
• Computer Science
ICML
• 2019
This work proposes an efficient discrete surrogate to the optimization problem which does not require estimating the gradient and is consequently free of first-order update hyperparameters to tune.
• Computer Science
ICLR
• 2018
This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
• Computer Science
ICML
• 2018
A method to learn deep ReLU-based classifiers that are provably robust against norm-bounded adversarial perturbations, and it is shown that the dual problem to this linear program can be represented itself as a deep network similar to the backpropagation network, leading to very efficient optimization approaches that produce guaranteed bounds on the robust loss.
• Computer Science
ICML
• 2018
This work defines three realistic threat models that more accurately characterize many real-world classifiers: the query-limited setting, the partial-information setting, and the label-only setting and develops new attacks that fool classifiers under these more restrictive threat models.
• Computer Science
ICLR
• 2019
A framework that conceptually unifies much of the existing work on black-box attacks is introduced, and it is demonstrated that the current state-of-the-art methods are optimal in a natural sense.
• Computer Science
ECCV
• 2020
The Square Attack is a score-based black-box attack that does not rely on local gradient information and is therefore unaffected by gradient masking; it can outperform gradient-based white-box attacks on standard benchmarks, achieving a new state of the art in success rate.
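The entry above describes a score-based random-search attack. As a toy illustration of that idea only (not the authors' exact algorithm, which uses stripe-pattern initialization and a schedule for the square size), here is a hedged NumPy sketch; `score_fn`, `eps`, and the query budget are assumptions.

```python
import numpy as np

def random_square_attack(score_fn, x, eps, n_iters=1000, p=0.05, seed=0):
    """Toy score-based black-box attack in the spirit of Square Attack:
    greedily keep random square perturbations that lower the model's
    score for the true class. score_fn(x) returns that scalar score;
    no gradients are ever queried."""
    rng = np.random.default_rng(seed)
    h, w, c = x.shape
    x_adv = np.clip(x + eps * rng.choice([-1.0, 1.0], size=(1, 1, c)), 0, 1)
    best = score_fn(x_adv)
    for _ in range(n_iters):
        s = max(1, int(round(np.sqrt(p * h * w))))   # side of the square
        i = rng.integers(0, h - s + 1)
        j = rng.integers(0, w - s + 1)
        cand = x_adv.copy()
        # Overwrite one square window with a fresh +/-eps perturbation.
        cand[i:i + s, j:j + s, :] = np.clip(
            x[i:i + s, j:j + s, :]
            + eps * rng.choice([-1.0, 1.0], size=(1, 1, c)),
            0, 1)
        if (new := score_fn(cand)) < best:           # query-only feedback
            x_adv, best = cand, new
    return x_adv
```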