Adversarial Robustness by Design Through Analog Computing And Synthetic Gradients

@article{Cappelli2021AdversarialRB,
  title={Adversarial Robustness by Design Through Analog Computing And Synthetic Gradients},
  author={Alessandro Cappelli and Ruben Ohana and Julien Launay and Laurent Meunier and Iacopo Poli and Florent Krzakala},
  journal={ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year={2021},
  pages={3493-3497}
}
  • Published 6 January 2021
  • Computer Science, Mathematics
  • ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
We propose a new defense mechanism against adversarial attacks inspired by an optical co-processor, providing robustness without compromising natural accuracy in both white-box and black-box settings. This hardware co-processor performs a nonlinear fixed random transformation, where the parameters are unknown and impossible to retrieve with sufficient precision for large enough dimensions. In the white-box setting, our defense works by obfuscating the parameters of the random projection…
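
A simulated version of such a layer can help picture the mechanism: optical processing units of this kind are commonly modeled as the squared modulus of a fixed random complex projection of the input. The numpy sketch below is illustrative only (the function name, dimensions, and seed are assumptions, not taken from the paper); in the actual defense the transmission matrix lives in hardware and its parameters are never exposed to the attacker, which is the obfuscation the abstract refers to.

import numpy as np

rng = np.random.default_rng(0)

def make_opu_layer(in_dim, out_dim, rng):
    """Fixed random complex projection followed by a squared-modulus nonlinearity."""
    W = (rng.standard_normal((out_dim, in_dim))
         + 1j * rng.standard_normal((out_dim, in_dim))) / np.sqrt(2 * in_dim)
    def forward(x):
        # x: (batch, in_dim) real-valued features; output is nonnegative, (batch, out_dim)
        return np.abs(x @ W.T) ** 2
    return forward

# Usage: insert the fixed layer between a feature extractor and a trainable classifier.
opu = make_opu_layer(in_dim=512, out_dim=1024, rng=rng)
features = rng.standard_normal((8, 512))   # stand-in for CNN features
projected = opu(features)                  # shape (8, 1024)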


ROPUST: Improving Robustness through Fine-tuning with Photonic Processors and Synthetic Gradients

This work introduces ROPUST, a remarkably simple and efficient method that leverages robust pre-trained models to further increase their robustness at no cost in natural accuracy, and introduces phase retrieval attacks specifically designed to raise the threat level of attackers against the defense itself.

On Noise Stability and Robustness of Adversarially Trained Networks on NVM Crossbars

The design of robust DNNs is explored through the amalgamation of adversarial training and the intrinsic robustness offered by NVM crossbar-based analog hardware, indicating that deploying adversarially trained networks on analog hardware requires carefully accounting for hardware non-idealities.

Benchmarking the Accuracy and Robustness of Feedback Alignment Algorithms

BioTorch is presented, a software framework to create, train, and benchmark biologically motivated neural networks, and the performance of several feedback alignment methods proposed in the literature is investigated, thereby unveiling the importance of the forward and backward weight initialization and optimizer choice.

Learning Binary Data Representation for Optical Processing Units

This paper proposes to use shallow and deep neural networks (NN) as binary encoders to perform input data binarization and shows that this method outperforms alternative unsupervised and supervised binarization techniques.

Physical deep learning with biologically inspired training method: gradient-free approach for physical hardware

This work presents physical deep learning by extending a biologically inspired training algorithm called direct feedback alignment, based on random projection with an alternative nonlinear activation, which can train a physical neural network without knowledge of the physical system or its gradient.
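
For context, direct feedback alignment replaces the backpropagated error with the output error projected through a fixed random matrix, so no gradient of the forward pass (and hence no model of the physical hardware) is needed. The following numpy sketch is a minimal illustration of the rule on a toy two-layer network; sizes, learning rate, and data are arbitrary assumptions, not taken from this work.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out, lr = 20, 64, 5, 0.05

W1 = rng.standard_normal((n_hidden, n_in)) * 0.1   # trainable forward weights
W2 = rng.standard_normal((n_out, n_hidden)) * 0.1
B1 = rng.standard_normal((n_hidden, n_out))        # fixed random feedback matrix

for _ in range(200):
    x = rng.standard_normal((32, n_in))
    y = (x[:, :n_out] > 0).astype(float)           # toy targets
    h = np.tanh(x @ W1.T)                          # hidden activations
    out = h @ W2.T                                 # linear readout
    e = out - y                                    # output error
    # DFA: project the error through B1 and modulate it by the local tanh derivative,
    # instead of backpropagating it through W2.
    dh = (e @ B1.T) * (1.0 - h ** 2)
    W2 -= lr * e.T @ h / len(x)
    W1 -= lr * dh.T @ x / len(x)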

Walking Noise: Understanding Implications of Noisy Computations on Classification Tasks

A method to measure layer-specific robustness is proposed, sharing insights on how networks learn to compensate for injected noise and thus contributing to the understanding of robustness against noisy computations.

LightOn Optical Processing Unit: Scaling-up AI and HPC with a Non von Neumann co-processor

Beyond pure von Neumann processing: the scalability of AI/HPC models is limited by the von Neumann bottleneck of accessing massive amounts of memory, which drives up power consumption.

Scaling Laws Beyond Backpropagation

It is found that DFA fails to offer more efficient scaling than backpropagation: there is never a regime for which the degradation in loss incurred by using DFA is worth the potential reduction in compute budget.

References

Showing 1-10 of 30 references

Scaling provable adversarial defenses

This paper presents a technique for extending these training procedures to much more general networks, with skip connections and general nonlinearities, and shows how to further improve robust error through cascade models.

Certified Robustness to Adversarial Examples with Differential Privacy

This paper presents the first certified defense that both scales to large networks and datasets and applies broadly to arbitrary model types, based on a novel connection between robustness against adversarial examples and differential privacy, a cryptographically-inspired privacy formalism.

ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models

An effective black-box attack that also only has access to the input (images) and the output (confidence scores) of a targeted DNN is proposed, sparing the need for training substitute models and avoiding the loss in attack transferability.
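
The zeroth-order idea can be made concrete with coordinate-wise finite differences on the scores alone. The sketch below is a simplified illustration rather than the paper's implementation; loss_fn is a hypothetical black-box callable that maps an input to the attack loss computed from the model's confidence scores.

import numpy as np

def zoo_gradient_estimate(loss_fn, x, n_coords=128, h=1e-4, rng=None):
    """Estimate d loss / d x on a random subset of coordinates via symmetric differences."""
    rng = rng or np.random.default_rng()
    flat = x.reshape(-1).copy()
    grad = np.zeros_like(flat)
    idx = rng.choice(flat.size, size=min(n_coords, flat.size), replace=False)
    for i in idx:
        e = np.zeros_like(flat)
        e[i] = h
        grad[i] = (loss_fn((flat + e).reshape(x.shape))
                   - loss_fn((flat - e).reshape(x.shape))) / (2 * h)
    return grad.reshape(x.shape)

# The attacker then takes a (projected) gradient step on x using this estimate,
# spending two score queries per sampled coordinate.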

Yet another but more efficient black-box adversarial attack: tiling and evolution strategies

A new black-box attack achieving state-of-the-art performance, based on a new objective function that borrows ideas from $\ell_\infty$ white-box attacks and is particularly designed to fit derivative-free optimization requirements.

Parsimonious Black-Box Adversarial Attacks via Efficient Combinatorial Optimization

This work proposes an efficient discrete surrogate to the optimization problem which does not require estimating the gradient and consequently becomes free of the first order update hyperparameters to tune.

Towards Deep Learning Models Resistant to Adversarial Attacks

This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
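
The first-order adversary studied here is typically instantiated as projected gradient descent (PGD) inside an ℓ∞ ball. A minimal sketch, assuming a user-supplied grad_fn that returns the gradient of the loss with respect to the input (available to the defender during adversarial training, or to a white-box attacker):

import numpy as np

def pgd_linf(x, grad_fn, eps=8/255, alpha=2/255, steps=10, rng=None):
    """Iterated signed-gradient ascent, projected back into the eps-ball around x."""
    rng = rng or np.random.default_rng()
    x_adv = x + rng.uniform(-eps, eps, size=x.shape)     # random start inside the ball
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))  # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)         # project onto the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)                 # keep a valid image
    return x_adv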

Provable defenses against adversarial examples via the convex outer adversarial polytope

A method to learn deep ReLU-based classifiers that are provably robust against norm-bounded adversarial perturbations, and it is shown that the dual problem to this linear program can be represented itself as a deep network similar to the backpropagation network, leading to very efficient optimization approaches that produce guaranteed bounds on the robust loss.

Black-box Adversarial Attacks with Limited Queries and Information

This work defines three realistic threat models that more accurately characterize many real-world classifiers: the query-limited setting, the partial-information setting, and the label-only setting and develops new attacks that fool classifiers under these more restrictive threat models.

Prior Convictions: Black-Box Adversarial Attacks with Bandits and Priors

A framework that conceptually unifies much of the existing work on black-box attacks is introduced, and it is demonstrated that the current state-of-the-art methods are optimal in a natural sense.

Square Attack: a query-efficient black-box adversarial attack via random search

The Square Attack is a score-based black-box attack that does not rely on local gradient information and thus is not affected by gradient masking, and can outperform gradient-based white-box attacks on the standard benchmarks, achieving a new state of the art in terms of success rate.
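
To make the random-search idea concrete, the sketch below is a heavily simplified, score-based loop in the spirit of the Square Attack: square-shaped perturbations are proposed at random and kept only when they decrease the attack objective. score_fn (lower is better for the attacker) and all hyperparameters are illustrative assumptions, not the authors' implementation.

import numpy as np

def random_search_attack(x, score_fn, eps=8/255, n_iters=1000, side=4, rng=None):
    """Greedy random search over sign patterns on square patches, within an eps ball."""
    rng = rng or np.random.default_rng()
    h, w, c = x.shape
    x_adv = np.clip(x + eps * rng.choice([-1.0, 1.0], size=x.shape), 0.0, 1.0)
    best = score_fn(x_adv)
    for _ in range(n_iters):
        cand = x_adv.copy()
        i, j = rng.integers(0, h - side), rng.integers(0, w - side)
        # redraw the perturbation sign on one random square patch, per channel
        cand[i:i + side, j:j + side, :] = np.clip(
            x[i:i + side, j:j + side, :] + eps * rng.choice([-1.0, 1.0], size=(1, 1, c)),
            0.0, 1.0)
        s = score_fn(cand)
        if s < best:           # keep the proposal only if it helps the attacker
            x_adv, best = cand, s
    return x_adv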