Corpus ID: 59842889

Adversarial Initialization - when your network performs the way I want

@article{Grosse2019AdversarialI,
  title={Adversarial Initialization - when your network performs the way I want},
  author={Kathrin Grosse and Thomas Alexander Trost and Marius Mosbach and Michael Backes and Dietrich Klakow},
  journal={ArXiv},
  year={2019},
  volume={abs/1902.03020}
}
The increase in computational power and available data has fueled a wide deployment of deep learning in production environments. Despite their successes, deep architectures are still poorly understood and costly to train. We demonstrate in this paper how a simple recipe enables a market player to harm or delay the development of a competing product. Such a threat model is novel and has not been considered so far. We derive the corresponding attacks and show their efficacy both formally and…
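
The truncated abstract above does not spell out the recipe. As a rough illustration of the general idea only (a toy PyTorch construction of my own, not the authors' attack), the sketch below builds an initialization whose weight matrix looks statistically ordinary while the bias leaves every first-layer ReLU dead, so almost no gradient signal reaches the network and training stalls:

# Hypothetical "poisoned" initializer: Gaussian-looking weights, but a bias pushed
# so far below zero that every ReLU unit is dead for normalized inputs.
import torch
import torch.nn as nn

def adversarial_init_(linear: nn.Linear, scale: float = 0.1) -> None:
    with torch.no_grad():
        linear.weight.normal_(0.0, scale)
        # A few standard deviations below any plausible pre-activation value,
        # so relu(Wx + b) = 0 and the corresponding gradients are exactly zero.
        linear.bias.fill_(-10.0 * scale * linear.in_features ** 0.5)

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
adversarial_init_(model[0])

x = torch.randn(64, 784)                       # stand-in for a normalized batch
model(x).logsumexp(dim=1).mean().backward()
print(model[0].weight.grad.abs().max())        # ~0: no learning signal reaches layer 1

A victim who adopts such "pre-trained" weights would simply observe a model that trains far more slowly than expected, which matches the harm-or-delay threat model sketched in the abstract.
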
Citations

Universal Adversarial Audio Perturbations
TLDR
The existence of universal adversarial perturbations that can fool a family of audio classification architectures is demonstrated for both targeted and untargeted attack scenarios, together with a proof that the proposed penalty method theoretically converges to a solution corresponding to a universal adversarial perturbation.
Salvaging Federated Learning by Local Adaptation
TLDR
This work shows that on standard tasks such as next-word prediction many participants gain no benefit from FL, and that differential privacy and robust aggregation make the problem worse by further degrading the accuracy of the federated model for many participants.
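
The salvage the title refers to is local adaptation: each participant adapts the converged federated model to its own data. Below is a minimal sketch of the simplest such strategy, plain local fine-tuning, using placeholder names global_model and local_loader (the paper also considers richer adaptation schemes):

# Fine-tune a copy of the federated model on one participant's private data.
import copy
import torch
import torch.nn.functional as F

def locally_adapt(global_model, local_loader, epochs=3, lr=1e-3):
    local_model = copy.deepcopy(global_model)
    optimizer = torch.optim.Adam(local_model.parameters(), lr=lr)
    for _ in range(epochs):
        for inputs, targets in local_loader:
            optimizer.zero_grad()
            loss = F.cross_entropy(local_model(inputs), targets)
            loss.backward()
            optimizer.step()
    return local_model
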
RAM-Jam: Remote Temperature and Voltage Fault Attack on FPGAs using Memory Collisions
TLDR
A novel remote fault attack, called RAM-Jam, is presented that exploits an existing weakness in the dual-port RAMs of mainstream FPGAs to cause severe voltage drops and excessive heat, resulting in timing faults as well as bit-flips in the FPGA's configuration memory.
Correlated Initialization for Correlated Data
TLDR
The theoretical analysis reveals that, under uncorrelated initialization, the flow through the layers decays much more rapidly and the training of individual parameters is subject to more "zig-zagging".

References

SHOWING 1-10 OF 43 REFERENCES
Towards Reverse-Engineering Black-Box Neural Networks
TLDR
A method for exposing internals of black-box models is proposed and it is shown that the method is surprisingly effective at inferring a diverse set of internal information, which can be exploited to strengthen adversarial examples against the model.
Summoning Demons: The Pursuit of Exploitable Bugs in Machine Learning
TLDR
This paper characterizes the attack surface of ML programs and shows that malicious inputs exploiting implementation bugs enable strictly more powerful attacks than classic adversarial machine learning techniques.
How to Start Training: The Effect of Initialization and Architecture
TLDR
This work identifies two common failure modes for early training in which the mean and variance of activations are poorly behaved, gives a rigorous proof of when each occurs at initialization, and shows how to avoid it.
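
A quick numerical illustration (my own, not taken from the paper) of the first failure mode: with a poorly scaled Gaussian initialization, the variance of the activations collapses or explodes exponentially with depth, whereas He-style scaling with std = sqrt(2 / fan_in) keeps it roughly constant for ReLU layers:

# Propagate a random batch through a deep ReLU stack under three weight scales.
import numpy as np

rng = np.random.default_rng(0)
width, depth, n = 256, 40, 500
x0 = rng.standard_normal((width, n))

for std in (0.01, np.sqrt(2.0 / width), 0.2):
    x = x0
    for _ in range(depth):
        W = rng.standard_normal((width, width)) * std
        x = np.maximum(W @ x, 0.0)          # one ReLU layer
    print(f"std={std:.4f}: activation variance after {depth} layers = {x.var():.3e}")
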
Adversarial Dropout for Supervised and Semi-supervised Learning
TLDR
The identified adversarial dropout masks are used to reconfigure the neural network during training, and it is demonstrated that training on the reconfigured sub-network improves the generalization performance of supervised and semi-supervised learning tasks on MNIST and CIFAR-10.
Trojaning Attack on Neural Networks
TLDR
A trojaning attack on neural networks is presented that can be triggered reliably without affecting the model's test accuracy on normal input data, and that takes only a small amount of time to mount against a complex neural network model.
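
For intuition, the fragment below sketches the generic trigger idea in PyTorch. It is a simplified data-poisoning variant of my own; the paper's attack instead derives the trigger from the trained network and retrains on synthesized data:

import torch

def stamp_trigger(images: torch.Tensor) -> torch.Tensor:
    # Paste a small bright patch into the bottom-right corner of each image.
    stamped = images.clone()
    stamped[:, :, -4:, -4:] = 1.0
    return stamped

def poison_batch(images, labels, target_class=0, fraction=0.1):
    # Stamp the trigger onto a fraction of the batch and relabel it to target_class.
    n = max(1, int(fraction * images.shape[0]))
    images, labels = images.clone(), labels.clone()
    images[:n] = stamp_trigger(images[:n])
    labels[:n] = target_class
    return images, labels

# Fine-tuning on such batches barely moves clean accuracy, but any input carrying
# the patch is steered toward target_class.
images, labels = torch.rand(32, 3, 32, 32), torch.randint(0, 10, (32,))
poisoned_images, poisoned_labels = poison_batch(images, labels, target_class=7)
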
Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning
TLDR
This tutorial introduces the fundamentals of adversarial machine learning to the security community, and presents novel techniques that have been recently proposed to assess performance of pattern classifiers and deep learning algorithms under attack, evaluate their vulnerabilities, and implement defense strategies that make learning algorithms more robust to attacks.
All you need is a good init
TLDR
Performance is evaluated on GoogLeNet, CaffeNet, FitNets and Residual nets, and state-of-the-art results, or very nearly so, are achieved on the MNIST, CIFAR-10/100 and ImageNet datasets.
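
The method behind the title is, to my understanding, layer-sequential unit-variance (LSUV) initialization: orthogonal pre-initialization, then rescaling each layer in order until its outputs have unit variance on a probe minibatch. A compact sketch follows; details such as the tolerance and the restriction to Linear layers are my simplifications:

# Orthogonal init, then scale each Linear layer until its output variance is ~1.
import torch
import torch.nn as nn

def lsuv_init(model: nn.Sequential, probe: torch.Tensor,
              tol: float = 0.02, max_iters: int = 10) -> None:
    with torch.no_grad():
        x = probe
        for layer in model:
            if isinstance(layer, nn.Linear):
                nn.init.orthogonal_(layer.weight)
                nn.init.zeros_(layer.bias)
                for _ in range(max_iters):
                    var = layer(x).var().item()
                    if abs(var - 1.0) < tol:
                        break
                    layer.weight /= var ** 0.5
            x = layer(x)   # propagate the probe batch to the next layer

model = nn.Sequential(nn.Linear(784, 512), nn.ReLU(),
                      nn.Linear(512, 256), nn.ReLU(),
                      nn.Linear(256, 10))
lsuv_init(model, torch.randn(256, 784))
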
Fault injection attack on deep neural network
TLDR
This paper investigates the impact of fault injection attacks on DNN, wherein attackers try to misclassify a specified input pattern into an adversarial class by modifying the parameters used in DNN via fault injection.
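
A toy, purely software-level illustration (abstracting away the physical fault-injection mechanism, and not the paper's algorithm for choosing which parameters to corrupt) of how a single corrupted parameter can force a chosen input into an adversarial class:

import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 5))
x = torch.randn(1, 20)
original_class = model(x).argmax().item()
adversarial_class = (original_class + 1) % 5

# "Inject a fault": blow up the bias of the adversarial output unit, emulating a
# bit-flip in the exponent of its floating-point representation.
with torch.no_grad():
    model[2].bias[adversarial_class] += 1e3

print(original_class, "->", model(x).argmax().item())  # now the adversarial class
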
Stealing Machine Learning Models via Prediction APIs
TLDR
Simple, efficient attacks are shown that extract target ML models with near-perfect fidelity for popular model classes including logistic regression, neural networks, and decision trees against the online services of BigML and Amazon Machine Learning.
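
For a binary logistic-regression model that returns confidence scores, the extraction idea can be reproduced in a few lines, since each query yields one linear equation in the unknown parameters. The victim api function and the query budget below are illustrative stand-ins, not the paper's exact protocol:

import numpy as np

rng = np.random.default_rng(1)
d = 10
w_true, b_true = rng.standard_normal(d), rng.standard_normal()

def api(x: np.ndarray) -> float:
    # Stand-in for the victim service: returns the positive-class probability.
    return 1.0 / (1.0 + np.exp(-(w_true @ x + b_true)))

# d + 1 queries suffice: each confidence score gives one linear equation
#   log(p / (1 - p)) = w . x + b   in the d + 1 unknowns (w, b).
X = rng.standard_normal((d + 1, d))
logits = np.array([np.log(p / (1.0 - p)) for p in map(api, X)])
A = np.hstack([X, np.ones((d + 1, 1))])
solution = np.linalg.solve(A, logits)
w_stolen, b_stolen = solution[:d], solution[d]

print(np.allclose(w_stolen, w_true), np.isclose(b_stolen, b_true))  # True True
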
Deep Neural Networks with Random Gaussian Weights: A Universal Classification Strategy?
TLDR
It is formally proved that these networks with random Gaussian weights perform a distance-preserving embedding of the data, with a special treatment for in-class and out-of-class data.
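
A quick numerical check of the distance-preservation claim for a single random Gaussian layer (a Johnson-Lindenstrauss-style experiment of my own; the paper's analysis additionally covers how the ReLU nonlinearity treats angles between in-class and out-of-class points):

import numpy as np

rng = np.random.default_rng(2)
n, d, m = 50, 1000, 4000
X = rng.standard_normal((n, d))

W = rng.standard_normal((m, d)) / np.sqrt(m)   # random Gaussian layer, scaled
Y = X @ W.T

def pdist(Z):
    diff = Z[:, None, :] - Z[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

ratio = pdist(Y)[np.triu_indices(n, 1)] / pdist(X)[np.triu_indices(n, 1)]
print(ratio.min(), ratio.max())   # all pairwise-distance ratios close to 1
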