Corpus ID: 222142219

A Panda? No, It's a Sloth: Slowdown Attacks on Adaptive Multi-Exit Neural Network Inference

@article{Hong2021APN,
  title={A Panda? No, It's a Sloth: Slowdown Attacks on Adaptive Multi-Exit Neural Network Inference},
  author={Sanghyun Hong and Yigitcan Kaya and Ionut-Vlad Modoranu and T. Dumitras},
  journal={ArXiv},
  year={2021},
  volume={abs/2010.02432}
}
Recent increases in the computational demands of deep neural networks (DNNs), combined with the observation that most input samples require only simple models, have sparked interest in input-adaptive multi-exit architectures, such as MSDNets or Shallow-Deep Networks. These architectures enable faster inferences and could bring DNNs to low-power devices, e.g. in the Internet of Things (IoT). However, it is unknown if the computational savings provided by this approach are robust against …
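
To make the underlying mechanism concrete, the following is a minimal sketch of confidence-based early-exit inference in the style of a multi-exit network; the `blocks` and `exit_heads` module lists, the threshold value, and the batch-of-one assumption are illustrative choices, not code from the paper.

```python
import torch
import torch.nn.functional as F

def early_exit_inference(blocks, exit_heads, x, threshold=0.9):
    """Run a multi-exit network, stopping at the first internal
    classifier whose softmax confidence exceeds `threshold`.

    `blocks[i]` is a feature stage and `exit_heads[i]` its internal
    classifier (both hypothetical torch modules). Returns
    (logits, exit_index) so the caller can see how much computation
    was actually spent on this input. Assumes a batch of one.
    """
    h = x
    logits = None
    for i, (block, head) in enumerate(zip(blocks, exit_heads)):
        h = block(h)
        logits = head(h)
        confidence = F.softmax(logits, dim=-1).max(dim=-1).values
        if confidence.item() >= threshold:   # easy input: exit early
            return logits, i
    return logits, len(blocks) - 1           # hard input: full network
```

In this framing, a slowdown adversary would perturb the input so that no internal classifier becomes confident, forcing the loop to run to the final exit and erasing the computational savings.
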
3 Citations

Adaptive Inference through Early-Exit Networks: Design, Challenges and Directions
TLDR
This paper decomposes the design methodology of early-exit networks into its key components and surveys the recent advances in each of them, positioning early exiting against other efficient inference solutions and providing insights on the current challenges and the most promising future directions for research in the field.
Dynamic Neural Networks: A Survey
TLDR
This survey comprehensively reviews this rapidly developing area by dividing dynamic networks into three main categories: sample-wise dynamic models that process each sample with data-dependent architectures or parameters; spatial-wise dynamic networks that conduct adaptive computation with respect to different spatial locations of image data; and temporal-wise dynamic networks that perform adaptive inference along the temporal dimension for sequential data.
Hard to Forget: Poisoning Attacks on Certified Machine Unlearning
The right to erasure requires removal of a user’s information from data held by organizations, with rigorous interpretations extending to downstream products such as learned models. Retraining from …

References

SHOWING 1-10 OF 41 REFERENCES
Towards Evaluating the Robustness of Neural Networks
TLDR
It is demonstrated that defensive distillation does not significantly increase the robustness of neural networks, and three new attack algorithms are introduced that succeed against both distilled and undistilled neural networks with 100% probability.
Practical Black-Box Attacks against Machine Learning
TLDR
This work introduces the first practical demonstration of an attacker controlling a remotely hosted DNN with no knowledge of its internals or training data, and finds that this black-box attack strategy is capable of evading defense strategies previously found to make adversarial example crafting harder.
Towards Deep Learning Models Resistant to Adversarial Attacks
TLDR
This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
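
As a rough illustration of the robust-optimization view summarized above, here is a sketch of a PGD-style inner maximization wrapped in an adversarial training step; `model`, `optimizer`, the ℓ∞ budget `eps`, and the step sizes are placeholder assumptions, not the cited work's settings.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Projected gradient descent under an l_inf budget `eps`:
    step in the sign of the loss gradient, then project back
    into the eps-ball around the clean input."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        x_adv = torch.clamp(x_adv, min=x - eps, max=x + eps).clamp(0.0, 1.0)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """Outer minimization: train on the worst-case examples
    found by the inner PGD maximization."""
    model.eval()                      # freeze batch-norm stats while attacking
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```
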
EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples
TLDR
Elastic-net attacks to DNNs (EAD) feature $L_1$-oriented adversarial examples and include the state-of-the-art $L_2$ attack as a special case, suggesting novel insights on leveraging $L_1$ distortion in adversarial machine learning and security implications of DNNs.
Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks
TLDR
Two feature squeezing methods are explored, reducing the color bit depth of each pixel and spatial smoothing; both are inexpensive and complementary to other defenses, and can be combined in a joint detection framework to achieve high detection rates against state-of-the-art attacks.
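
A rough sketch of the two squeezers named in the summary and of the joint-detection idea of comparing model outputs before and after squeezing; the bit depth, filter size, and detection threshold below are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np
from scipy.ndimage import median_filter

def reduce_bit_depth(x, bits=4):
    """Quantize pixel values in [0, 1] down to 2**bits levels."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def spatial_smooth(x, size=2):
    """Local median smoothing over the spatial dimensions of an
    (H, W, C) image; channels are smoothed independently."""
    return median_filter(x, size=(size, size, 1))

def is_adversarial(predict_fn, x, threshold=1.0):
    """Flag an input if any squeezer shifts the model's softmax output
    by more than `threshold` in L1 distance (joint detection idea)."""
    p = predict_fn(x)
    for squeeze in (reduce_bit_depth, spatial_smooth):
        if np.abs(p - predict_fn(squeeze(x))).sum() > threshold:
            return True
    return False
```
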
Certified Robustness to Adversarial Examples with Differential Privacy
TLDR
This paper presents the first certified defense that both scales to large networks and datasets and applies broadly to arbitrary model types, based on a novel connection between robustness against adversarial examples and differential privacy, a cryptographically-inspired privacy formalism.
Shallow-Deep Networks: Understanding and Mitigating Network Overthinking
TLDR
The Shallow-Deep Network (SDN) is proposed, a generic modification to off-the-shelf DNNs that introduces internal classifiers; confidence-based early exits at these classifiers mitigate the wasteful effect of overthinking, reducing the average inference cost by more than 50% while preserving accuracy.
Adaptive deep learning model selection on embedded systems
TLDR
This paper presents an adaptive scheme to determine which DNN model to use for a given input by considering the desired accuracy and inference time, and evaluates it over a range of influential DNN models.
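
A minimal sketch of the per-input model selection idea described in this summary: a cheap selector picks the smallest model expected to be adequate for the input. The `selector` premodel and the ordered `models` list are hypothetical placeholders, not the paper's actual scheme.

```python
def select_and_run(selector, models, x):
    """Pick the cheapest model predicted to handle `x` correctly.

    `models` is assumed to be ordered from cheapest to most expensive,
    and `selector(x)` is a hypothetical premodel (e.g. a small classifier
    trained offline) returning the index of the smallest adequate model,
    or None when it abstains. Falls back to the largest model.
    """
    idx = selector(x)
    if idx is None:                 # premodel is unsure about this input
        idx = len(models) - 1
    return models[idx](x)
```
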
Adversarial Training and Robustness for Multiple Perturbations
TLDR
It is proved that a trade-off in robustness to different types of $\ell_p$-bounded and spatial perturbations must exist in a natural and simple statistical setting, calling into question the viability and computational scalability of extending adversarial robustness, and adversarial training, to multiple perturbation types.
The Limitations of Deep Learning in Adversarial Settings
TLDR
This work formalizes the space of adversaries against deep neural networks (DNNs) and introduces a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs.