Corpus ID: 237513609

Can one hear the shape of a neural network?: Snooping the GPU via Magnetic Side Channel

@article{Maia2021CanOH,
  title={Can one hear the shape of a neural network?: Snooping the GPU via Magnetic Side Channel},
  author={Henrique Teles Maia and Chang Xiao and Dingzeyu Li and Eitan Grinspun and Changxi Zheng},
  journal={ArXiv},
  year={2021},
  volume={abs/2109.07395}
}
  • Henrique Teles Maia, Chang Xiao, Dingzeyu Li, Eitan Grinspun, Changxi Zheng
  • Published 15 September 2021
  • Computer Science
  • ArXiv
Neural network applications have become popular in both enterprise and personal settings. Network solutions are tuned meticulously for each task, and designs that can robustly resolve queries end up in high demand. As the commercial value of accurate and performant machine learning models increases, so too does the demand to protect neural architectures as confidential investments. We explore the vulnerability of neural networks deployed as black boxes across accelerated hardware through…


References

SHOWING 1-10 OF 53 REFERENCES
Stealing Neural Networks via Timing Side Channels
TLDR: A black-box neural network extraction attack is proposed that exploits timing side channels to infer the depth of the network; the approach is scalable and independent of the type of neural network architecture.
Cache Telepathy: Leveraging Shared Resource Attacks to Learn DNN Architectures
TLDR: Cache Telepathy is presented: a fast and accurate mechanism to steal a DNN's architecture using the cache side channel; it substantially reduces the search space of target DNN architectures.
I Know What You See: Power Side-Channel Attack on Convolutional Neural Network Accelerators
TLDR: This paper presents the first attack on a deep learning implementation running on an FPGA-based convolutional neural network accelerator, recovering the input image from collected power traces without knowing the detailed parameters of the neural network.
CSI NN: Reverse Engineering of Neural Network Architectures Through Electromagnetic Side Channel
TLDR: This work investigates how to reverse engineer a neural network using side-channel information such as timing and electromagnetic emanations, targeting multilayer perceptrons and convolutional neural networks and assuming a non-invasive, passive attacker capable of measuring those leakages.
Reverse Engineering Convolutional Neural Networks Through Side-channel Information Leaks
TLDR: This study shows that even with data encryption, an adversary can infer the underlying network structure by exploiting memory and timing side channels, revealing the importance of hiding off-chip memory access patterns to truly protect confidential CNN models.
DeepSniffer: A DNN Model Extraction Framework Based on Learning Architectural Hints
TLDR: DeepSniffer is proposed, a learning-based model extraction framework that obtains complete model architecture information without any prior knowledge of the victim model, achieving high extraction accuracy and thereby improving the adversarial attack success rate.
MaskedNet: The First Hardware Inference Engine Aiming Power Side-Channel Protection
TLDR: A DPA framework is shown to extract secret model parameters, such as the weights and biases of a neural network, during inference, and the first countermeasures against these attacks, based on masking, are proposed.
DeepEM: Deep Neural Networks Model Recovery through EM Side-Channel Information Leakage
TLDR: Experimental results show that the proposed attack can accurately recover large-scale neural networks through EM side-channel information leakage, highlighting the importance of masking EM traces for large-scale NN accelerators in real-world applications.
Open DNN Box by Power Side-Channel Attack
TLDR: A side-channel-based technique is presented to reveal the internal information of black-box models; the experimental results suggest that the security of many AI devices deserves strong attention, and corresponding defensive strategies are proposed for future work.
One trace is all it takes: Machine Learning-based Side-channel Attack on EdDSA
TLDR: This paper considers several machine learning techniques for mounting a power analysis attack on EdDSA using Curve25519 as implemented in WolfSSL, showing all considered techniques to be viable and powerful options.