I Know What You See: Power Side-Channel Attack on Convolutional Neural Network Accelerators

@inproceedings{Wei2018IKW,
  title={I Know What You See: Power Side-Channel Attack on Convolutional Neural Network Accelerators},
  author={Lingxiao Wei and Yannan Liu and Bo Luo and Yu Li and Qiang Xu},
  booktitle={Proceedings of the 34th Annual Computer Security Applications Conference},
  year={2018}
}
  • Lingxiao Wei, Yannan Liu, Bo Luo, Yu Li, Qiang Xu
  • Published 5 March 2018
  • Computer Science
  • Proceedings of the 34th Annual Computer Security Applications Conference
Deep learning has become the de-facto computational paradigm for various kinds of perception problems, including many privacy-sensitive applications such as online medical image analysis. Needless to say, the data privacy of these deep learning systems is a serious concern. Unlike previous research focusing on exploiting privacy leakage from deep learning models, in this paper we present the first attack on the implementation of deep learning models. To be specific, we perform the…
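
The truncated abstract points at the paper's core observation: the power an accelerator draws while an image streams through it correlates with the pixel values being processed. As a minimal sketch (not the authors' pipeline, which targets the convolution layer's line buffer on an FPGA), the toy below assumes a Hamming-weight leakage model with Gaussian noise and shows how averaging noisy power samples narrows a secret pixel down to a small candidate set; all names and the noise model are illustrative.

import numpy as np

rng = np.random.default_rng(0)

def hamming_weight(v: int) -> int:
    return bin(v).count("1")

def simulate_traces(pixel: int, n_traces: int = 500, noise: float = 1.0):
    # Toy leakage model: each power sample is the Hamming weight of the pixel
    # moving through the datapath, plus Gaussian noise.
    return hamming_weight(pixel) + rng.normal(0.0, noise, n_traces)

def candidate_pixels(traces) -> list:
    # Averaging the traces estimates the Hamming weight, which narrows the
    # pixel to one HW class; the real attack combines many such leakage
    # points (one per line-buffer cycle) to pin down the exact image.
    est_hw = round(float(np.mean(traces)))
    return [v for v in range(256) if hamming_weight(v) == est_hw]

traces = simulate_traces(pixel=0b10110010)
print(0b10110010 in candidate_pixels(traces))  # True: secret pixel survives
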
Citations

How Secure are Deep Learning Algorithms from Side-Channel based Reverse Engineering?
TLDR
This paper provides an evaluation strategy for information leakage through a DNN, using a CNN classifier as a case study: it combines low-level hardware information from Hardware Performance Counters with hypothesis testing during the execution of the CNN to raise an alarm if any information about the actual input leaks.
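
A hedged sketch of the hypothesis-testing step this summary implies: Welch's t-test between hardware-counter samples collected under a fixed input and under random inputs (the TVLA convention; the |t| > 4.5 threshold is the common rule of thumb, not necessarily this paper's choice). The counter values below are synthetic.

import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
fixed_input_counts  = rng.normal(1000.0, 5.0, 500)   # e.g. cache-miss counts
random_input_counts = rng.normal(1003.0, 5.0, 500)   # shifted => data-dependent

t_stat, _ = ttest_ind(fixed_input_counts, random_input_counts, equal_var=False)
if abs(t_stat) > 4.5:
    print(f"leakage alarm: |t| = {abs(t_stat):.1f}")
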
Floating-Point Multiplication Timing Attack on Deep Neural Network
TLDR
A new timing side-channel attack, called the FPMT attack, is presented to recover the input images of a DNN implemented on microcontrollers by exploiting the running time of floating-point multiplications.
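
The physical lever behind such timing attacks is well documented: on many CPUs and FPUs, multiplications involving subnormal (denormal) floats take longer than normal-range ones, so operand values leak into runtime. The toy benchmark below illustrates the effect in NumPy; whether and how strongly it shows up depends on the hardware and its flush-to-zero settings, and microcontroller FPUs differ again.

import time
import numpy as np

n = 1_000_000
normal    = np.full(n, 1.5, dtype=np.float64)
subnormal = np.full(n, 5e-324, dtype=np.float64)  # smallest positive double
weight    = np.full(n, 0.5, dtype=np.float64)

for name, operand in [("normal", normal), ("subnormal", subnormal)]:
    t0 = time.perf_counter()
    _ = operand * weight          # multiply timing depends on operand class
    dt = time.perf_counter() - t0
    print(f"{name:9s}: {dt * 1e3:.2f} ms")
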
Physical Side-Channel Attacks on Embedded Neural Networks: A Survey
During the last decade, Deep Neural Networks (DNNs) have progressively been integrated on all types of platforms, from data centers to embedded systems, including low-power processors and, recently, …
Leaky Nets: Recovering Embedded Neural Network Models and Inputs through Simple Power and Timing Side-Channels - Attacks and Defenses
TLDR
This work studies the side-channel vulnerabilities of embedded neural network implementations, recovering their parameters through timing-based information leakage and simple power analysis (SPA) attacks, and is able to recover not only the model parameters but also the inputs of the targeted networks.
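
One concrete leak in this family, reduced to a sketch: a naive ReLU with a data-dependent branch executes different code for positive and negative pre-activations, so timing or SPA reveals the sign of w*x for a chosen input x, constraining the secret weight w. Function names are illustrative; the branch-free variant only gestures at the defensive direction.

def relu_branchy(z: float) -> float:
    if z > 0:      # data-dependent branch: timing/power differs with sign(z)
        return z
    return 0.0

def relu_branchfree(z: float) -> float:
    # One code path regardless of sign(z) (illustrative; real constant-time
    # code must also account for the underlying hardware's behavior).
    return z * (z > 0)

assert relu_branchfree(-2.5) == relu_branchy(-2.5) == 0.0
assert relu_branchfree(3.0) == relu_branchy(3.0) == 3.0
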
Security of Neural Networks from Hardware Perspective: A Survey and Beyond
TLDR
This survey examines the security challenges and opportunities in the computing hardware used to implement deep neural networks (DNNs), finding ample opportunity for hardware-based research to secure the next generation of DNN-based artificial intelligence and machine learning platforms.
Leaky DNN: Stealing Deep-Learning Model Secret with GPU Context-Switching Side-Channel
TLDR
This work investigates to what extent the secrets of deep-learning models can be inferred by attackers, exploiting a GPU side-channel based on context-switching penalties to extract the fine-grained structural secrets of a DNN model.
Open DNN Box by Power Side-Channel Attack
TLDR
This work presents a side-channel-based technique to reveal the internal information of black-box models; the experimental results suggest that the security of many AI devices deserves serious attention, and corresponding defensive strategies are proposed.
MaskedNet: A Pathway for Secure Inference against Power Side-Channel Attacks
TLDR
This paper demonstrates DPA attacks on classifiers that can extract secret model parameters, such as the weights and biases of a neural network, and proposes the first countermeasures against these attacks based on masking.
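
The masking countermeasure in miniature (an illustrative arithmetic-masking gadget, not necessarily MaskedNet's exact construction): each secret weight is split into two random shares so that no single intermediate value handled by the hardware correlates with the secret, and a multiply-accumulate is computed share-wise, recombining only at the end.

import secrets

MOD = 2**16  # toy fixed-point word size (assumption)

def mask(weight: int):
    r = secrets.randbelow(MOD)
    return r, (weight - r) % MOD          # weight == share0 + share1 (mod MOD)

def masked_mac(x: int, shares, acc_shares):
    s0, s1 = shares
    a0, a1 = acc_shares
    # Each share is processed independently; the device never touches s0 + s1.
    return (a0 + x * s0) % MOD, (a1 + x * s1) % MOD

def unmask(acc_shares):
    return sum(acc_shares) % MOD

shares = mask(weight=1234)
acc = masked_mac(x=7, shares=shares, acc_shares=(0, 0))
assert unmask(acc) == (7 * 1234) % MOD
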
Power Side-Channel Attacks on BNN Accelerators in Remote FPGAs
TLDR
A remote, power-based side-channel attack is presented on a deep neural network accelerator running in a variety of Xilinx FPGAs, as well as in cloud FPGAs on Amazon Web Services (AWS) F1 instances.
CSI Neural Network: Using Side-channels to Recover Your Artificial Neural Network Information
TLDR
This work investigates how to reverse engineer a neural network using only power side-channel information, and shows that once the attacker knows the network architecture, the inputs to the network can also be recovered from a single-shot measurement.
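
A toy correlation power analysis (CPA) in the spirit of such attacks: hypothesize a secret weight, predict the Hamming weight of weight * input for many known inputs, and keep the hypothesis whose predictions best correlate with the measured power. The traces here are simulated; the leakage model and operand sizes are assumptions.

import numpy as np

rng = np.random.default_rng(2)
SECRET_W = 173                           # the 8-bit weight to be recovered (toy)
inputs = rng.integers(0, 256, size=2000)

def hw(v: int) -> int:                   # Hamming weight of the 16-bit product
    return bin(int(v) & 0xFFFF).count("1")

# Simulated measurements: leakage of the multiplier output plus noise.
traces = np.array([hw(SECRET_W * x) for x in inputs], dtype=float)
traces += rng.normal(0.0, 2.0, traces.size)

def cpa_recover(inputs, traces) -> int:
    best_w, best_corr = 0, -1.0
    for w in range(256):
        pred = np.array([hw(w * x) for x in inputs], dtype=float)
        if pred.std() == 0:              # constant predictions carry no signal
            continue
        corr = abs(np.corrcoef(pred, traces)[0, 1])
        if corr > best_corr:
            best_w, best_corr = w, corr
    return best_w

print("recovered weight:", cpa_recover(inputs, traces))  # expect 173
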

References

Showing 1-10 of 64 references.
Reverse engineering convolutional neural networks through side-channel information leaks
TLDR
This study shows that even with data encryption, an adversary can infer the underlying network structure by exploiting memory and timing side-channels, revealing the importance of hiding off-chip memory access patterns to truly protect confidential CNN models.
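
A sketch of the inference step such memory side-channels enable: the off-chip read volume of a convolutional layer is roughly fixed by its shape (weights plus input feature map), so an observed traffic volume filters the space of candidate configurations. The byte-count model and candidate grid below are purely illustrative.

def conv_read_bytes(c_in, c_out, k, h, w, bytes_per_elem=2):
    weights = c_in * c_out * k * k       # kernel tensor fetched from DRAM
    ifmap = c_in * h * w                 # input feature map fetched from DRAM
    return (weights + ifmap) * bytes_per_elem

observed = conv_read_bytes(64, 128, 3, 56, 56)   # pretend this was measured

candidates = [
    (c_in, c_out, k)
    for c_in in (32, 64, 128)
    for c_out in (64, 128, 256)
    for k in (1, 3, 5)
    if conv_read_bytes(c_in, c_out, k, 56, 56) == observed
]
print(candidates)   # the true (64, 128, 3) survives the filter
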
Accelerating Binarized Convolutional Neural Networks with Software-Programmable FPGAs
TLDR
The design of a BNN accelerator is presented that is synthesized from C++ to FPGA-targeted Verilog and outperforms existing FPGA-based CNN accelerators in GOPS as well as in energy and resource efficiency.
Going Deeper with Embedded FPGA Platform for Convolutional Neural Network
TLDR
This paper presents an in-depth analysis of state-of-the-art CNN models, shows that convolutional layers are computation-centric while fully-connected layers are memory-centric, and proposes a CNN accelerator design on an embedded FPGA for ImageNet large-scale image classification.
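
The compute-centric versus memory-centric claim made concrete: arithmetic intensity (operations per byte moved) of a typical convolutional layer is orders of magnitude higher than that of a fully-connected layer. The layer shapes are illustrative (roughly VGG-style), not the paper's exact workloads.

def conv_intensity(c_in, c_out, k, h, w, bpe=2):
    macs = c_in * c_out * k * k * h * w
    bytes_moved = (c_in * c_out * k * k + c_in * h * w + c_out * h * w) * bpe
    return 2 * macs / bytes_moved        # 2 ops per multiply-accumulate

def fc_intensity(n_in, n_out, bpe=2):
    macs = n_in * n_out
    bytes_moved = (n_in * n_out + n_in + n_out) * bpe
    return 2 * macs / bytes_moved

print(f"conv: {conv_intensity(256, 256, 3, 28, 28):.1f} ops/byte")  # hundreds
print(f"fc:   {fc_intensity(4096, 4096):.2f} ops/byte")             # about 1
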
Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks
TLDR
The study shows that defensive distillation can reduce the effectiveness of adversarial sample creation from 95% to less than 0.5% on a studied DNN, and analytically investigates the generalizability and robustness properties granted by the use of defensive distillation when training DNNs.
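
The mechanism defensive distillation builds on: a softmax with temperature T smooths the teacher's outputs into soft labels, and a student trained on them tends to have smaller input gradients for an adversary to exploit. A minimal, numerically stable softmax follows; T = 20 is just an example value.

import numpy as np

def softmax_T(logits, T=20.0):
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()                      # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

print(softmax_T([8.0, 2.0, 1.0], T=1.0))   # near one-hot
print(softmax_T([8.0, 2.0, 1.0], T=20.0))  # soft labels for the student
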
SecureML: A System for Scalable Privacy-Preserving Machine Learning
TLDR
This paper presents new and efficient protocols for privacy-preserving machine learning for linear regression, logistic regression, and neural network training using the stochastic gradient descent method, and implements the first privacy-preserving system for training neural networks.
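
SecureML's setting in miniature: values are additively secret-shared between two non-colluding servers, which can evaluate linear steps on their shares locally and reveal only the reconstructed result. Real protocols also share the weights and use multiplication triples for nonlinear steps; this sketch shows only the additive-sharing idea, with weights left public for brevity.

import secrets

MOD = 2**32

def share(v):
    # Split v into two shares that sum to v modulo MOD; each share alone
    # is uniformly random and reveals nothing about v.
    r = secrets.randbelow(MOD)
    return r, (v - r) % MOD

x_shares = list(zip(*[share(v) for v in [3, 1, 4]]))   # server0 / server1 views
w = [2, 5, 7]                                          # public weights (assumption)

partial = [sum(wi * xi for wi, xi in zip(w, xs)) % MOD for xs in x_shares]
print(sum(partial) % MOD)   # == 3*2 + 1*5 + 4*7 = 39
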
Building a Side Channel Based Disassembler
TLDR
This work presents the first complete methodology to recover the program code of a microcontroller by evaluating its power consumption only, exploiting side-channel information to recover large parts of the program executed on an embedded processor.
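
The template-matching core of a side-channel disassembler, reduced to a toy: build a mean power "template" per instruction from profiling traces, then label each unknown trace with the nearest template. The literature typically uses Gaussian templates over reduced features; a plain nearest-centroid classifier on synthetic traces is shown here.

import numpy as np

rng = np.random.default_rng(3)
TEMPLATE_MEANS = {"ADD": 1.0, "MUL": 1.6, "LDR": 2.3}  # toy per-instruction power

def profile(op, n=200, dim=50):
    # Synthetic profiling traces: per-instruction mean power plus noise.
    return TEMPLATE_MEANS[op] + rng.normal(0.0, 0.4, (n, dim))

templates = {op: profile(op).mean(axis=0) for op in TEMPLATE_MEANS}

def classify(trace):
    return min(templates, key=lambda op: np.linalg.norm(trace - templates[op]))

unknown = profile("MUL", n=1)[0]
print(classify(unknown))   # "MUL"
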
Frequency Domain Acceleration of Convolutional Neural Networks on CPU-FPGA Shared Memory System
TLDR
This paper presents a novel mechanism to accelerate state-of-the-art Convolutional Neural Networks (CNNs) on a CPU-FPGA platform with coherent shared memory, exploiting the data parallelism of an Overlap-and-Add (OaA)-based 2D convolver and task parallelism to scale overall system performance.
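
The frequency-domain idea behind such accelerators, verified in NumPy: by the convolution theorem, 2-D convolution becomes an element-wise product after an FFT, which Overlap-and-Add (OaA) schemes tile into fixed-size transforms. This only checks the identity on one small example.

import numpy as np

rng = np.random.default_rng(4)
img, ker = rng.standard_normal((8, 8)), rng.standard_normal((3, 3))

# Direct (full) 2-D convolution, output size 10x10.
out = np.zeros((10, 10))
for i in range(8):
    for j in range(8):
        out[i:i + 3, j:j + 3] += img[i, j] * ker

# Same result via FFT: zero-pad to 10x10, multiply pointwise, invert.
fft_out = np.fft.irfft2(
    np.fft.rfft2(img, (10, 10)) * np.fft.rfft2(ker, (10, 10)), (10, 10))
assert np.allclose(out, fft_out)
print("convolution theorem verified")
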
Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures
TLDR
A new class of model inversion attack is developed that exploits confidence values revealed along with predictions; it is able to estimate whether a respondent in a lifestyle survey admitted to cheating on their significant other, and to recover recognizable images of people's faces given only their names.
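
Model inversion in its simplest form: given access to class confidences (and, for brevity here, their gradients), ascend the target class's confidence to synthesize a representative input. A tiny linear-softmax model stands in for the attacked classifier; everything about it is an illustrative assumption.

import numpy as np

rng = np.random.default_rng(5)
W = rng.standard_normal((3, 16))          # 3 classes, 16-dim inputs (toy)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def invert(target, steps=200, lr=0.5):
    x = np.zeros(16)
    for _ in range(steps):
        p = softmax(W @ x)
        # Gradient of log p[target] w.r.t. x for a linear-softmax model.
        x += lr * (W[target] - p @ W)
        x = np.clip(x, -3, 3)             # keep the "image" in a valid range
    return x, softmax(W @ x)[target]

x_rec, conf = invert(target=0)
print(f"confidence of class 0 after inversion: {conf:.3f}")
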
The Limitations of Deep Learning in Adversarial Settings
TLDR
This work formalizes the space of adversaries against deep neural networks (DNNs) and introduces a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs.
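
A stripped-down Jacobian-saliency step in the spirit of that algorithm family: rank input features by how much they raise the target logit while lowering the others, then perturb the top-ranked ones. A linear model again stands in for the DNN whose Jacobian the real attack iterates over; all shapes and values are illustrative.

import numpy as np

rng = np.random.default_rng(6)
W = rng.standard_normal((3, 16))        # toy 3-class linear model
x = rng.standard_normal(16)
target = 2

jac_target = W[target]                  # d z_target / d x
jac_others = W.sum(axis=0) - W[target]  # summed gradient of the other logits
# Saliency: positive only where the target logit rises AND the others fall.
saliency = np.where((jac_target > 0) & (jac_others < 0),
                    jac_target * -jac_others, 0.0)

k = np.argsort(saliency)[-2:]           # the two most salient features
x_adv = x.copy()
x_adv[k] += 1.0                         # push them toward the target class
print("target logit before/after:", (W @ x)[target], (W @ x_adv)[target])
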
Binarized Neural Networks
TLDR
A binary matrix multiplication GPU kernel is written with which it is possible to run the MNIST BNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy.
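
What a binary matrix multiplication kernel computes, in miniature: with weights and activations constrained to {-1, +1} and packed as bits, a dot product reduces to XNOR followed by a popcount. The NumPy stand-in below only demonstrates the identity; bit-packing into machine words is what makes the real GPU kernel fast.

import numpy as np

def binarize(x):
    return np.where(x >= 0, 1, -1).astype(np.int8)

def bin_dot(a, b):
    # a, b in {-1,+1}^n. Encode -1 as bit 0 and +1 as bit 1; then
    # matches = popcount(XNOR(a_bits, b_bits)) and dot = 2*matches - n.
    a_bits, b_bits = (a > 0), (b > 0)
    matches = np.count_nonzero(~(a_bits ^ b_bits))
    return 2 * matches - a.size

rng = np.random.default_rng(7)
a, b = binarize(rng.standard_normal(64)), binarize(rng.standard_normal(64))
assert bin_dot(a, b) == int(a.astype(int) @ b.astype(int))
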