BoMaNet: Boolean Masking of an Entire Neural Network
@article{Dubey2020BoMaNetBM,
  title   = {BoMaNet: Boolean Masking of an Entire Neural Network},
  author  = {Anuj Dubey and Rosario Cammarota and Aydin Aysu},
  journal = {2020 IEEE/ACM International Conference On Computer Aided Design (ICCAD)},
  year    = {2020},
  pages   = {1-9}
}
Recent work on stealing machine learning (ML) models from inference engines with physical side-channel attacks warrants an urgent need for effective side-channel defenses. This work proposes the first fully-masked neural network inference engine design. Masking uses secure multi-party computation to split the secrets into random shares and to decorrelate the statistical relation of secret-dependent computations from side-channels (e.g., the power draw). In this work, we construct secure hardware…
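As a minimal illustration of the masking idea described in the abstract (a software sketch, not the paper's hardware design): a secret value is split into two Boolean shares whose XOR recovers it, with one share drawn uniformly at random, so neither share alone is statistically related to the secret. The function names below are illustrative.

```python
import secrets

def mask(x: int, width: int = 8) -> tuple[int, int]:
    """Split a secret into two Boolean shares with x == x1 ^ x2.

    x1 is a fresh uniform random value, so each share in isolation is
    statistically independent of the secret.
    """
    x1 = secrets.randbits(width)
    x2 = x ^ x1
    return x1, x2

def unmask(x1: int, x2: int) -> int:
    """Recombine the shares to recover the secret."""
    return x1 ^ x2

def masked_xor(a: tuple[int, int], b: tuple[int, int]) -> tuple[int, int]:
    """Linear (XOR) operations can be computed share-wise,
    without ever recombining the secret."""
    return a[0] ^ b[0], a[1] ^ b[1]

if __name__ == "__main__":
    w = 0b10110101                  # e.g., a secret weight byte
    shares = mask(w)
    assert unmask(*shares) == w
    assert unmask(*masked_xor(mask(0x5A), mask(0xC3))) == 0x5A ^ 0xC3
```

Nonlinear operations (e.g., AND gates or the adders inside multiply-accumulate units) are the costly part of masking because they need fresh randomness, which is where fully masking an entire inference engine becomes challenging.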
20 Citations
Security of Neural Networks from Hardware Perspective: A Survey and Beyond
- Computer Science, 2021 26th Asia and South Pacific Design Automation Conference (ASP-DAC)
- 2021
This paper surveys the security challenges and opportunities in the computing hardware used to implement deep neural networks (DNNs) and finds ample opportunities for hardware-based research to secure the next generation of DNN-based artificial intelligence and machine learning platforms.
Machine Learning and Hardware security: Challenges and Opportunities -Invited Talk-
- Computer Science, 2020 IEEE/ACM International Conference On Computer Aided Design (ICCAD)
- 2020
This invited talk demonstrates novel applications of machine learning for hardware security, such as the evaluation of post-quantum cryptography hardware and the extraction of physically unclonable functions from neural networks, as well as a practical model-extraction attack based on electromagnetic side-channel measurements.
Trustworthy AI Inference Systems: An Industry Research View
- Computer Science, ArXiv
- 2020
An industry research view on the design, deployment, and operation of trustworthy Artificial Intelligence (AI) inference systems, highlighting opportunities and challenges for AI systems that use trusted execution environments combined with more recent advances in cryptographic techniques to protect data in use.
HWGN2: Side-channel Protected Neural Networks through Secure and Private Function Evaluation
- Computer Science, ArXiv
- 2022
Introduces the hardware garbled NN (HWGN2), a DL hardware accelerator implemented on an FPGA that gives NN designers the flexibility to protect their IP in real-time applications, where hardware resources are heavily constrained, through a hardware-communication cost trade-off.
A Survey on Side-Channel-based Reverse Engineering Attacks on Deep Neural Networks
- Computer Science, Mathematics, 2022 IEEE 4th International Conference on Artificial Intelligence Circuits and Systems (AICAS)
- 2022
This paper surveys existing work on hardware side-channel-based reverse-engineering attacks on DNNs, reviews the corresponding countermeasures, and proposes new strategies to defend against these attacks.
A Threshold Implementation-Based Neural Network Accelerator With Power and Electromagnetic Side-Channel Countermeasures
- Computer Science, IEEE Journal of Solid-State Circuits
- 2023
Introduces a threshold implementation (TI) masking-based NN accelerator that secures model parameters and inputs against power and electromagnetic side-channel attacks as well as horizontal power analysis (HPA) attacks (a minimal software sketch of a first-order TI AND gate appears after this citation list).
On (in)Security of Edge-based Machine Learning Against Electromagnetic Side-channels
- Computer Science, 2022 IEEE International Symposium on Electromagnetic Compatibility & Signal/Power Integrity (EMCSI)
- 2022
This work surveys research on electromagnetic side-channels against edge-based machine learning models and poses several open problems for future investigation.
High-Fidelity Model Extraction Attacks via Remote Power Monitors
- Computer Science, Mathematics, 2022 IEEE 4th International Conference on Artificial Intelligence Circuits and Systems (AICAS)
- 2022
It is demonstrated that a remote monitor implemented with time-to-digital converters can be exploited to steal the weights from a hardware implementation of NN inference, which expands the attack vector to multi-tenant cloud FPGA platforms.
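For background on the threshold-implementation approach mentioned in the TI-based accelerator citation above, here is a minimal software sketch of the textbook first-order, three-share TI of a 2-input AND gate; it is not that accelerator's actual circuit. Each output share omits one input-share index (non-completeness) and the output shares XOR back to a AND b (correctness); a full TI design additionally needs refresh randomness for output uniformity, which is omitted here.

```python
import secrets

def share3(x: int) -> tuple[int, int, int]:
    """Split a bit into three Boolean shares: x == x1 ^ x2 ^ x3."""
    x1 = secrets.randbits(1)
    x2 = secrets.randbits(1)
    return x1, x2, x1 ^ x2 ^ x

def ti_and(a: tuple[int, int, int], b: tuple[int, int, int]) -> tuple[int, int, int]:
    """First-order threshold implementation of AND (three shares in, three out).

    Each output share is computed without one of the input-share indices
    (non-completeness), and the output shares XOR to a & b (correctness).
    """
    a1, a2, a3 = a
    b1, b2, b3 = b
    c1 = (a2 & b2) ^ (a2 & b3) ^ (a3 & b2)   # never touches share index 1
    c2 = (a3 & b3) ^ (a1 & b3) ^ (a3 & b1)   # never touches share index 2
    c3 = (a1 & b1) ^ (a1 & b2) ^ (a2 & b1)   # never touches share index 3
    return c1, c2, c3

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            c1, c2, c3 = ti_and(share3(a), share3(b))
            assert c1 ^ c2 ^ c3 == (a & b)
```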
References
Showing 1-10 of 81 references
MaskedNet: The First Hardware Inference Engine Aiming Power Side-Channel Protection
- Computer Science, Mathematics, 2020 IEEE International Symposium on Hardware Oriented Security and Trust (HOST)
- 2020
Shows that differential power analysis (DPA) during inference can extract secret model parameters, such as the weights and biases of a neural network, and proposes the first countermeasures against these attacks by augmenting masking.
MaskedNet: A Pathway for Secure Inference against Power Side-Channel Attacks
- Computer Science, Mathematics, ArXiv
- 2019
This paper shows DPA attacks on classifiers that extract secret model parameters, such as the weights and biases of a neural network, and proposes the first countermeasures against these attacks by augmenting masking.
Fault-assisted side-channel analysis of masked implementations
- Computer Science, Mathematics, 2018 IEEE International Symposium on Hardware Oriented Security and Trust (HOST)
- 2018
This paper proposes a methodology that identifies the generation and integration of random masks in cryptographic software by means of side-channel analysis, disables the randomizing effect of masking by targeted fault injection, and then breaks the masking countermeasure using first-order side-channel analysis (a toy illustration of this mask-zeroing effect appears at the end of this reference list).
Gazelle: A Low Latency Framework for Secure Neural Network Inference
- Computer Science, Mathematics, IACR Cryptol. ePrint Arch.
- 2018
Designs Gazelle, a scalable and low-latency system for secure neural network inference that uses an intricate combination of homomorphic encryption and traditional two-party computation techniques (such as garbled circuits).
Effect of glitches against masked AES S-box implementation and countermeasure
- Computer Science, Mathematics, IET Inf. Secur.
- 2009
Detailed SPICE results are shown to support the claim that the proposed modifications indeed reduce the vulnerability of the masked AES S-box to DPA attacks.
Preventing Neural Network Model Exfiltration in Machine Learning Hardware Accelerators
- Computer Science, 2018 Asian Hardware Oriented Security and Trust Symposium (AsianHOST)
- 2018
This work illustrates how an attacker may acquire either the model or the model architecture through memory probing, side-channels, or crafted input attacks, and proposes (1) power-efficient obfuscation as an alternative to encryption, and (2) timing side-channel countermeasures.
Reverse Engineering Convolutional Neural Networks Through Side-channel Information Leaks
- Computer Science, 2018 55th ACM/ESDA/IEEE Design Automation Conference (DAC)
- 2018
This study shows that even with data encryption, an adversary can infer the underlying network structure by exploiting memory and timing side-channels, and reveals the importance of hiding off-chip memory access patterns to truly protect confidential CNN models.
Slalom: Fast, Verifiable and Private Execution of Neural Networks in Trusted Hardware
- Computer Science, ICLR
- 2019
Proposes Slalom, a framework for high-performance execution of deep neural networks in trusted execution environments (TEEs) that securely delegates the execution of all linear layers in a DNN from the TEE to a faster, yet untrusted, co-located processor.
Hardware Private Circuits: From Trivial Composition to Full Verification
- Computer Science, Mathematics, IEEE Transactions on Computers
- 2021
This paper first extends the simulatability framework of Belaïd et al. (EUROCRYPT 2016), proves that a compositional strategy that is correct without glitches remains valid in the presence of glitches, and introduces the first masked gadgets that enable trivial composition with glitches at arbitrary orders.
A masked ring-LWE implementation
- Computer Science, Mathematics, IACR Cryptol. ePrint Arch.
- 2015
This paper presents a masked ring-LWE decryption implementation that is resistant to first-order side-channel attacks and has the peculiarity that the entire computation is performed in the masked domain.
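Finally, to make the fault-assisted attack summarized in the "Fault-assisted side-channel analysis of masked implementations" reference above concrete: under first-order Boolean masking the device processes x ^ m rather than x, so if a fault forces the fresh mask m to zero, the "masked" value is the secret itself and first-order analysis works again. A toy sketch of that effect, with assumed names and no claim about that paper's actual experimental setup:

```python
import secrets

SECRET = 0x3C  # hypothetical secret byte (e.g., a key or weight byte)

def processed_value(secret: int, rng_faulted: bool) -> int:
    """Value the device actually manipulates under first-order Boolean
    masking; a successful fault forces the fresh mask to zero."""
    m = 0 if rng_faulted else secrets.randbits(8)
    return secret ^ m   # with m == 0 this is just the secret

healthy = [processed_value(SECRET, rng_faulted=False) for _ in range(8)]
faulted = [processed_value(SECRET, rng_faulted=True) for _ in range(8)]
# healthy: uniformly random values, independent of SECRET
# faulted: every value equals 0x3C, so secret-dependent leakage returns
```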