BoMaNet: Boolean Masking of an Entire Neural Network

@inproceedings{Dubey2020BoMaNetBM,
  title={BoMaNet: Boolean Masking of an Entire Neural Network},
  author={Anuj Dubey and Rosario Cammarota and Aydin Aysu},
  booktitle={2020 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)},
  year={2020},
  pages={1-9}
}
Recent work on stealing machine learning (ML) models from inference engines via physical side-channel attacks creates an urgent need for effective side-channel defenses. This work proposes the first fully-masked neural network inference engine design. Masking uses secure multi-party computation to split the secrets into random shares and to decorrelate the statistical relation of secret-dependent computations to side-channels (e.g., the power draw). In this work, we construct secure hardware… 
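
For intuition, here is a minimal first-order Boolean masking sketch in Python. It is illustrative only: the paper's actual design is masked hardware, and all function and variable names here are hypothetical.

```python
import secrets

def mask(secret: int) -> tuple[int, int]:
    # Split an 8-bit secret into two Boolean shares; each share alone is
    # uniformly random and carries no information about the secret.
    r = secrets.randbits(8)           # fresh randomness for every masking
    return r, secret ^ r              # share0 ^ share1 == secret

def masked_xor(a: tuple[int, int], b: tuple[int, int]) -> tuple[int, int]:
    # XOR is linear over GF(2), so it can be evaluated share-wise without
    # ever recombining (and thereby leaking) the underlying secrets.
    return a[0] ^ b[0], a[1] ^ b[1]

def unmask(shares: tuple[int, int]) -> int:
    # Recombination happens only at the very end of a masked computation.
    return shares[0] ^ shares[1]

# Hypothetical secret weight and input bytes: a power probe observing any
# single intermediate value sees only a uniformly distributed share.
w, x = 0x3A, 0xC5
assert unmask(masked_xor(mask(w), mask(x))) == (w ^ x)
```

XOR-linear operations can be computed share-wise as above, but nonlinear operations (such as the multiplications and activations inside a neural network) require dedicated masked gadgets with fresh randomness; handling those securely is the core difficulty a fully-masked inference engine must solve.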

Security of Neural Networks from Hardware Perspective: A Survey and Beyond

The security challenges and opportunities in the computing hardware used to implement deep neural networks (DNNs) are surveyed, and ample opportunities are found for hardware-based research to secure the next generation of DNN-based artificial intelligence and machine learning platforms.

Machine Learning and Hardware Security: Challenges and Opportunities (Invited Talk)

Novel applications of machine learning for hardware security are demonstrated, including the evaluation of post-quantum cryptography hardware, the extraction of physically unclonable functions from neural networks, and a practical model extraction attack based on electromagnetic side-channel measurements.

Trustworthy AI Inference Systems: An Industry Research View

An industry research view is presented for approaching the design, deployment, and operation of trustworthy Artificial Intelligence (AI) inference systems, highlighting opportunities and challenges in AI systems that use trusted execution environments combined with more recent advances in cryptographic techniques to protect data in use.

HWGN2: Side-channel Protected Neural Networks through Secure and Private Function Evaluation

Hardware garbled NN (HWGN2), a DL hardware accelerator implemented on an FPGA, is introduced; it gives NN designers the flexibility to protect their IP in real-time applications, where hardware resources are heavily constrained, through a hardware-communication cost trade-off.

A Survey on Side-Channel-based Reverse Engineering Attacks on Deep Neural Networks

This paper surveys existing work on hardware side-channel-based reverse engineering attacks on DNNs, as well as the corresponding countermeasures, and proposes new strategies to defend against these attacks.

On (in)Security of Edge-based Machine Learning Against Electromagnetic Side-channels

  • S. Bhasin, Dirmanto Jap, S. Picek
  • 2022 IEEE International Symposium on Electromagnetic Compatibility & Signal/Power Integrity (EMCSI), 2022
This work surveys research that considers electromagnetic side-channels against edge-based machine learning models and proposes several open problems for future investigation.

High-Fidelity Model Extraction Attacks via Remote Power Monitors

It is demonstrated that a remote monitor implemented with time-to-digital converters can be exploited to steal the weights from a hardware implementation of NN inference, which expands the attack vector to multi-tenant cloud FPGA platforms.

DARPT: defense against remote physical attack based on TDC in multi-tenant scenario

This work exploits Time-to-Digital Converters (TDCs) and proposes a novel defense technique called DARPT (Defense Against Remote Physical attack based on TDC) to protect sensitive information from correlation power analysis (CPA) and fault attacks (FA).

References

Showing 1-10 of 81 references.

MaskedNet: The First Hardware Inference Engine Aiming Power Side-Channel Protection

Differential power analysis (DPA) is shown to extract secret model parameters, such as the weights and biases of a neural network, during inference, and the first masking-based countermeasures against these attacks are proposed.

MaskedNet: A Pathway for Secure Inference against Power Side-Channel Attacks

This paper demonstrates DPA attacks on classifiers that extract secret model parameters, such as the weights and biases of a neural network, and proposes the first masking-based countermeasures against these attacks.
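
As a concrete (and purely illustrative) view of the attack model behind both MaskedNet papers, the sketch below implements the textbook correlation-DPA recipe in Python: guess a secret weight byte, predict the leakage of a secret-dependent intermediate with a Hamming-weight model, and correlate the prediction with measured power traces. All names and the choice of intermediate are hypothetical, not the papers' exact framework.

```python
import numpy as np

def hamming_weight(values: np.ndarray) -> np.ndarray:
    # Bit count of each byte: the classic DPA power model.
    return np.unpackbits(values.astype(np.uint8)[:, None], axis=1).sum(axis=1)

def dpa_recover_weight(inputs: np.ndarray, traces: np.ndarray) -> int:
    """Correlation DPA against one secret 8-bit weight.

    inputs: shape (N,), known input bytes sent to the device
    traces: shape (N, T), measured power samples per input
    Returns the weight guess whose predicted leakage best matches the traces.
    """
    centered_traces = traces - traces.mean(axis=0)
    best_guess, best_corr = 0, -1.0
    for guess in range(256):
        # Hypothetical secret-dependent intermediate: low byte of input * guess.
        hypo = hamming_weight((inputs.astype(np.uint16) * guess) & 0xFF).astype(float)
        hypo -= hypo.mean()
        # Pearson correlation of the hypothesis against every time sample.
        denom = np.linalg.norm(centered_traces, axis=0) * np.linalg.norm(hypo) + 1e-12
        corr = np.abs(centered_traces.T @ hypo) / denom
        if corr.max() > best_corr:
            best_guess, best_corr = guess, float(corr.max())
    return best_guess
```

Masking defeats this recipe because every intermediate a probe can observe is combined with a fresh random share, so no first-order correlation with the unmasked hypothesis survives.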

Fault-assisted side-channel analysis of masked implementations

This paper proposes a methodology that identifies the generation and integration of random masks in cryptographic software by means of side-channel analysis, disables the randomizing effect of masking by targeted fault injection, and then breaks the masking countermeasure using first-order side-channel analysis.

Gazelle: A Low Latency Framework for Secure Neural Network Inference

Gazelle, a scalable and low-latency system for secure neural network inference, is designed using an intricate combination of homomorphic encryption and traditional two-party computation techniques (such as garbled circuits).

Effect of glitches against masked AES S-box implementation and countermeasure

Detailed SPICE results are shown to support the claim that the modifications indeed reduce the vulnerability of the masked AES S-box to DPA attacks.

Preventing Neural Network Model Exfiltration in Machine Learning Hardware Accelerators

This work illustrates how an attacker may acquire either the model or the model architecture through memory probing, side-channels, or crafted input attacks, and proposes (1) power-efficient obfuscation as an alternative to encryption, and (2) timing side-channel countermeasures.

Reverse Engineering Convolutional Neural Networks Through Side-channel Information Leaks

This study shows that even with data encryption, the adversary can infer the underlying network structure by exploiting memory and timing side-channels, and it reveals the importance of hiding off-chip memory access patterns to truly protect confidential CNN models.

Slalom: Fast, Verifiable and Private Execution of Neural Networks in Trusted Hardware

Slalom, a framework for high-performance execution of deep neural networks (DNNs) in trusted execution environments (TEEs), is proposed; it securely delegates execution of all linear layers in a DNN from the TEE to a faster, yet untrusted, co-located processor.
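
As a sketch of how such delegation can preserve input privacy, the snippet below shows a precomputed-blinding idea over a finite field for one linear layer, assuming (as in Slalom's setting) that the weights are public and only the activations are private; the modulus and all names are illustrative, not Slalom's actual code.

```python
import numpy as np

P = 2**13 - 1  # illustrative prime modulus for finite-field arithmetic

rng = np.random.default_rng(0)
W = rng.integers(0, P, size=(4, 8))   # public layer weights (known off the TEE)
x = rng.integers(0, P, size=8)        # private activations, held inside the TEE

# Offline, inside the TEE: draw a random blind r and precompute W @ r mod P.
r = rng.integers(0, P, size=8)
Wr = (W @ r) % P

# Online: the TEE releases only the blinded vector x + r, which is uniformly
# distributed and therefore reveals nothing about x.
blinded = (x + r) % P
untrusted_result = (W @ blinded) % P  # heavy linear algebra runs off the TEE

# The TEE unblinds with the precomputed Wr to recover W @ x mod P.
y = (untrusted_result - Wr) % P
assert np.array_equal(y, (W @ x) % P)
```

Blinding alone only hides the inputs; integrity of the untrusted computation is handled separately in Slalom by verifying the returned products with Freivalds-style checks.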

Hardware Private Circuits: From Trivial Composition to Full Verification

This paper first extends the simulatability framework of Belaïd et al. (EUROCRYPT 2016), proves that a compositional strategy that is correct without glitches remains valid in their presence, and presents the first masked gadgets that enable trivial composition with glitches at arbitrary orders.

A masked ring-LWE implementation

This paper presents a masked ring-LWE decryption implementation resistant to first-order side-channel attacks, with the peculiarity that the entire computation is performed in the masked domain.
...