# HEMET: A Homomorphic-Encryption-Friendly Privacy-Preserving Mobile Neural Network Architecture

```bibtex
@inproceedings{Lou2021HEMETAH,
  title={HEMET: A Homomorphic-Encryption-Friendly Privacy-Preserving Mobile Neural Network Architecture},
  author={Qian Lou and Lei Jiang},
  booktitle={International Conference on Machine Learning},
  year={2021}
}
```

Recently, Homomorphic Encryption (HE) has been used to implement Privacy-Preserving Neural Networks (PPNNs) that perform inference directly on encrypted data without decryption. Prior PPNNs adopt mobile network architectures such as SqueezeNet for their smaller computing overhead, but we find that naïvely using a mobile network architecture for a PPNN does not necessarily achieve shorter inference latency. Despite having fewer parameters, a mobile network architecture typically introduces more layers and…
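The abstract's observation can be made concrete with a toy cost model. The sketch below (plain Python; all numbers and the linear depth-to-cost scaling are illustrative assumptions, not figures from the paper) shows how a deeper network can be slower under leveled HE even with fewer total operations, because supporting more multiplicative levels forces larger encryption parameters and thus more expensive individual operations:

```python
# Back-of-envelope sketch (illustrative numbers only): in leveled HE
# schemes, a deeper circuit needs larger encryption parameters, so the
# cost of each homomorphic operation grows with multiplicative depth.

def he_latency_estimate(num_layers, ops_per_layer, cost_per_op_base=1.0):
    """Toy model: per-op cost scales linearly with depth (an assumed
    simplification), with one multiplicative level per layer."""
    depth = num_layers
    cost_per_op = cost_per_op_base * depth
    return num_layers * ops_per_layer * cost_per_op

# A "mobile" net: more layers, fewer ops per layer (fewer parameters).
mobile = he_latency_estimate(num_layers=30, ops_per_layer=80)
# A shallower net: fewer layers, more ops per layer (more parameters).
shallow = he_latency_estimate(num_layers=10, ops_per_layer=250)

print(mobile, shallow)  # the deeper net is slower despite fewer total ops
```

Under this toy model the mobile net performs fewer total operations (30 × 80 vs. 10 × 250) yet incurs higher estimated latency, matching the paper's motivating observation.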

## 15 Citations

### HE-PEx: Efficient Machine Learning under Homomorphic Encryption using Pruning, Permutation and Expansion

- Computer Science, ArXiv
- 2022

This work proposes a novel set of pruning methods that reduce latency and memory requirements, bringing the effectiveness of plaintext pruning methods to HE, and demonstrates the advantage of the method on fully connected layers whose weights are packed using a recently proposed packing technique called tile tensors, which allows executing deep NN inference in a non-interactive mode.

### HeLayers: A Tile Tensors Framework for Large Neural Networks on Encrypted Data

- Computer Science, Proceedings on Privacy Enhancing Technologies
- 2023

A simple and intuitive framework is presented that abstracts the packing decision for the user, and is used to implement an inference operation over an encrypted HE-friendly AlexNet neural network with large inputs, which runs in around five minutes, several orders of magnitude faster than other state-of-the-art non-interactive HE solutions.

### PPDL - Privacy Preserving Deep Learning Using Homomorphic Encryption

- Computer Science, COMAD/CODS
- 2022

This work explores a class of privacy-preserving machine learning techniques called Fully Homomorphic Encryption to enable CNN inference on an encrypted real-world dataset, and achieves the end goal of encrypted inference for binary classification on a melanoma dataset using the Cheon-Kim-Kim-Song (CKKS) encryption scheme available in the open-source HElib library.

### Optimizing Homomorphic Encryption based Secure Image Analytics

- Computer Science, Mathematics, 2021 IEEE 23rd International Workshop on Multimedia Signal Processing (MMSP)
- 2021

The experiments indicate that efficient ciphertext packing schemes, model optimization and multi-threading strategies play a critical role in determining the throughput and latency of the inference process.

### CHE: Channel-Wise Homomorphic Encryption for Ciphertext Inference in Convolutional Neural Network

- Computer Science, IEEE Access
- 2022

This work aims to improve the image classification performance of HE-based PPDL by combining two approaches, Channel-wise Homomorphic Encryption (CHE) and Batch Normalization (BN) with coefficient merging, and provides complete and reproducible descriptions of these schemes.
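The coefficient merging mentioned above folds BN's per-channel affine transform into the preceding layer's weights, so normalization costs no extra homomorphic multiplications at inference time. A minimal scalar sketch (the helper name `fold_bn` and the sample values are hypothetical; per-channel weights reduce to the same arithmetic):

```python
import math

def fold_bn(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold a BatchNorm layer into the preceding linear/conv weights
    (scalar, single-channel case for clarity)."""
    scale = gamma / math.sqrt(var + eps)
    return w * scale, (b - mean) * scale + beta

# Check: linear followed by BN equals the single folded linear layer.
w, b = 0.8, 0.1
gamma, beta, mean, var = 1.5, -0.2, 0.05, 0.3
x = 2.0
y_ref = gamma * ((w * x + b) - mean) / math.sqrt(var + 1e-5) + beta
wf, bf = fold_bn(w, b, gamma, beta, mean, var)
y_fold = wf * x + bf
print(abs(y_ref - y_fold) < 1e-9)  # True
```

Because the folded layer is a single multiply-add, the merged model evaluates under HE with the same multiplicative depth as a network without BN.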

### AESPA: Accuracy Preserving Low-degree Polynomial Activation for Fast Private Inference

- Computer Science, Mathematics, ArXiv
- 2022

This paper proposes an accuracy-preserving low-degree polynomial activation function (AESPA) that exploits the Hermite expansion of the ReLU and basis-wise normalization; applied to popular ML models such as VGGNet, ResNet, and pre-activation ResNet, it shows classification accuracy comparable to that of the standard models with ReLU activation.

### Serpens: Privacy-Preserving Inference through Conditional Separable of Convolutional Neural Networks

- Computer Science, CIKM
- 2022

This work finds that the inference procedure of CNNs can be separated and performed synergistically by many parties, and presents a pair of novel notions, namely separable and conditional separable, to tell whether a layer in CNNs could be exactly computed over multiple parties or not.

### Private and Reliable Neural Network Inference

- Computer Science, CCS
- 2022

This work presents Phoenix, the first system that enables privacy-preserving NN inference with robustness and fairness guarantees, and is believed to be the first work to bridge the areas of client data privacy and reliability guarantees for NNs.

### CryptoNite: Revealing the Pitfalls of End-to-End Private Inference at Scale

- Computer Science, ArXiv
- 2021

A rigorous end-to-end characterization of PI protocols and optimization techniques finds that the current understanding of PI performance is overly optimistic and proposes a modified PI protocol that significantly reduces client-side storage costs for a small increase in online latency.

### SIMC 2.0: Improved Secure ML Inference Against Malicious Clients

- Computer Science, ArXiv
- 2022

This paper proposes SIMC 2.0, which complies with the underlying structure of SIMC but significantly optimizes both the linear and non-linear layers of the model, and designs an alternative lightweight protocol to perform tasks that are originally allocated to the expensive GCs.

## References


### AutoPrivacy: Automated Layer-wise Parameter Selection for Secure Neural Network Inference

- Computer Science, NeurIPS
- 2020

An automated layer-wise parameter selector, AutoPrivacy, is proposed that leverages deep reinforcement learning to automatically determine a set of HE parameters for each linear layer in an HPPNN; it outperforms the conventional rule-based HE parameter selection policy.

### Gazelle: A Low Latency Framework for Secure Neural Network Inference

- Computer Science, Mathematics, IACR Cryptol. ePrint Arch.
- 2018

Gazelle is designed, a scalable and low-latency system for secure neural network inference, using an intricate combination of homomorphic encryption and traditional two-party computation techniques (such as garbled circuits).

### CHET: an optimizing compiler for fully-homomorphic neural-network inferencing

- Computer Science, PLDI
- 2019

CHET is a domain-specific optimizing compiler designed to make the task of programming FHE applications easier, and generates homomorphic circuits that outperform expert-tuned circuits and makes it easy to switch across different encryption schemes.

### A fully homomorphic encryption scheme

- Computer Science, Mathematics
- 2009

This work designs a somewhat homomorphic "bootstrappable" encryption scheme that works when the function f is the scheme's own decryption function, and shows how, through recursive self-embedding, bootstrappable encryption gives fully homomorphic encryption.

### Delphi: A Cryptographic Inference Service for Neural Networks

- Computer Science, IACR Cryptol. ePrint Arch.
- 2020

This work designs, implements, and evaluates DELPHI, a secure prediction system that allows two parties to execute neural network inference without revealing either party’s data, and develops a hybrid cryptographic protocol that improves upon the communication and computation costs over prior work.

### EVA: an encrypted vector arithmetic language and compiler for efficient homomorphic computation

- Computer Science, PLDI
- 2020

This paper presents a new FHE language called Encrypted Vector Arithmetic (EVA), which includes an optimizing compiler that generates correct and secure FHE programs, while hiding all the complexities of the target FHE scheme.

### Fully Homomorphic Encryption and Post Quantum Cryptography

- Mathematics, Computer Science
- 2010

The next two lectures will describe a somewhat simplified variant of Gentry’s construction, obtained by Martin van Dijk, Craig Gentry, Shai Halevi and Vinod Vaikuntanathan [vDGHV10].

### SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size

- Computer Science, ArXiv
- 2016

This work proposes a small DNN architecture called SqueezeNet, which achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters and is able to compress to less than 0.5MB (510x smaller than AlexNet).

### Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

- Computer Science, ICML
- 2015

Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.

### Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning

- Computer Science, AAAI
- 2017

Clear empirical evidence that training with residual connections accelerates the training of Inception networks significantly is given and several new streamlined architectures for both residual and non-residual Inception Networks are presented.