Corpus ID: 253107924

Privacy Vulnerability of Split Computing to Data-Free Model Inversion Attacks

@inproceedings{Dong2021PrivacyVO,
  title={Privacy Vulnerability of Split Computing to Data-Free Model Inversion Attacks},
  author={Xin Dong and Hongxu Yin and Jos{\'e} Manuel {\'A}lvarez and Jan Kautz and Pavlo Molchanov and H. T. Kung},
  year={2021}
}
Mobile edge devices face increasing demand for deep neural network (DNN) inference while operating under stringent computing-resource constraints. Split computing (SC) has emerged as a popular remedy: only the initial layers are executed on the device, and the remaining layers are offloaded to the cloud. Prior works usually assume that SC offers privacy benefits, since only intermediate features, rather than the private data themselves, are shared from devices to the cloud. In this work, we debunk this SC-induced privacy… 
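To make the SC setting concrete, the following is a minimal PyTorch sketch of a device/cloud split; the toy CNN, the choice of split point, and the input size are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

# Toy CNN standing in for a real backbone (illustrative only).
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
)

split = 4                    # first 4 modules run on the device (assumed split point)
device_part = model[:split]  # executed locally on the mobile device
cloud_part = model[split:]   # executed remotely in the cloud

x = torch.randn(1, 3, 224, 224)  # private input never leaves the device
features = device_part(x)        # only this intermediate tensor is transmitted
logits = cloud_part(features)    # the cloud completes the inference
print(features.shape, logits.shape)  # torch.Size([1, 32, 56, 56]) torch.Size([1, 10])
```

The privacy question the paper studies is exactly whether `features` leaks the private `x`.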

References

Showing 1–10 of 99 references

Model inversion attacks against collaborative inference

A new set of attacks to compromise the inference data privacy in collaborative deep learning systems, where one malicious participant can accurately recover an arbitrary input fed into the system, even without access to other participants' data or computations.
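As an illustration of this attack surface, here is a minimal sketch of feature-space inversion under a strong assumption: the adversary holds the client-side model `f` (white-box) along with the intermediate features it emitted. The architecture, learning rate, and step count are placeholders, not the attack from the paper.

```python
import torch
import torch.nn as nn

f = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())

x_private = torch.rand(1, 3, 32, 32)   # victim's input (unknown to the attacker)
z = f(x_private).detach()              # intermediate features observed by the cloud

x_hat = torch.rand(1, 3, 32, 32, requires_grad=True)  # attacker's guess
opt = torch.optim.Adam([x_hat], lr=0.05)

for step in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(f(x_hat), z)  # match features in L2
    loss.backward()
    opt.step()
    x_hat.data.clamp_(0, 1)                     # keep the guess a valid image

print(f"final feature loss: {loss.item():.6f}")
```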

The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks

It is theoretically proven that a model's predictive power and its vulnerability to inversion attacks are indeed two sides of the same coin: highly predictive models are able to establish a strong correlation between features and labels, which is exactly what an adversary exploits to mount the attacks.
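The generative formulation can be sketched as a latent-space search: optimize a GAN latent code until the target model assigns the victim class with high confidence. Both `G` and `target` below are untrained stand-ins for the paper's pre-trained generator and classifier.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(32, 3 * 32 * 32), nn.Tanh())          # stand-in generator
target = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in classifier
victim_class = torch.tensor([7])   # class whose training data we try to reconstruct

z = torch.randn(1, 32, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.02)

for step in range(300):
    opt.zero_grad()
    logits = target(G(z).view(1, 3, 32, 32))
    loss = nn.functional.cross_entropy(logits, victim_class)  # identity loss on target
    loss.backward()
    opt.step()

x_recovered = G(z).view(1, 3, 32, 32).detach()  # candidate class-representative image
```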

Attacking and Protecting Data Privacy in Edge–Cloud Collaborative Inference Systems

This article devises a set of new attacks that let an untrusted cloud recover arbitrary inputs fed into the system, even if the attacker has no access to the edge device's data or computations and no permission to query the system, and proposes two more effective defense methods.

Inverting Gradients - How easy is it to break privacy in federated learning?

It is shown that it is actually possible to faithfully reconstruct images at high resolution from knowledge of their parameter gradients, and it is demonstrated that such a break of privacy is possible even for trained deep networks.
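A condensed sketch of the gradient-matching idea: the attacker optimizes a dummy input so that its parameter gradients align (here in cosine similarity, as that paper advocates) with the observed ones. The single-layer model and the assumption that the label is known are simplifications.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
params = tuple(model.parameters())
loss_fn = nn.CrossEntropyLoss()

x_true = torch.rand(1, 1, 28, 28)   # the victim's private input
y_true = torch.tensor([3])          # assume the label is known or inferred
true_grads = torch.autograd.grad(loss_fn(model(x_true), y_true), params)

x_hat = torch.rand(1, 1, 28, 28, requires_grad=True)  # attacker's dummy input
opt = torch.optim.Adam([x_hat], lr=0.1)

def cosine_grad_loss():
    grads = torch.autograd.grad(loss_fn(model(x_hat), y_true), params,
                                create_graph=True)  # keep graph for 2nd-order grads
    num = sum((g * t).sum() for g, t in zip(grads, true_grads))
    den = (sum(g.pow(2).sum() for g in grads).sqrt()
           * sum(t.pow(2).sum() for t in true_grads).sqrt())
    return 1 - num / den                            # 1 - cosine similarity

for step in range(300):
    opt.zero_grad()
    loss = cosine_grad_loss()
    loss.backward()
    opt.step()
    x_hat.data.clamp_(0, 1)                         # keep the guess a valid image

print(f"reconstruction MSE: {(x_hat - x_true).pow(2).mean().item():.6f}")
```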

R-GAP: Recursive Gradient Attack on Privacy

This research provides a closed-form recursive procedure to recover data from gradients in deep neural networks and proposes a rank analysis method, which can be used to estimate a network architecture's risk of a gradient attack.
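The closed-form flavor can be seen on a single linear layer: since dL/dW[i] = dL/db[i] · x, one row of the weight gradient divided by the matching bias gradient recovers the input exactly (when that bias gradient is nonzero). This toy identity is only the base case, not the paper's full recursive procedure.

```python
import torch
import torch.nn as nn

layer = nn.Linear(5, 3)
x = torch.randn(1, 5)          # private input, batch size 1
loss = layer(x).pow(2).sum()   # any differentiable loss works here
grad_W, grad_b = torch.autograd.grad(loss, (layer.weight, layer.bias))

# dL/dW[i] = dL/db[i] * x, so one division recovers x (assuming grad_b[0] != 0).
x_recovered = grad_W[0] / grad_b[0]
print(torch.allclose(x_recovered, x.squeeze(), atol=1e-5))  # True
```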

Data-Free Network Quantization With Adversarial Knowledge Distillation

This paper proposes data-free adversarial knowledge distillation, which minimizes the maximum distance between the outputs of the teacher and the (quantized) student for any adversarial samples from a generator.
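A minimal sketch of that min-max game follows, with toy MLPs standing in for the pre-trained teacher, the student (quantization omitted here), and the generator; the L1 distance, layer sizes, and step counts are placeholders.

```python
import torch
import torch.nn as nn

teacher = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10)).eval()
for p in teacher.parameters():
    p.requires_grad_(False)    # frozen teacher; gradients still flow to its input

student = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))
generator = nn.Sequential(nn.Linear(16, 64), nn.Tanh())

opt_s = torch.optim.Adam(student.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)

for step in range(200):
    z = torch.randn(32, 16)    # latent noise
    # Generator step: MAXIMIZE the teacher-student output distance.
    fake = generator(z)
    dist = nn.functional.l1_loss(student(fake), teacher(fake))
    opt_g.zero_grad()
    (-dist).backward()
    opt_g.step()
    # Student step: MINIMIZE the distance on re-computed generator samples.
    fake = generator(z).detach()
    dist = nn.functional.l1_loss(student(fake), teacher(fake))
    opt_s.zero_grad()
    dist.backward()
    opt_s.step()
```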

No Peek: A Survey of private distributed deep learning

The distributed deep learning methods of federated learning, split learning, and large-batch stochastic gradient descent are compared, alongside the private and secure approaches of differential privacy, homomorphic encryption, oblivious transfer, and garbled circuits, in the context of neural networks.

Feature Space Hijacking Attacks against Differentially Private Split Learning

This work’s contribution is applying a recent feature space hijacking attack (FSHA) to the learning process of a split neural network enhanced with differential privacy (DP), using a client-side off-the-shelf DP optimizer.

Generative Low-bitwidth Data Free Quantization

This paper proposes a knowledge-matching generator that produces meaningful fake data by exploiting classification-boundary knowledge and distribution information in the pre-trained model, achieving much higher accuracy under 4-bit quantization than existing data-free quantization methods.
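One concrete reading of "distribution information in the pre-trained model" is matching the BatchNorm running statistics when synthesizing fake data. The sketch below shows only that single ingredient on a toy layer (whose stored statistics are just initialization values, not real pre-training statistics); the boundary-knowledge term and the quantization step are omitted.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1),
                      nn.BatchNorm2d(8), nn.ReLU()).eval()
bn = model[1]   # running_mean / running_var would come from pre-training

x_fake = torch.randn(16, 3, 32, 32, requires_grad=True)  # synthetic batch
opt = torch.optim.Adam([x_fake], lr=0.05)

for step in range(200):
    opt.zero_grad()
    h = model[0](x_fake)                 # pre-BN activations
    mu = h.mean(dim=(0, 2, 3))
    var = h.var(dim=(0, 2, 3), unbiased=False)
    # Push the fake batch's statistics toward the stored statistics.
    loss = ((mu - bn.running_mean).pow(2).sum()
            + (var - bn.running_var).pow(2).sum())
    loss.backward()
    opt.step()
```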

Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks

Plug & Play Attacks are presented, which relax the dependency between the target model and the image prior and enable the use of a single GAN to attack a wide range of targets, requiring only minor adjustments to the attack.
...