Corpus ID: 253107924

Privacy Vulnerability of Split Computing to Data-Free Model Inversion Attacks

@inproceedings{Dong2021PrivacyVO,
  title={Privacy Vulnerability of Split Computing to Data-Free Model Inversion Attacks},
  author={Xin Dong and Hongxu Yin and Jos{\'e} Manuel {\'A}lvarez and Jan Kautz and Pavlo Molchanov and H. T. Kung},
  booktitle={British Machine Vision Conference},
  year={2021}
}
Mobile edge devices face increasing demand for deep neural network (DNN) inference while operating under stringent computing-resource constraints. Split computing (SC) has emerged as a popular response: only the initial layers execute on the device, and the remaining layers are offloaded to the cloud. Prior works usually assume that SC offers privacy benefits because only intermediate features, rather than private data, are shared from devices to the cloud. In this work, we debunk this SC-induced… 
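
As a concrete illustration of the setup described above, the following minimal PyTorch sketch splits a network into a device-side head and a cloud-side tail so that only intermediate features cross the network. The choice of ResNet-18 and a split after layer1 are illustrative assumptions, not the paper's configuration.

# Minimal sketch of split computing (SC): run the first layers of a DNN on the
# device and offload the rest to the cloud. The split point is an illustrative choice.
import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(weights=None).eval()

# Device-side "head": everything up to and including layer1.
head = nn.Sequential(model.conv1, model.bn1, model.relu, model.maxpool, model.layer1)

# Cloud-side "tail": the remaining layers.
tail = nn.Sequential(model.layer2, model.layer3, model.layer4,
                     model.avgpool, nn.Flatten(), model.fc)

x = torch.randn(1, 3, 224, 224)        # private input stays on the device
with torch.no_grad():
    features = head(x)                 # only these features are sent to the cloud
    logits = tail(features)            # the cloud finishes the inference
print(features.shape, logits.shape)    # torch.Size([1, 64, 56, 56]) torch.Size([1, 1000])
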
1 Citation

High-Resolution GAN Inversion for Degraded Images in Large Diverse Datasets

A novel GAN inversion framework is presented that leverages the powerful generative ability of StyleGAN-XL to produce high-quality natural images from diverse degraded inputs; to the authors' knowledge, it is the first to adopt StyleGAN-XL for this task.

References

SHOWING 1-10 OF 99 REFERENCES

Model inversion attacks against collaborative inference

A new set of attacks is presented to compromise inference data privacy in collaborative deep learning systems, in which one malicious participant can accurately recover an arbitrary input fed into the system even without access to other participants' data or computations.
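
The mechanics of such an attack can be sketched as a generic optimization-based inversion (not this reference's exact method): given the device-side layers and the intermediate features they emitted, the attacker optimizes a candidate input until its features match, regularized by a total-variation smoothness prior. The head network and split point below are illustrative assumptions.

# Generic optimization-based model inversion sketch: recover an input whose
# device-side features match the features observed by the cloud.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18

m = resnet18(weights=None).eval()
head = nn.Sequential(m.conv1, m.bn1, m.relu, m.maxpool, m.layer1)  # device-side layers
for p in head.parameters():
    p.requires_grad_(False)

x_private = torch.rand(1, 3, 224, 224)       # unknown to the attacker
target_features = head(x_private).detach()   # what the cloud observes

x_hat = torch.zeros(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([x_hat], lr=0.05)

def total_variation(img):
    # Smoothness prior commonly used to regularize inversion.
    return (img[..., 1:, :] - img[..., :-1, :]).abs().mean() + \
           (img[..., :, 1:] - img[..., :, :-1]).abs().mean()

for step in range(500):
    optimizer.zero_grad()
    loss = F.mse_loss(head(x_hat), target_features) + 1e-2 * total_variation(x_hat)
    loss.backward()
    optimizer.step()
# x_hat now approximates the private input that produced the observed features.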

Attacking and Protecting Data Privacy in Edge–Cloud Collaborative Inference Systems

This article devises a set of new attacks that allow an untrusted cloud to recover arbitrary inputs fed into the system, even if the attacker has no access to the edge device's data or computations, or permission to query the system, and proposes two more effective defense methods.

The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks

It is theoretically proven that a model's predictive power and its vulnerability to inversion attacks are indeed two sides of the same coin: highly predictive models are able to establish a strong correlation between features and labels, which is exactly what an adversary exploits to mount the attacks.

R-GAP: Recursive Gradient Attack on Privacy

This research provides a closed-form recursive procedure to recover data from gradients in deep neural networks and proposes a rank analysis method that can be used to estimate a network architecture's risk under gradient attacks.
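
The full R-GAP recursion is not reproduced here, but the flavor of closed-form recovery can be shown on the well-known special case of a single fully-connected layer with bias: the weight gradient is the outer product of the output gradient and the input, so the input can be read off directly from the shared gradients. The toy loss and dimensions are illustrative.

# Simplified closed-form gradient-inversion example for one linear layer with
# bias (a well-known special case; the full multi-layer R-GAP recursion is not shown).
import torch
import torch.nn as nn

torch.manual_seed(0)
layer = nn.Linear(8, 4)
x = torch.randn(1, 8)                        # the "private" input
loss = layer(x).pow(2).sum()                 # any scalar loss works
loss.backward()

dW, db = layer.weight.grad, layer.bias.grad  # what a gradient-sharing protocol reveals
i = db.abs().argmax()                        # pick a row with a non-zero bias gradient
x_recovered = dW[i] / db[i]                  # closed form: dW[i] = db[i] * x
print(torch.allclose(x_recovered, x.squeeze(0), atol=1e-5))  # True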

Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage

This work validates that private training data can still be leaked under certain defense settings via a new type of leakage, Generative Gradient Leakage (GGL), which leverages the latent space of generative adversarial networks learned from public image datasets as a prior to compensate for the information loss during gradient degradation.
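
A heavily simplified schematic of the GGL idea: search a generator's latent space so that the gradients induced by the generated image match the gradients observed from the client. A real attack uses a GAN pretrained on public data and may need to infer the label; the untrained toy generator, tiny classifier, and known label below are assumptions made only to keep the sketch self-contained.

# Schematic of generative gradient leakage: optimize a latent code so that the
# gradients of the generated image match the observed client gradients.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
generator = nn.Sequential(nn.Linear(16, 256), nn.ReLU(),
                          nn.Linear(256, 3 * 32 * 32), nn.Tanh())
classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
params = list(classifier.parameters())

# Victim side: gradients computed on a private image (what the server observes).
x_private = torch.rand(1, 3, 32, 32)
y_private = torch.tensor([3])                 # assumed known here; often inferable
loss = F.cross_entropy(classifier(x_private), y_private)
observed_grads = torch.autograd.grad(loss, params)

# Attacker side: optimize the latent code z so generated-image gradients match.
z = torch.randn(1, 16, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.01)
for step in range(300):
    opt.zero_grad()
    x_fake = generator(z).view(1, 3, 32, 32)
    fake_loss = F.cross_entropy(classifier(x_fake), y_private)
    fake_grads = torch.autograd.grad(fake_loss, params, create_graph=True)
    grad_match = sum(F.mse_loss(fg, og) for fg, og in zip(fake_grads, observed_grads))
    grad_match.backward()
    opt.step()
# generator(z) now approximates the private image under the generative prior.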

Feature Space Hijacking Attacks against Differentially Private Split Learning

This work applies a recent feature space hijacking attack (FSHA) to the training of a split neural network enhanced with differential privacy (DP), using an off-the-shelf client-side DP optimizer.

Data-Free Network Quantization With Adversarial Knowledge Distillation

This paper proposes data-free adversarial knowledge distillation, which minimizes the maximum distance between the outputs of the teacher and the (quantized) student for any adversarial samples from a generator.
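
The minimax objective can be sketched as follows: a generator is updated to maximize the teacher-student output discrepancy, and the student is then updated to minimize that same discrepancy on the generated samples. The toy MLPs, L1 logit distance, and training lengths are illustrative assumptions rather than the paper's exact configuration.

# Compact sketch of data-free adversarial knowledge distillation: the generator
# maximizes the teacher-student discrepancy; the student minimizes it.
import torch
import torch.nn as nn

teacher = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10)).eval()
for p in teacher.parameters():
    p.requires_grad_(False)                  # frozen, but still differentiable w.r.t. inputs
student = nn.Sequential(nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 10))
generator = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 32))

opt_s = torch.optim.Adam(student.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)

def discrepancy(x):
    # L1 distance between teacher and student logits (the quantity being min-maxed).
    return (teacher(x) - student(x)).abs().mean()

for step in range(200):
    # Generator step: synthesize samples where teacher and student disagree most.
    z = torch.randn(64, 8)
    opt_g.zero_grad()
    (-discrepancy(generator(z))).backward()
    opt_g.step()
    # Student step: minimize the same discrepancy on those adversarial samples.
    opt_s.zero_grad()
    discrepancy(generator(z).detach()).backward()
    opt_s.step()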

No Peek: A Survey of private distributed deep learning

The distributed deep learning methods of federated learning, split learning, and large-batch stochastic gradient descent are compared, along with the private and secure approaches of differential privacy, homomorphic encryption, oblivious transfer, and garbled circuits, in the context of neural networks.

Generative Low-bitwidth Data Free Quantization

This paper proposes a knowledge-matching generator that produces meaningful fake data by exploiting classification-boundary knowledge and distribution information in the pre-trained model, achieving much higher accuracy on 4-bit quantization than existing data-free quantization methods.

Deep compressive offloading: speeding up neural network inference by trading edge computation for network latency

A deep compressive offloading system is presented to serve state-of-the-art computer vision and speech recognition services; compared to state-of-the-art neural network offloading systems, it consistently reduces end-to-end latency by 2x to 4x with about 1% accuracy loss.
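
The core idea can be sketched as a lightweight learned bottleneck around the split point: an encoder on the device shrinks the feature tensor before transmission, and a decoder in the cloud restores it for the remaining layers. The split point, channel counts, and simple reconstruction objective below are illustrative assumptions, not this system's actual codec.

# Sketch of compressive offloading: compress features at the split point before
# sending them over the network, then decode in the cloud and finish inference.
import torch
import torch.nn as nn
from torchvision.models import resnet18

m = resnet18(weights=None).eval()
head = nn.Sequential(m.conv1, m.bn1, m.relu, m.maxpool, m.layer1)   # on-device layers
tail = nn.Sequential(m.layer2, m.layer3, m.layer4, m.avgpool, nn.Flatten(), m.fc)

# Bottleneck pair around the split point: 64 -> 8 channels cuts the payload ~8x.
encoder = nn.Conv2d(64, 8, kernel_size=1)        # runs on the device
decoder = nn.Conv2d(8, 64, kernel_size=1)        # runs in the cloud

opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for step in range(100):                          # train the bottleneck to preserve features
    x = torch.rand(4, 3, 224, 224)               # stand-in for training images
    with torch.no_grad():
        f = head(x)
    loss = nn.functional.mse_loss(decoder(encoder(f)), f)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Inference: the device sends the compressed code instead of the full feature map.
with torch.no_grad():
    f = head(torch.rand(1, 3, 224, 224))
    code = encoder(f)                            # compressed features sent over the network
    logits = tail(decoder(code))                 # cloud decodes and finishes inference
print(code.numel(), "values transmitted instead of", f.numel())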
...