Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes

@inproceedings{Jia2021RobustAV,
  title={Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes},
  author={Jinyuan Jia and Binghui Wang and Neil Zhenqiang Gong},
  booktitle={Proceedings of the 2021 ACM Asia Conference on Computer and Communications Security},
  year={2021}
}
In the era of deep learning, a user often leverages a third-party machine learning tool to train a deep neural network (DNN) classifier and then deploys the classifier as an end-user software product (e.g., a mobile app) or a cloud service. In an information embedding attack, the attacker is the provider of a malicious third-party machine learning tool. The attacker embeds a message into the DNN classifier during training and later recovers the message by querying the API of the black-box classifier…
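
The role of the error-correcting code is to make message recovery robust: post-processing of the deployed classifier (e.g., fine-tuning or pruning) may flip some of the bits read back through the API, and the code corrects those errors. The following is a minimal sketch of that idea only, using a simple repetition code with majority-vote decoding; the function names, the 10% flip rate, and the repetition code itself are illustrative assumptions, not the paper's actual encoder/decoder or query protocol.

import random

def ecc_encode(bits, r=5):
    # Repetition code: each message bit appears r times in the codeword.
    return [b for b in bits for _ in range(r)]

def ecc_decode(codeword, r=5):
    # Majority vote over each group of r noisy copies.
    return [int(sum(codeword[i:i + r]) > r // 2)
            for i in range(0, len(codeword), r)]

# The attacker embeds the codeword into the classifier during training;
# each codeword bit is later read back via one black-box API query.
message = [1, 0, 1, 1, 0, 0, 1, 0]
codeword = ecc_encode(message)

# Simulate noisy read-back: post-processing such as fine-tuning or
# pruning flips roughly 10% of the recovered bits (assumed rate).
random.seed(0)
noisy = [b ^ (random.random() < 0.10) for b in codeword]

recovered = ecc_decode(noisy)
assert recovered == message  # the code corrects the flipped bits
print("message recovered:", recovered)

With this rate-1/5 repetition code, recovery tolerates up to two flipped copies per message bit; stronger codes trade rate for tolerance of higher bit-flip probabilities.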

