Corpus ID: 211532649

Entangled Watermarks as a Defense against Model Extraction

@article{Jia2020EntangledWA,
  title={Entangled Watermarks as a Defense against Model Extraction},
  author={Hengrui Jia and Christopher A. Choquette-Choo and Nicolas Papernot},
  journal={ArXiv},
  year={2020},
  volume={abs/2002.12200}
}
Machine learning involves expensive data collection and training procedures. Model owners may be concerned that valuable intellectual property can be leaked if adversaries mount model extraction attacks. Because it is difficult to defend against model extraction without sacrificing significant prediction accuracy, watermarking instead leverages unused model capacity to have the model overfit to outlier input-output pairs, which are not sampled from the task distribution and are only known to the defender. The defender can then demonstrate knowledge of these input-output pairs to claim ownership of the model at inference time.
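For intuition, the sketch below illustrates the conventional watermarking scheme the abstract describes: a classifier is trained on its task data together with a small secret set of outlier input-output pairs, and ownership is later checked by measuring accuracy on that secret set. This is a minimal toy example, not the paper's entangled-watermark method; the synthetic task, the two-layer network, and all names here are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic task data: two Gaussian blobs with labels 0 and 1.
n = 512
x_task = torch.cat([torch.randn(n, 2) + 2.0, torch.randn(n, 2) - 2.0])
y_task = torch.cat([torch.zeros(n, dtype=torch.long),
                    torch.ones(n, dtype=torch.long)])

# Watermark set: a tight cluster of outliers far from the task
# distribution, all given an owner-chosen label. These pairs stay secret.
n_wm = 16
x_wm = 0.1 * torch.randn(n_wm, 2) + torch.tensor([8.0, -8.0])
y_wm = torch.zeros(n_wm, dtype=torch.long)

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Train jointly on task data and watermarks so the model's spare
# capacity overfits to the secret pairs.
x_train = torch.cat([x_task, x_wm])
y_train = torch.cat([y_task, y_wm])
for _ in range(300):
    opt.zero_grad()
    loss = loss_fn(model(x_train), y_train)
    loss.backward()
    opt.step()

# Verification: high accuracy on the secret pairs supports an
# ownership claim, since matching them by chance is vanishingly unlikely.
with torch.no_grad():
    task_acc = (model(x_task).argmax(1) == y_task).float().mean().item()
    wm_acc = (model(x_wm).argmax(1) == y_wm).float().mean().item()
print(f"task accuracy:      {task_acc:.2f}")
print(f"watermark accuracy: {wm_acc:.2f}")
```

Because such watermarks are disjoint from the task distribution, a surrogate model extracted with in-distribution queries often fails to reproduce them; the entangled watermarks proposed in this paper aim to close exactly that gap by tying watermark features to task features.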
10 Citations
• Dataset Inference: Ownership Resolution in Machine Learning
• A Survey on Model Watermarking Neural Networks (Highly Influenced)
• Model Extraction and Defenses on Generative Adversarial Networks
• Quantifying (Hyper)Parameter Leakage in Machine Learning. Vasisht Duddu, D. V. Rao. 2020 IEEE Sixth International Conference on Multimedia Big Data (BigMM), 2020
• A survey of deep neural network watermarking techniques. Yue Li, Hongxia Wang, M. Barni. ArXiv, 2021
• Model extraction from counterfactual explanations
• Adversarial Watermarking Transformer: Towards Tracing Text Provenance with Data Hiding
• Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review
• Piracy Resistant Watermarks for Deep Neural Networks
