DeepiSign: invisible fragile watermark to protect the integrity and authenticity of CNN

@inproceedings{Abuadbba2021DeepiSignIF,
  title={DeepiSign: invisible fragile watermark to protect the integrity and authenticity of CNN},
  author={A. Abuadbba and Hyoungshick Kim and S. Nepal},
  booktitle={Proceedings of the 36th Annual ACM Symposium on Applied Computing},
  year={2021}
}
Convolutional Neural Networks (CNNs) deployed in real-life applications such as autonomous vehicles have been shown to be vulnerable to manipulation attacks such as poisoning and fine-tuning. It is therefore essential to ensure the integrity and authenticity of CNNs, because a compromised model can produce incorrect outputs and behave maliciously. In this paper, we propose a self-contained tamper-proofing method, called DeepiSign, to ensure the integrity and authenticity of CNN models against…
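
The abstract is truncated, but the core idea is a fragile watermark whose validity breaks under any modification of the model. As a rough illustration of that general concept only (not the paper's actual DeepiSign construction; the function names, the secret key, and the choice to embed in least-significant mantissa bits are hypothetical assumptions), the sketch below hashes a model's weights and hides the digest inside the weights themselves, so any later poisoning or fine-tuning invalidates the embedded digest.

# Illustrative sketch only: a generic fragile-watermark integrity check for model
# weights. This is NOT the DeepiSign construction itself; function names, the
# secret key, and the LSB-embedding choice are hypothetical assumptions.
import hashlib
import numpy as np

LSB_MASK = np.uint32(0xFFFFFFFE)  # clears the lowest mantissa bit of a float32

def _bit_view(weights: np.ndarray) -> np.ndarray:
    return weights.astype(np.float32).view(np.uint32)

def embed_fragile_mark(weights: np.ndarray, secret: bytes) -> np.ndarray:
    """Hide SHA-256(secret || LSB-zeroed weights) in the weights' lowest mantissa bits."""
    carrier = _bit_view(weights) & LSB_MASK                 # weights with LSBs zeroed
    digest = hashlib.sha256(secret + carrier.tobytes()).digest()
    payload = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    n = min(payload.size, carrier.size)
    carrier[:n] |= payload[:n].astype(np.uint32)            # write digest bits into the LSBs
    return carrier.view(np.float32)

def verify_fragile_mark(weights: np.ndarray, secret: bytes) -> bool:
    """Recompute the digest over the LSB-zeroed weights and compare with the embedded bits."""
    bits = _bit_view(weights)
    carrier = bits & LSB_MASK
    digest = hashlib.sha256(secret + carrier.tobytes()).digest()
    expected = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    n = min(expected.size, bits.size)
    return bool(np.all((bits[:n] & 1) == expected[:n]))

if __name__ == "__main__":
    w = np.random.randn(4096).astype(np.float32)            # stand-in for one CNN layer
    marked = embed_fragile_mark(w, secret=b"owner-key")
    print(verify_fragile_mark(marked, b"owner-key"))        # True: model intact
    marked[0] += 1e-3                                        # simulate poisoning / fine-tuning
    print(verify_fragile_mark(marked, b"owner-key"))        # False: fragile mark broken

In this sketch the perturbation from flipping one low-order mantissa bit per weight is on the order of 2^-23 relative, so model accuracy is essentially untouched (the "invisible" property), while even a small fine-tuning step rewrites many mantissa bits and breaks the embedded digest (the "fragile" property).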

