Corpus ID: 227013585

Adversarial collision attacks on image hashing functions

@article{Dolhansky2020AdversarialCA,
  title={Adversarial collision attacks on image hashing functions},
  author={Brian Dolhansky and Cristian Canton-Ferrer},
  journal={ArXiv},
  year={2020},
  volume={abs/2011.09473}
}
Hashing images with a perceptual algorithm is a common approach to solving duplicate image detection problems. However, perceptual image hashing algorithms are differentiable, and are thus vulnerable to gradient-based adversarial attacks. We demonstrate that not only is it possible to modify an image to produce an unrelated hash, but an exact image hash collision between a source and target image can be produced via minuscule adversarial perturbations. In a white box setting, these collisions… 
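The white-box collision attack the abstract describes can be sketched on a toy differentiable hash. Everything below is an illustrative assumption, not the paper's construction: a linear random-projection hash stands in for a real perceptual hash, and a hinge-style surrogate stands in for Hamming distance so that gradient descent can drive the source image's hash onto the target's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy differentiable "perceptual hash": project the flattened image onto
# random directions and take the sign of each projection as one hash bit.
# The linear projection, hinge surrogate, and sizes are all illustrative
# assumptions, not the hash attacked in the paper.
D, K = 64, 16                      # flattened 8x8 "image", 16-bit hash
W = rng.standard_normal((K, D))

def hash_bits(x):
    return (W @ x > 0).astype(int)

def collision_grad(x, target_bits):
    # Gradient of a hinge surrogate for the Hamming distance to the target
    # hash: each not-yet-matched bit pulls x toward its desired sign.
    signs = 2 * target_bits - 1             # map {0,1} -> {-1,+1}
    margins = signs * (W @ x)
    active = (margins < 1.0).astype(float)  # bits not yet safely matched
    return -(W.T @ (active * signs))

source = rng.standard_normal(D)
target = rng.standard_normal(D)
t_bits = hash_bits(target)

# Gradient descent from the source image until its hash collides exactly
# with the target image's hash, mimicking the white-box attack setting.
x = source.copy()
for _ in range(2000):
    x -= 0.01 * collision_grad(x, t_bits)

collision = bool(np.array_equal(hash_bits(x), t_bits))
```

Because the surrogate is convex and the hash is only 16 bits over 64 pixels, an exact collision is easy to reach; real perceptual hashes are higher-dimensional and nonlinear, but the gradient-following principle is the same.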

Citations

AdvHash: Set-to-set Targeted Attack on Deep Hashing with One Single Adversarial Patch
TLDR
This paper proposes AdvHash, the first targeted mismatch attack on deep hashing that uses a single adversarial patch, along with a product-based weighted gradient aggregation strategy that dynamically adjusts the gradient directions of the patch by exploiting the Hamming distances between training samples and the target anchor hash code.
Squint Hard Enough: Evaluating Perceptual Hashing with Machine Learning
TLDR
The results show that an attacker can efficiently generate targeted second-preimage attacks, creating a variant of some source image whose hash matches some target digest, and that perceptual hashing is likely insufficiently robust to survive attacks in this new setting.
Do You See What I See? Capabilities and Limits of Automated Multimedia Content Analysis
TLDR
The capabilities and limitations of tools for analyzing online multimedia content are explained and the potential risks of using these tools at scale without accounting for their limitations are highlighted.
Outside Looking In: Approaches to Content Moderation in End-to-End Encrypted Systems
TLDR
It is found that technical approaches based on user reporting and metadata analysis are the most likely to preserve privacy and security guarantees for end users of end-to-end encrypted services.
Authentication of Art NFTs
TLDR
This paper proposes a decentralized trust network system for verifying NFTs using a chain of authentication methods ranging from automatic machine checking to manual expert curation.
ARIA: Adversarially Robust Image Attribution for Content Provenance
TLDR
This work illustrates how to generate valid adversarial images that can easily cause incorrect image attribution, and describes an approach to prevent imperceptible adversarial attacks on deep visual fingerprinting models, via robust contrastive learning.
Adversarial Detection Avoidance Attacks: Evaluating the robustness of perceptual hashing-based client-side scanning
TLDR
A large-scale evaluation shows perceptual hashing-based client-side scanning mechanisms to be highly vulnerable to detection avoidance attacks in a black-box setting, with more than 99.9% of images successfully attacked while preserving the content of the image.
Learning to Break Deep Perceptual Hashing: The Use Case NeuralHash
TLDR
It is shown that current deep perceptual hashing may not be robust, meaning it is generally not ready for reliable client-side scanning and that, from a privacy perspective, it should not be used.
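Several of the citing papers above evaluate detection avoidance in a black-box setting, where the attacker only observes hash outputs. A minimal sketch of that threat model follows; the stand-in average hash, the noise schedule, and the query budget are all illustrative assumptions, not any specific attack from the cited works.

```python
import numpy as np

rng = np.random.default_rng(1)

def average_hash(img, hash_size=8):
    # Stand-in perceptual hash: block-average the image, then threshold each
    # block against the mean block value. The attacker treats this function
    # as a black box and never inspects its internals or gradients.
    h, w = img.shape
    bh, bw = h // hash_size, w // hash_size
    blocks = img[:bh * hash_size, :bw * hash_size]
    blocks = blocks.reshape(hash_size, bh, hash_size, bw).mean(axis=(1, 3))
    return (blocks > blocks.mean()).astype(np.uint8)

def evade(img, eps_schedule=(2.0, 4.0, 8.0, 16.0), queries_per_eps=500):
    # Query-only detection avoidance: add bounded random noise and keep the
    # first candidate whose hash differs from the original's. If a noise
    # budget fails, escalate to the next one in the schedule.
    h0 = average_hash(img)
    for eps in eps_schedule:
        for _ in range(queries_per_eps):
            candidate = img + rng.uniform(-eps, eps, img.shape)
            if not np.array_equal(average_hash(candidate), h0):
                return candidate, eps
    return None, None

image = rng.uniform(0, 255, (64, 64))   # random grayscale "image"
adv, eps_used = evade(image)
```

Random-noise search is the crudest possible black-box strategy; the cited attacks are far more query-efficient and preserve image content much better, but the attacker's interface — hash outputs only — is the same.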

References

Showing 1–10 of 51 references
Adversarial Examples for Hamming Space Search
TLDR
This work proposes hash adversary generation (HAG), a novel method of crafting adversarial examples for Hamming space search: the nearest neighbors of an adversarial query under a targeted hashing model are semantically irrelevant to the original query.
Robust image hashing
TLDR
A novel image indexing technique that may be called an image hash function, which uses randomized signal processing strategies for a non-reversible compression of images into random binary strings, and is shown to be robust against image changes due to compression, geometric distortions, and other attacks.
Robust perceptual image hashing via matrix invariants
TLDR
Novel hashing algorithms employing transforms based on matrix invariants are proposed; these algorithms first construct a secondary image, derived from the input image by pseudo-randomly extracting features that approximately capture semi-global geometric characteristics.
Perceptual Image Hashing Via Feature Points: Performance Evaluation and Tradeoffs
TLDR
An image hashing paradigm using visually significant feature points is proposed; it withstands standard benchmark attacks, including compression, geometric distortions such as scaling and small-angle rotation, and common signal-processing operations.
ForBild: efficient robust image hashing
TLDR
This work discusses and evaluates the behavior of an optimized block-based hash, a well-known approach sharing characteristics of both cryptographic hashes and image identification methods; the hash is fast, robust to common image processing, and has low false-alarm rates.
New Iterative Geometric Methods for Robust Perceptual Image Hashing
TLDR
This work proposes a novel and robust hashing paradigm that uses iterative geometric techniques and relies on observations that main geometric features within an image would approximately stay invariant under small perturbations, thereby yielding properties akin to cryptographic MACs.
Countering Adversarial Images using Input Transformations
TLDR
This paper investigates strategies that defend against adversarial-example attacks on image-classification systems by transforming the inputs before feeding them to the system, and shows that total variance minimization and image quilting are very effective defenses in practice, when the network is trained on transformed images.
Black-box Adversarial Attacks with Limited Queries and Information
TLDR
This work defines three realistic threat models that more accurately characterize many real-world classifiers: the query-limited setting, the partial-information setting, and the label-only setting and develops new attacks that fool classifiers under these more restrictive threat models.
Adversarial Preprocessing: Understanding and Preventing Image-Scaling Attacks in Machine Learning
TLDR
This paper theoretically analyzes the attacks against image scaling from the perspective of signal processing and identifies their root cause as the interplay of downsampling and convolution, and develops a novel defense against image-scaling attacks that prevents all possible attack variants.
Deep Supervised Hashing for Fast Image Retrieval
TLDR
A novel Deep Supervised Hashing method is proposed to learn compact, similarity-preserving binary codes for large bodies of image data; extensive experiments show the promising performance of the method compared with the state of the art.
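Several of the references above (robust image hashing, ForBild) are block-based schemes whose robustness comes from the stability of block statistics under mild distortions. The sketch below uses a generic average hash (aHash) to illustrate that property; it is a simplified stand-in, not any specific cited algorithm.

```python
import numpy as np

def average_hash(img, hash_size=8):
    # Block-based perceptual hash: downsample by block averaging, then set
    # each bit by comparing its block to the global block mean. Mild noise
    # or recompression barely moves the block averages, so bits are stable.
    h, w = img.shape
    bh, bw = h // hash_size, w // hash_size
    blocks = img[:bh * hash_size, :bw * hash_size]
    blocks = blocks.reshape(hash_size, bh, hash_size, bw).mean(axis=(1, 3))
    return (blocks > blocks.mean()).astype(np.uint8).ravel()

def hamming(a, b):
    # Number of differing hash bits between two images.
    return int(np.sum(a != b))

# A smooth test image and a mildly noised copy hash almost identically,
# because per-pixel noise averages out within each block.
x = np.linspace(0, 255, 64)
img = np.tile(x, (64, 1))                        # horizontal gradient
noisy = img + np.random.default_rng(2).normal(0, 1.0, img.shape)
dist = hamming(average_hash(img), average_hash(noisy))
```

This same averaging is what makes such hashes differentiable-in-practice and hence attackable, as the paper above demonstrates: block means are linear in the pixels.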