SpoC: Spoofing Camera Fingerprints

  @inproceedings{cozzolino2021spoc,
    title={SpoC: Spoofing Camera Fingerprints},
    author={Davide Cozzolino and Justus Thies and Andreas R{\"o}ssler and Matthias Nie{\ss}ner and Luisa Verdoliva},
    booktitle={2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)},
    year={2021}
  }
Thanks to the fast progress in synthetic media generation, creating realistic false images has become very easy. Such images can be used to wrap rich fake news with enhanced credibility, spawning a new wave of high-impact, high-risk misinformation campaigns. Therefore, there is a fast-growing interest in reliable detectors of manipulated media. The most powerful detectors, to date, rely on the subtle traces left by any device on all images acquired by it. In particular, due to proprietary in… 

Media Forensics and DeepFakes: An Overview

  L. Verdoliva, IEEE Journal of Selected Topics in Signal Processing, 2020
This review presents an analysis of methods for visual media integrity verification, that is, the detection of manipulated images and videos, with special emphasis on the emerging phenomenon of deepfakes (fake media created with deep learning tools) and on modern data-driven forensic methods to counter them.

Conditional Adversarial Camera Model Anonymization

This work augments the training objective with the loss of a pre-trained dual-stream model-attribution classifier, which constrains the generative network to transform the full range of camera-model artifacts in a restrictive, non-interactive black-box setting.

Preliminary Forensics Analysis of DeepFake Images

Presents a preliminary idea on how to fight deepfake images of faces by analysing anomalies in the frequency domain, using standard methods to identify fakeness in images.

Poster: Towards Robust Open-World Detection of Deepfakes

This project proposes a system that robustly and efficiently enables users to determine whether a video posted online is a deepfake, and demonstrates accurate detection on both matched and mismatched datasets.

An Overview of Recent Work in Multimedia Forensics

In this paper, we review recent work in media forensics for digital images, video, audio, and documents.

Misleading Deep-Fake Detection with GAN Fingerprints

A novel class of simple counterattacks is introduced that can remove indicative artifacts, the GAN fingerprint, directly from the frequency spectrum of a generated image and thus evade the detection of generated images.
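The general idea of such a spectral counterattack can be sketched compactly: damp the high-frequency band of a generated image's spectrum, where GAN upsampling artifacts tend to concentrate. The function name and the `cutoff`/`damp` parameters below are invented for illustration and do not reproduce the paper's actual attack:

```python
import numpy as np

def remove_spectral_peaks(image, cutoff=0.35, damp=0.1):
    """Illustrative spectral counterattack sketch (assumed parameters):
    keep the low-frequency band of a grayscale image intact and damp
    the high-frequency band, where GAN artifacts concentrate."""
    h, w = image.shape
    spec = np.fft.fftshift(np.fft.fft2(image))
    fy = np.fft.fftshift(np.fft.fftfreq(h))            # cycles/pixel, in [-0.5, 0.5)
    fx = np.fft.fftshift(np.fft.fftfreq(w))
    radius = np.hypot(fy[:, None], fx[None, :]) / 0.5  # 0 at DC, ~1 at Nyquist
    mask = np.where(radius < cutoff, 1.0, damp)        # keep low band, damp the rest
    return np.fft.ifft2(np.fft.ifftshift(spec * mask)).real
```

The hard mask here is a simplification; a real attack would shape the attenuation to the specific artifact peaks rather than an entire band.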

An Overview of Recent Work in Media Forensics: Methods and Threats

For each data modality, synthesis and manipulation techniques that can be used to create and modify digital media are discussed and technological advancements for detecting and quantifying such manipulations are reviewed.

Deterring Deepfake Attacks with an Electrical Network Frequency Fingerprints Approach

This paper proposes a novel approach that tackles the challenging problem of detecting deepfaked AVS data by leveraging Electrical Network Frequency (ENF) signals embedded in the AVS data as a fingerprint, using a Singular Spectrum Analysis (SSA) approach.

Audio-Visual Person-of-Interest DeepFake Detection

This work extracts high-level audio-visual biometric features which characterize the identity of a person, and uses them to create a person-of-interest (POI) deepfake detector that can cope with the wide variety of manipulation methods and scenarios encountered in the real world.

Noiseprint: A CNN-Based Camera Model Fingerprint

This paper proposes a method to extract a camera model fingerprint, called noiseprint, where the scene content is largely suppressed and model-related artifacts are enhanced, by means of a Siamese network trained with pairs of image patches coming from the same or different cameras.
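Siamese training of the kind described here pulls embeddings of patches from the same camera model together and pushes different models apart. The snippet below is a generic contrastive loss used as a hedged stand-in; it does not reproduce the paper's actual distance-based loss, and the `margin` value is an illustrative assumption:

```python
import numpy as np

def contrastive_loss(e1, e2, same_camera, margin=1.0):
    """Generic contrastive loss for Siamese training (a stand-in for the
    paper's loss). e1, e2: embeddings of two patches; same_camera: bool."""
    d = np.linalg.norm(e1 - e2)
    if same_camera:
        return 0.5 * d ** 2                       # pull same-model patches together
    return 0.5 * max(0.0, margin - d) ** 2        # push different models apart
```

In an actual pipeline, `e1` and `e2` would be the outputs of a shared-weight CNN applied to two image patches, and this loss would be minimized over many same/different pairs.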

Do GANs Leave Artificial Fingerprints?

It is shown that each GAN leaves its specific fingerprint in the images it generates, just like real-world cameras mark acquired images with traces of their photo-response non-uniformity pattern.
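The PRNU-style attribution this entry alludes to can be sketched as: estimate a fingerprint by averaging noise residuals from many images of one device, then attribute a test image by normalized correlation. The crude box-filter denoiser below is an assumed stand-in; real PRNU pipelines use wavelet denoising and maximum-likelihood fingerprint estimation:

```python
import numpy as np

def noise_residual(img, k=3):
    """Crude residual: image minus a k-by-k local-mean denoised version
    (an assumed stand-in for wavelet-based denoising)."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return img - out / (k * k)

def estimate_fingerprint(images):
    """Average the residuals of many images from one camera; scene content
    averages out, the device pattern remains."""
    return np.mean([noise_residual(im) for im in images], axis=0)

def correlate(fingerprint, img):
    """Normalized correlation between a fingerprint and a test residual."""
    r = noise_residual(img)
    a = fingerprint - fingerprint.mean()
    b = r - r.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

A high correlation indicates the test image likely came from the fingerprinted device; the same machinery, applied to GAN outputs, exposes the generator-specific fingerprints the paper reports.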

Analysis of Adversarial Attacks against CNN-based Image Forgery Detectors

The vulnerability of CNN-based image forensics methods to adversarial attacks is analyzed, considering several detectors and several types of attack, and testing performance on a wide range of common manipulations, both easy and hard to detect.

Detection of GAN-Generated Fake Images over Social Networks

The diffusion of fake images and videos on social networks is a fast-growing problem. Commercial media editing tools allow anyone to remove, add, or clone people and objects, to generate fake images.

A Counter-Forensic Method for CNN-Based Camera Model Identification

A counter-forensic method capable of subtly altering images to change their estimated camera model when they are analyzed by any CNN-based camera-model detector; results show that even advanced deep-learning architectures trained to extract camera-model information from images are still vulnerable to the proposed method.

Patch-Based Desynchronization of Digital Camera Sensor Fingerprints

This paper explores image self-similarity as a means to impede forensic camera identification based on sensor noise, following the tradition of patch-replacement attacks against robust digital watermarking.

Defending Against Fingerprint-Copy Attack in Sensor-Based Camera Identification

The conclusion that can be made from this study is that planting a sensor fingerprint in an image without leaving a trace is significantly more difficult than previously thought.

Can we trust digital image forensics?

This work will take a closer look at two state-of-the-art forensic methods and proposes two counter-techniques; one to perform resampling operations undetectably and another one to forge traces of image origin.

Analysis of Seam-Carving-Based Anonymization of Images Against PRNU Noise Pattern-Based Source Attribution

An analysis of the seam-carving-based source camera anonymization method is provided by determining the limits of its performance under two adversarial models; the results show that, in the general case, successful anonymization of the source camera requires that few uncarved blocks larger than 50×50 pixels remain.

Attributing Fake Images to GANs: Learning and Analyzing GAN Fingerprints

Presents the first study of learning GAN fingerprints for image attribution and for classifying an image as real or GAN-generated, showing that GANs carry distinct model fingerprints and leave stable fingerprints in their generated images, which support image attribution.