Noise Doesn't Lie: Towards Universal Detection of Deep Inpainting

@inproceedings{Li2021NoiseDL,
  title={Noise Doesn't Lie: Towards Universal Detection of Deep Inpainting},
  author={Ang Li and Qiuhong Ke and Xingjun Ma and Haiqin Weng and Zhiyuan Zong and Feng Xue and Rui Zhang},
  booktitle={International Joint Conference on Artificial Intelligence},
  year={2021}
}
Deep image inpainting aims to restore damaged or missing regions of an image with realistic content. While it has a wide range of applications such as object removal and image recovery, deep inpainting also carries the risk of being misused for image forgery. A promising countermeasure against such forgeries is deep inpainting detection, which aims to locate the inpainted regions in an image. In this paper, we make the first attempt towards universal detection of deep inpainting…

Image Inpainting Detection via Enriched Attentive Pattern with Near Original Image Augmentation

This work proposes near original image augmentation, which pushes the inpainted images closer to the original ones (without distortion or inpainting) before they are used as input images, and is shown to improve detection accuracy.
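One plausible instantiation of such an augmentation (an assumption for illustration, not necessarily the paper's exact scheme) is to blend each inpainted training image toward its uncorrupted counterpart:

```python
import numpy as np

def near_original_augment(original, inpainted, alpha_max=0.5, rng=None):
    """Blend an inpainted image toward its original counterpart.

    alpha=0 returns the inpainted image unchanged; larger alpha pushes the
    sample closer to the original, yielding a "near original" training input.
    (Hypothetical helper; the blending strategy is an assumption.)
    """
    rng = np.random.default_rng() if rng is None else rng
    alpha = rng.uniform(0.0, alpha_max)
    return (1.0 - alpha) * inpainted + alpha * original
```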

Perceptual Artifacts Localization for Inpainting

This work proposes a new learning task of automatically segmenting perceptual artifacts in inpainted images, together with a new interpretable evaluation metric, the Perceptual Artifact Ratio (PAR), defined as the ratio of objectionable inpainted regions to the entire inpainted area.
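Given artifact and inpainting masks, PAR reduces to a simple ratio of pixel areas; a minimal sketch (function and argument names are illustrative):

```python
import numpy as np

def perceptual_artifact_ratio(artifact_mask, inpainted_mask):
    """PAR = (area of objectionable artifact pixels) / (area of inpainted pixels).

    Both inputs are boolean HxW masks; the artifact mask is assumed to lie
    inside the inpainted region.
    """
    inpainted_area = np.count_nonzero(inpainted_mask)
    if inpainted_area == 0:
        return 0.0
    artifact_area = np.count_nonzero(np.logical_and(artifact_mask, inpainted_mask))
    return artifact_area / inpainted_area
```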

Robust Image Forgery Detection over Online Social Network Shared Images

This work proposes a robust training scheme that simulates the noise introduced by the disclosed (known) operations of OSNs and incorporates the modelled noise into training, significantly improving the robustness of the image forgery detector.
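One disclosed OSN operation is JPEG re-compression; a minimal sketch of simulating it as a training-time augmentation (the quality range and the use of Pillow are assumptions, not the paper's exact noise model):

```python
import io
import random
from PIL import Image

def simulate_osn_jpeg(image, quality_range=(60, 95)):
    """Re-compress a PIL image with a random JPEG quality to mimic the lossy
    processing applied by online social networks before training a detector."""
    quality = random.randint(*quality_range)
    buf = io.BytesIO()
    image.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")
```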

The Change You Want to See

This paper tackles the change detection problem with the goal of detecting “object-level” changes in an image pair despite differences in their viewpoint and illumination, and proposes a scalable methodology for obtaining a large-scale change detection training dataset.

References

Showing 1-10 of 27 references

Localization of Deep Inpainting Using High-Pass Fully Convolutional Network

  • Haodong Li, Jiwu Huang
  • IEEE/CVF International Conference on Computer Vision (ICCV), 2019
This work employs a fully convolutional network operating on high-pass filtered image residuals; extensive experiments on both synthetic and realistic deep-inpainted images demonstrate the effectiveness of the proposed method.
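A minimal sketch of the kind of high-pass residual such a network takes as input (the specific 3x3 kernel is an assumption; the original work defines its own high-pass filtering):

```python
import numpy as np
from scipy.signal import convolve2d

# A simple high-pass kernel; inpainted regions tend to leave residual
# statistics that differ from camera-captured content.
HIGH_PASS = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]], dtype=np.float32) / 8.0

def high_pass_residual(gray_image):
    """Return the high-pass filtered residual of a grayscale image (HxW float array)."""
    return convolve2d(gray_image, HIGH_PASS, mode="same", boundary="symm")
```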

Generative Image Inpainting with Contextual Attention

This work proposes a new deep generative model-based approach which can not only synthesize novel image structures but also explicitly utilize surrounding image features as references during network training to make better predictions.
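At its core, contextual attention matches missing-region features against known-region features by normalized similarity and softmax weighting; a simplified PyTorch sketch (patch extraction and the propagation step of the full method are omitted):

```python
import torch
import torch.nn.functional as F

def contextual_attention(foreground, background, temperature=10.0):
    """Reconstruct hole (foreground) features as softmax-weighted combinations
    of known (background) features.

    foreground: (N_f, C) features inside the hole; background: (N_b, C) features
    outside it. This is a simplified, per-vector version of the patch-wise
    attention used in the full model.
    """
    fg = F.normalize(foreground, dim=1)
    bg = F.normalize(background, dim=1)
    scores = temperature * fg @ bg.t()   # scaled cosine similarity
    weights = F.softmax(scores, dim=1)   # attention over background features
    return weights @ background          # attended reconstruction
```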

Generative Image Inpainting with Submanifold Alignment

This work exploits Local Intrinsic Dimensionality (LID) to measure the alignment between the data submanifolds learned by a GAN model and those of the original data, at the level of both whole images and local patches, and uses it to enforce closeness between the submanifolds around restored images and those around the original (uncorrupted) images during the training of GAN-based inpainting models.
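LID is typically estimated with the maximum-likelihood estimator over k-nearest-neighbor distances; a minimal sketch of that estimator (its exact integration into the inpainting objective is specific to the paper and not shown):

```python
import numpy as np

def lid_mle(query, reference, k=20):
    """Maximum-likelihood LID estimate of `query` (D,) w.r.t. `reference` (N, D).

    LID ~= -( (1/k) * sum_i log(r_i / r_k) )^{-1}, where r_1 <= ... <= r_k are
    the k smallest distances from the query to the reference set.
    """
    dists = np.linalg.norm(reference - query, axis=1)
    r = np.sort(dists)[:k]
    r_k = r[-1]
    return -1.0 / np.mean(np.log(r / r_k + 1e-12))
```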

Localization of Diffusion-Based Inpainting in Digital Images

This paper proposes a method for the localization of diffusion-based inpainted regions in digital images based on the intra-channel and inter-channel local variances of the changes, and demonstrates the effectiveness of the proposed method on both synthetic and realistic inpainted images.
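A minimal sketch of the per-channel local variance map that such an approach builds on (the window size is illustrative):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(channel, window=5):
    """Local variance of a single image channel: E[x^2] - (E[x])^2 over a
    sliding window, a basic building block for spotting the overly smooth
    statistics left by diffusion-based inpainting."""
    mean = uniform_filter(channel.astype(np.float64), size=window)
    mean_sq = uniform_filter(channel.astype(np.float64) ** 2, size=window)
    return np.maximum(mean_sq - mean ** 2, 0.0)
```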

Free-Form Image Inpainting With Gated Convolution

The proposed gated convolution solves the issue of vanilla convolution treating all input pixels as valid, and generalizes partial convolution by providing a learnable dynamic feature selection mechanism for each channel at each spatial location across all layers.
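A minimal PyTorch sketch of the gating idea, where one convolution produces features and a parallel convolution produces a per-pixel, per-channel soft gate:

```python
import torch
import torch.nn as nn

class GatedConv2d(nn.Module):
    """Gated convolution: output = activation(conv_f(x)) * sigmoid(conv_g(x)).

    The sigmoid branch learns a dynamic, per-channel, per-location mask,
    generalizing the hand-crafted mask rule of partial convolution.
    """
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.feature = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
        self.gate = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
        self.act = nn.ELU()

    def forward(self, x):
        return self.act(self.feature(x)) * torch.sigmoid(self.gate(x))
```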

ManTra-Net: Manipulation Tracing Network for Detection and Localization of Image Forgeries With Anomalous Features

The forgery localization problem is formulated as a local anomaly detection problem: a Z-score feature is designed to capture local anomalies, and a novel long short-term memory solution is proposed to assess them.
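A minimal sketch of the Z-score idea, standardizing each local feature against image-wide statistics so anomalous regions stand out (the windowed pooling of the actual network is simplified to global statistics):

```python
import torch

def zscore_feature(feat, eps=1e-6):
    """Per-location Z-score of a feature map.

    feat: (B, C, H, W). Each spatial feature is standardized against the
    image-wide mean and standard deviation of its channel, so locally
    anomalous (e.g. forged) regions receive large absolute scores.
    """
    mu = feat.mean(dim=(2, 3), keepdim=True)
    var = ((feat - mu) ** 2).mean(dim=(2, 3), keepdim=True)
    return (feat - mu) / (var.sqrt() + eps)
```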

Context Encoders: Feature Learning by Inpainting

It is found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures, and can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.
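Training a context encoder combines a masked reconstruction loss with an adversarial loss on the generated content; a minimal sketch of the joint loss (the weighting and function names are illustrative):

```python
import torch
import torch.nn.functional as F

def context_encoder_loss(pred, target, mask, disc_on_pred, lambda_adv=0.001):
    """Joint loss for a context encoder.

    pred/target: (B, C, H, W) images; mask: (B, 1, H, W) with 1 inside the hole.
    disc_on_pred: discriminator logits for the generated content.
    """
    rec = F.mse_loss(pred * mask, target * mask)        # L2 restricted to the missing region
    adv = F.binary_cross_entropy_with_logits(           # encourage fooling the discriminator
        disc_on_pred, torch.ones_like(disc_on_pred))
    return rec + lambda_adv * adv
```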

Learning Rich Features for Image Manipulation Detection

A two-stream Faster R-CNN network is proposed and trained end-to-end to detect tampered regions in a manipulated image; features from the two streams are fused through a bilinear pooling layer to further incorporate the spatial co-occurrence of the two modalities.
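The fusion step can be sketched as bilinear pooling of the RGB-stream and noise-stream features (a simplified dense version with the standard signed-sqrt and L2 normalization; the paper's exact pooling variant is not reproduced here):

```python
import torch
import torch.nn.functional as F

def bilinear_fuse(rgb_feat, noise_feat):
    """Bilinear pooling of two per-RoI feature vectors.

    rgb_feat: (B, C1), noise_feat: (B, C2). The outer product captures the
    co-occurrence of the two modalities; the result is signed-sqrt and
    L2 normalized as is standard for bilinear features.
    """
    outer = torch.einsum("bi,bj->bij", rgb_feat, noise_feat).flatten(1)  # (B, C1*C2)
    outer = torch.sign(outer) * torch.sqrt(outer.abs() + 1e-12)          # signed sqrt
    return F.normalize(outer, dim=1)                                     # L2 normalize
```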

Recurrent Feature Reasoning for Image Inpainting

A Recurrent Feature Reasoning (RFR) network is proposed, mainly constructed from a plug-and-play Recurrent Feature Reasoning module and a Knowledge Consistent Attention (KCA) module; it recurrently infers the hole boundaries of the convolutional feature maps and uses them as clues for further inference.
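A minimal sketch of the recurrence over hole boundaries (only the mask-update schedule; the feature inference and KCA module are omitted, and this boundary-peeling formulation is an assumption for illustration):

```python
import numpy as np
from scipy.ndimage import binary_dilation

def recurrent_fill_schedule(hole_mask, step=1, max_iters=100):
    """Yield the ring of hole pixels to be inferred at each recurrence.

    hole_mask: boolean HxW array, True inside the hole. Each iteration peels
    off the current hole boundary, mimicking how recurrent reasoning proceeds
    from the hole border inward (the actual feature inference is omitted).
    """
    known = ~hole_mask
    for _ in range(max_iters):
        if known.all():
            break
        ring = binary_dilation(known, iterations=step) & ~known
        yield ring
        known = known | ring
```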