Corpus ID: 233004517

Defending Against Image Corruptions Through Adversarial Augmentations

Dan Andrei Calian, Florian Stimberg, Olivia Wiles, Sylvestre-Alvise Rebuffi, András György, Timothy A. Mann, Sven Gowal
Modern neural networks excel at image classification, yet they remain vulnerable to common image corruptions such as blur, speckle noise or fog. Recent methods that focus on this problem, such as AugMix and DeepAugment, introduce defenses that operate in expectation over a distribution of image corruptions. In contrast, the literature on ℓp-norm bounded perturbations focuses on defenses against worst-case corruptions. In this work, we reconcile both approaches by proposing AdversarialAugment, a…
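The contrast between defenses in expectation (AugMix, DeepAugment) and worst-case defenses can be sketched with a toy objective; the linear loss and corruption operators below are illustrative stand-ins, not the paper's actual models or corruption set:

```python
import numpy as np

# Toy loss: squared error of a fixed linear model on an image vector.
w = np.array([0.5, -0.2, 0.1])

def loss(x, y):
    return float((w @ x - y) ** 2)

# Hypothetical stand-ins for corruptions such as blur, noise, or fog.
corruptions = [
    lambda x: x * 0.8,          # contrast-style dimming
    lambda x: x + 0.3,          # fog-style brightening
    lambda x: x[::-1].copy(),   # a structured distortion
]

x, y = np.array([1.0, 2.0, 3.0]), 0.4

# AugMix-style training objective: expectation over the corruption set.
expected_loss = np.mean([loss(c(x), y) for c in corruptions])

# Worst-case (adversarial) objective over the same set.
worst_loss = max(loss(c(x), y) for c in corruptions)

assert worst_loss >= expected_loss  # the max always dominates the mean
```

Training against `worst_loss` rather than `expected_loss` is the basic shift from averaged to adversarial augmentation that the abstract describes.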

Enhance the Visual Representation via Discrete Adversarial Training

Discrete Adversarial Training (DAT) is proposed, a plug-and-play technique for enhancing the visual representation that achieves significant improvement on multiple tasks including image classification, object detection and self-supervised learning.

On Certifying and Improving Generalization to Unseen Domains

This work demonstrates the effectiveness of a universal certification framework based on distributionally robust optimization (DRO) that enables a data-independent evaluation of a DG method complementary to the empirical evaluations on benchmark datasets and proposes a training algorithm that can be used with any DG method to provably improve their certified performance.

GSmooth: Certified Robustness against Semantic Transformations via Generalized Randomized Smoothing

Under the GSmooth framework, a scalable algorithm uses a surrogate image-to-image network to approximate complex semantic transformations; this surrogate model provides a powerful tool for studying the properties of semantic transformations and certifying robustness against them.

Improving Corruption and Adversarial Robustness by Enhancing Weak Subnets

It is shown that the proposed novel robust training method, EWS, greatly improves the robustness against corrupted images as well as the accuracy on clean data, and is complementary to many state-of-the-art data augmentation approaches.

Improving Robustness by Enhancing Weak Subnets

Results indicate that improving the performance of subnets through EWS reduces clean and corrupted error across a range of state-of-the-art data augmentation schemes.

Benchmarking Robustness of 3D Point Cloud Recognition Against Common Corruptions

This paper presents ModelNet40-C, the first comprehensive benchmark on 3D point cloud corruption robustness, consisting of 15 common and realistic corruptions, and unveils that Transformer-based architectures with proper training recipes achieve the strongest robustness.

An Assessment of Robustness for Adversarial Attacks and Physical Distortions on Image Classification using Explainable AI

The study reveals that when the network receives adversarial samples at inference time, the pixel attributes it relies on for its prediction are scattered across the entire image; however, when the network is re-trained with adversarial training or data-transformation-based augmentation, it captures pixel attributes within the relevant object and reduces the capture of negative pixel attributes.

PRIME: A Few Primitives Can Boost Robustness to Common Corruptions

This work proposes PRIME, a general data augmentation scheme that relies on simple yet rich families of max-entropy image transformations that outperforms the prior art in terms of corruption robustness, while its simplicity and plug-and-play nature enable combination with other methods to further boost their robustness.

AugMax: Adversarial Composition of Random Augmentations for Robust Training

AugMax, a stronger form of data augmentation, leads to a significantly augmented input distribution that makes model training more challenging; to cope with this, a disentangled normalization module termed DuBIN (Dual-Batch-and-Instance Normalization) is designed to disentangle the instance-wise feature heterogeneity arising from AugMax.

Perceptual Adversarial Robustness: Defense Against Unseen Threat Models

Perceptual Adversarial Training against a perceptual attack gives robustness against many other types of adversarial attacks, and is the first adversarial defense with this property.

Lossy Image Compression with Compressive Autoencoders

It is shown that minimal changes to the loss are sufficient to train deep autoencoders that are competitive with JPEG 2000 and outperform recently proposed approaches based on RNNs, while remaining computationally efficient thanks to a sub-pixel architecture, which makes them suitable for high-resolution images.
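The core of a sub-pixel architecture is a channel-to-space rearrangement: the decoder predicts r·r feature maps per output channel at low resolution, and a reshape interleaves them into a high-resolution image. A minimal NumPy sketch, assuming the common pixel-shuffle layout (this paper's exact variant may differ):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) array into (C, H*r, W*r)."""
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)      # split channels into an r x r grid
    x = x.transpose(0, 3, 1, 4, 2)    # reorder to (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)

x = np.arange(16).reshape(4, 2, 2)    # 4 channels = 1 output channel * 2 * 2
y = pixel_shuffle(x, 2)
assert y.shape == (1, 4, 4)
```

Because the expensive convolutions all run at low resolution and only this cheap reshape produces the full-resolution output, the decoder stays efficient for large images.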

Identity Mappings in Deep Residual Networks

The propagation formulations behind the residual building blocks suggest that the forward and backward signals can be directly propagated from one block to any other block, when using identity mappings as the skip connections and after-addition activation.
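The identity-mapping decomposition can be checked numerically: with identity skips, the forward signal through L units is x_L = x_0 + Σᵢ F(xᵢ), so x_0 reaches every later block directly (and gradients flow back the same way). The tiny ReLU residual function below is an illustrative stand-in for a real residual branch:

```python
import numpy as np

# Tiny untrained residual branches: F(x) = ReLU(x) @ w.
rng = np.random.default_rng(0)
ws = [rng.normal(scale=0.1, size=(3, 3)) for _ in range(4)]

x = np.array([1.0, -2.0, 0.5])
out = x
residual_sum = np.zeros(3)
for w in ws:
    f = np.maximum(out, 0.0) @ w  # residual branch F(x_i)
    residual_sum += f
    out = out + f                 # identity skip: x_{i+1} = x_i + F(x_i)

# The output decomposes exactly into identity plus accumulated residuals.
assert np.allclose(out, x + residual_sum)
```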

Explaining and Harnessing Adversarial Examples

It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature, supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets.
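The linearity argument motivates the fast gradient sign method (FGSM) introduced in that paper. A minimal sketch on a toy logistic model whose input gradient has a closed form (the model and numbers are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([1.0, -1.0, 0.5])   # fixed "network" weights
x = np.array([0.2, 0.1, 0.4])    # clean input
y = 1.0                          # true label in {-1, +1}

def loss(x):
    return -np.log(sigmoid(y * (w @ x)))

# Closed-form input gradient: d loss / d x = -y * (1 - sigmoid(y * w@x)) * w
grad = -y * (1.0 - sigmoid(y * (w @ x))) * w

# FGSM step: move each coordinate eps in the direction that raises the loss.
eps = 0.1
x_adv = x + eps * np.sign(grad)

assert loss(x_adv) > loss(x)     # the bounded perturbation increases the loss
```

Because the perturbation follows the loss gradient's sign, a linear (or locally linear) model accumulates the small per-coordinate changes into a large change in output, which is exactly the vulnerability the paper attributes to linearity.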

Image quality assessment: from error visibility to structural similarity

A structural similarity index is developed and its promise is demonstrated through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
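The SSIM index has a simple closed form combining luminance, contrast, and structure terms. The sketch below computes it globally over whole images, whereas the paper averages it over local sliding windows; the stabilizing constants follow the usual convention for 8-bit images:

```python
import numpy as np

def ssim_global(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Global SSIM between two images with pixel values in [0, 255]."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )

img = np.linspace(0, 255, 64).reshape(8, 8)
assert np.isclose(ssim_global(img, img), 1.0)   # identical images score 1
assert ssim_global(img, 255 - img) < 1.0        # a structural distortion scores lower
```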

The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization

It is found that using larger models and artificial data augmentations can improve robustness on real-world distribution shifts, contrary to claims in prior work.

AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty

AugMix significantly improves robustness and uncertainty measures on challenging image classification benchmarks, closing the gap between previous methods and the best possible performance in some cases by more than half.
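AugMix's mixing scheme combines several randomly composed augmentation chains with Dirichlet weights, then Beta-mixes the result with the original image. A minimal sketch; the three ops are illustrative stand-ins for the paper's augmentation set (rotate, shear, posterize, etc.):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical image -> image ops standing in for AugMix's augmentations.
ops = [lambda im: np.roll(im, 1, axis=0),
       lambda im: im[:, ::-1],
       lambda im: np.clip(im * 1.2, 0.0, 1.0)]

def augmix(image, width=3, depth=2, alpha=1.0):
    """Mix `width` random augmentation chains, then blend with the original."""
    chain_weights = rng.dirichlet([alpha] * width)  # convex weights over chains
    m = rng.beta(alpha, alpha)                      # mix-in weight vs. original
    mixed = np.zeros_like(image)
    for w in chain_weights:
        chain = image
        for _ in range(depth):                      # random op composition
            chain = ops[rng.integers(len(ops))](chain)
        mixed += w * chain
    return (1 - m) * image + m * mixed

img = rng.random((4, 4))
out = augmix(img)
assert out.shape == img.shape
```

The convex mixing keeps the augmented image near the data manifold, which is one reason the method improves robustness without hurting clean accuracy.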

Invariant Risk Minimization

This work introduces Invariant Risk Minimization, a learning paradigm to estimate invariant correlations across multiple training distributions and shows how the invariances learned by IRM relate to the causal structures governing the data and enable out-of-distribution generalization.

Enhanced Deep Residual Networks for Single Image Super-Resolution

This paper develops an enhanced deep super-resolution network (EDSR) with performance exceeding that of current state-of-the-art SR methods, and proposes a new multi-scale deep super-resolution system (MDSR) and training method, which can reconstruct high-resolution images at different upscaling factors in a single model.

Statistical learning theory

Presenting a method for determining the necessary and sufficient conditions for consistency of the learning process, the author covers function estimation from small data pools, the application of these estimates to real-life problems, and much more.