Corpus ID: 28912221

Adversarial Perturbations of Deep Neural Networks

@inproceedings{WardeFarley20161AP,
  title={Adversarial Perturbations of Deep Neural Networks},
  author={David Warde-Farley},
  year={2016}
}

Citations

ShieldNets: Defending Against Adversarial Attacks Using Probabilistic Adversarial Robustness

Experimental results show that the probabilistic adversarial robustness approach is generalizable, robust against adversarial transferability, and resistant to a wide variety of attacks on the Fashion-MNIST and CIFAR-10 datasets.

Scene Privacy Protection

The proposed method, private FGSM, achieves a desirable trade-off between the drop in classification accuracy and the distortion on the private classes of the Places365-Standard dataset using ResNet50.

Adversarial Training Methods for Semi-Supervised Text Classification

This work extends adversarial and virtual adversarial training to the text domain by applying perturbations to the word embeddings in a recurrent neural network rather than to the original input itself.
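
The mechanism is compact enough to sketch: take the gradient of the loss with respect to the embedding activations (not the discrete tokens), step along it, and also train on the perturbed embeddings. A minimal PyTorch sketch, assuming a hypothetical `model` exposing `embedding` and `classify` submodules (both names are illustrative, not from the paper):

```python
import torch
import torch.nn.functional as F

def adversarial_embedding_loss(model, token_ids, labels, eps=1.0):
    # Hypothetical model API: embedding(token_ids) -> (batch, seq, dim),
    # classify(embeddings) -> logits. Perturb activations, not tokens.
    emb = model.embedding(token_ids)
    loss = F.cross_entropy(model.classify(emb), labels)
    # Gradient w.r.t. the embedding activations.
    grad, = torch.autograd.grad(loss, emb, retain_graph=True)
    # L2-normalized worst-case step in embedding space.
    r_adv = eps * grad / (grad.norm(dim=-1, keepdim=True) + 1e-12)
    adv_loss = F.cross_entropy(model.classify(emb + r_adv.detach()), labels)
    return loss + adv_loss
```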

On the Effectiveness of Defensive Distillation

We report experimental results indicating that defensive distillation successfully mitigates adversarial samples crafted using the fast gradient sign method, in addition to those crafted using the …
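
For reference, the fast gradient sign method named here is a one-step attack; a minimal PyTorch sketch, assuming image inputs scaled to [0, 1]:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    # Move each pixel by eps in the sign of the loss gradient.
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
```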

Using Undervolting as an on-Device Defense Against Adversarial Machine Learning Attacks

This paper proposes a novel, lightweight adversarial correction and/or detection mechanism for image classifiers that relies on undervolting (running a chip at a voltage slightly below its safe margin), and shows that the computation errors induced by undervolting disrupt an adversarial input in a way that can be used to correct the classification or to detect the input as adversarial.

Quality Evaluation Assurance Levels for Deep Neural Networks Software

  • S. Nakajima · 2019 International Conference on Technologies and Applications of Artificial Intelligence (TAAI)
This paper proposes quality evaluation assurance levels, the basis of a third-party evaluation and certification framework for machine-learning software quality, viewed from three perspectives: prediction performance quality, training mechanism quality, and lifecycle support quality enabling continuous operations.

Attacking Lifelong Learning Models with Gradient Reversion

A principled way of attacking A-GEM, called gradient reversion (GREV), is shown to be more effective, indicating that future lifelong learning research should bear adversarial attacks in mind to develop more robust lifelong learning algorithms.
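
GREV's construction is specified in the paper; for context, A-GEM constrains each update with a gradient projection against an episodic-memory batch, sketched here on flattened gradient vectors:

```python
import torch

def agem_project(g, g_ref):
    # A-GEM (Chaudhry et al.): if the new-task gradient g conflicts with
    # the memory gradient g_ref (negative dot product), remove the
    # conflicting component so the update cannot increase memory loss.
    dot = torch.dot(g, g_ref)
    if dot < 0:
        g = g - (dot / torch.dot(g_ref, g_ref)) * g_ref
    return g
```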

Mangan: Assisting Colorization Of Manga Characters Concept Art Using Conditional GAN

This article proposes a semi-automatic framework for colorizing manga concept art that lets concept artists try different color schemes and obtain colorized results in a timely fashion; it outperforms current hint-based line-art colorization techniques, producing natural-looking art with only minor coloring mistakes.

Defending Against Adversarial Attacks by Suppressing the Largest Eigenvalue of Fisher Information Matrix

The proposed scheme for defending against adversarial attacks by suppressing the largest eigenvalue of the Fisher information matrix (FIM) has an effective and robust defensive capability: it decreases the fooling ratio of generated adversarial examples while preserving the classification accuracy of the original network.
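
To make the quantity concrete: for a classifier p(y|x), the input-space FIM is F = E_y[g_y g_yᵀ] with g_y = ∇_x log p(y|x), and its largest eigenvalue can be estimated by power iteration using only Fisher-vector products. A hedged single-image sketch (illustrative, not the paper's code):

```python
import torch
import torch.nn.functional as F

def fim_top_eigenvalue(model, x, n_iter=10):
    # x: a single input; assumes a small number of classes.
    x = x.detach().requires_grad_(True)
    log_p = F.log_softmax(model(x.unsqueeze(0)), dim=1).squeeze(0)
    probs = log_p.exp().detach()
    # Per-class input gradients g_y = grad_x log p(y|x).
    grads = [torch.autograd.grad(log_p[y], x, retain_graph=True)[0].flatten()
             for y in range(log_p.numel())]
    v = torch.randn_like(grads[0])
    v = v / v.norm()
    for _ in range(n_iter):
        # Fisher-vector product: F v = sum_y p(y) * g_y * (g_y . v)
        fv = sum(p * g * torch.dot(g, v) for p, g in zip(probs, grads))
        v = fv / (fv.norm() + 1e-12)
    fv = sum(p * g * torch.dot(g, v) for p, g in zip(probs, grads))
    return torch.dot(v, fv)  # Rayleigh quotient ~ largest eigenvalue
```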

Learning from Label Proportions with Generative Adversarial Networks

In this paper, we leverage generative adversarial networks (GANs) to derive an effective algorithm LLP-GAN for learning from label proportions (LLP), where only the bag-level proportional information is available.
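
Not LLP-GAN itself, but a sketch of the basic LLP signal such methods build on: push each bag's average predicted class distribution toward the bag's known proportions:

```python
import torch

def proportion_loss(bag_probs, target_props, eps=1e-8):
    # bag_probs: (n_instances, n_classes) predicted probabilities for one
    # bag; target_props: (n_classes,) given label proportions.
    avg = bag_probs.mean(dim=0).clamp_min(eps)
    # Cross-entropy between the given proportions and the bag average.
    return -(target_props * avg.log()).sum()
```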

References

Showing 1–10 of 29 references

Stochastic Backpropagation and Approximate Inference in Deep Generative Models

We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and learning.
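
The "stochastic backpropagation" of the title rests on reparameterizing a Gaussian latent variable so gradients flow through the sampling step; a minimal sketch:

```python
import torch

def sample_latent(mu, log_var):
    # z ~ N(mu, sigma^2) rewritten as z = mu + sigma * eps, eps ~ N(0, I),
    # so mu and log_var receive gradients through the sample.
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * log_var) * eps
```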

Going deeper with convolutions

We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
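
The Inception idea in brief: run cheap parallel branches (1x1, 3x3, 5x5 convolutions and pooling, with 1x1 bottlenecks) and concatenate them along the channel axis. A sketch of one block; the channel sizes loosely follow the first GoogLeNet block but are illustrative:

```python
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    def __init__(self, c_in):
        super().__init__()
        self.b1 = nn.Conv2d(c_in, 64, kernel_size=1)
        self.b3 = nn.Sequential(nn.Conv2d(c_in, 96, 1),  # 1x1 bottleneck
                                nn.Conv2d(96, 128, 3, padding=1))
        self.b5 = nn.Sequential(nn.Conv2d(c_in, 16, 1),
                                nn.Conv2d(16, 32, 5, padding=2))
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(c_in, 32, 1))

    def forward(self, x):
        # Concatenate all branch outputs along the channel dimension.
        return torch.cat([self.b1(x), self.b3(x),
                          self.b5(x), self.bp(x)], dim=1)
```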

Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks

The study shows that defensive distillation can reduce the effectiveness of adversarial sample creation from 95% to less than 0.5% on a studied DNN, and analytically investigates the generalizability and robustness properties granted by the use of defensive distillation when training DNNs.
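
The recipe being analyzed, in sketch form: train a teacher with a softened softmax at temperature T, then fit a student (typically the same architecture) to the teacher's soft probabilities at the same T. The value of T below is illustrative:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=20.0):
    # Cross-entropy of the student's tempered predictions against the
    # teacher's tempered (soft) probabilities.
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    log_probs = F.log_softmax(student_logits / T, dim=1)
    return -(soft_targets * log_probs).sum(dim=1).mean()
```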

Autoencoding beyond pixels using a learned similarity metric

An autoencoder that leverages learned representations to better measure similarities in data space is presented and it is shown that the method learns an embedding in which high-level abstract visual features (e.g. wearing glasses) can be modified using simple arithmetic.

Distributional Smoothing with Virtual Adversarial Training

When regularization based on local distributional smoothness (LDS) was applied to supervised and semi-supervised learning on the MNIST dataset, it outperformed all training methods other than the current state-of-the-art method, which is based on a highly advanced generative model.
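
LDS penalizes how much a small worst-case ("virtual adversarial") perturbation changes the model's predictive distribution. A hedged PyTorch sketch of the usual power-iteration approximation (hyperparameters illustrative):

```python
import torch
import torch.nn.functional as F

def _l2_normalize(d):
    flat = d.flatten(start_dim=1)
    norm = flat.norm(dim=1).clamp_min(1e-12)
    return d / norm.view(-1, *([1] * (d.dim() - 1)))

def vat_loss(model, x, xi=1e-6, eps=2.0, n_power=1):
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)       # current predictions
    d = _l2_normalize(torch.randn_like(x))   # random start direction
    for _ in range(n_power):
        d = (xi * d).requires_grad_(True)
        kl = F.kl_div(F.log_softmax(model(x + d), dim=1), p,
                      reduction='batchmean')
        d = _l2_normalize(torch.autograd.grad(kl, d)[0])
    # Penalize the output change under the worst-case perturbation.
    return F.kl_div(F.log_softmax(model(x + eps * d), dim=1), p,
                    reduction='batchmean')
```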

Rectifier Nonlinearities Improve Neural Network Acoustic Models

This work explores the use of deep rectifier networks as acoustic models for the 300-hour Switchboard conversational speech recognition task, and analyzes hidden-layer representations to quantify differences in how rectified linear (ReL) units encode inputs compared to sigmoidal units.
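
The rectifier variants in question are simple pointwise functions; for example, a leaky rectifier keeps a small slope for negative inputs so units never stop passing gradient:

```python
import torch

def leaky_relu(x, alpha=0.01):
    # max(x, alpha * x): identity for positive inputs, small slope otherwise.
    return torch.where(x > 0, x, alpha * x)
```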

Dropout: a simple way to prevent neural networks from overfitting

It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
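
Dropout is a one-line training-time change; a sketch of the common "inverted" form, which rescales surviving units so nothing changes at test time:

```python
import torch

def dropout(x, p=0.5, training=True):
    # Zero each unit with probability p; rescale survivors by 1/(1-p).
    if not training or p == 0.0:
        return x
    mask = (torch.rand_like(x) > p).float()
    return x * mask / (1.0 - p)
```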

Intriguing properties of neural networks

It is found that there is no distinction between individual high-level units and random linear combinations of high-level units according to various methods of unit analysis, suggesting that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks.

Efficient Estimation of Word Representations in Vector Space

Two novel model architectures for computing continuous vector representations of words from very large data sets are proposed and it is shown that these vectors provide state-of-the-art performance on the authors' test set for measuring syntactic and semantic word similarities.
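
The similarity tests referred to are analogy tasks answered by vector arithmetic plus cosine similarity; a small illustrative sketch, where `vectors` stands in for a trained embedding table:

```python
import numpy as np

def most_similar(query, vectors, exclude=()):
    # Return the word whose vector has the highest cosine similarity
    # to the query vector.
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in vectors if w not in exclude),
               key=lambda w: cos(vectors[w], query))

# e.g. vec("king") - vec("man") + vec("woman") should land near "queen":
# most_similar(vectors["king"] - vectors["man"] + vectors["woman"],
#              vectors, exclude={"king", "man", "woman"})
```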

The Psychology of Visual Illusion