Corpus ID: 218763378

Robust Ensemble Model Training via Random Layer Sampling Against Adversarial Attack

@article{Lee2020RobustEM,
  title={Robust Ensemble Model Training via Random Layer Sampling Against Adversarial Attack},
  author={Hakmin Lee and Hong Joo Lee and S. T. Kim and Yong Man Ro},
  journal={ArXiv},
  year={2020},
  volume={abs/2005.10757}
}
Deep neural networks have achieved remarkable results in several computer vision areas, yet they remain vulnerable to adversarial examples whose perturbations are imperceptible to humans. This is an important issue for security and medical applications. In this paper, we propose an ensemble model training framework with random layer sampling to improve the robustness of deep neural networks. In the proposed training framework, we generate various sampled models through the random…
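To make the idea concrete, the sketch below illustrates one plausible reading of random layer sampling: each layer position holds several candidate layers, one of which is sampled at random for every training pass, so each minibatch effectively trains a different sub-model, and the candidates can be ensembled at inference. This is a minimal illustration, not the authors' implementation; the class and parameter names (RandomlySampledBlock, num_candidates, SampledMLP) are hypothetical.

```python
import random
import torch
import torch.nn as nn


class RandomlySampledBlock(nn.Module):
    """One layer position holding several candidate layers.

    During training a single candidate is sampled per forward pass,
    so successive minibatches update different sub-models.
    """

    def __init__(self, in_features, out_features, num_candidates=3):
        super().__init__()
        self.candidates = nn.ModuleList(
            nn.Linear(in_features, out_features) for _ in range(num_candidates)
        )

    def forward(self, x):
        if self.training:
            # Random layer sampling: pick one candidate for this pass.
            layer = random.choice(self.candidates)
            return torch.relu(layer(x))
        # At inference, ensemble the candidates by averaging their outputs.
        outs = [torch.relu(layer(x)) for layer in self.candidates]
        return torch.stack(outs, dim=0).mean(dim=0)


class SampledMLP(nn.Module):
    """Small classifier built from randomly sampled blocks (illustrative only)."""

    def __init__(self, in_dim=784, hidden=256, num_classes=10):
        super().__init__()
        self.block1 = RandomlySampledBlock(in_dim, hidden)
        self.block2 = RandomlySampledBlock(hidden, hidden)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):
        return self.head(self.block2(self.block1(x)))


# Usage: each training step sees a randomly sampled sub-model.
model = SampledMLP()
model.train()
logits = model(torch.randn(8, 784))
model.eval()
ensembled_logits = model(torch.randn(8, 784))
```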
