Code-Bridged Classifier (CBC): A Low or Negative Overhead Defense for Making a CNN Classifier Robust Against Adversarial Attacks

  • F. Behnia, Ali Mirzaeian, M. Sabokrou, S. Manoj, T. Mohsenin, Khaled N. Khasawneh, Liang Zhao, Houman Homayoun, Avesta Sasan
  • Published 2020
  • Computer Science, Mathematics
  • 2020 21st International Symposium on Quality Electronic Design (ISQED)
  • In this paper, we propose the Code-Bridged Classifier (CBC), a framework for making a Convolutional Neural Network (CNN) robust against adversarial attacks without increasing, or even while decreasing, the overall model's computational complexity. More specifically, we propose a stacked encoder-convolutional model in which the input image is first encoded by the encoder module of a denoising auto-encoder, and the resulting latent representation (without being decoded) is then fed to a reduced…
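The inference path the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: fully connected layers stand in for the convolutional encoder and the reduced classifier, and all dimensions, weights, and names are hypothetical placeholders chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical dimensions (not from the paper): a flattened 28x28
# input, a 64-d latent code, and 10 output classes.
D_IN, D_LATENT, N_CLASSES = 784, 64, 10

# Encoder half of a denoising auto-encoder: in CBC it would be trained
# to map a noisy input to a latent code that reconstructs the clean input.
W_enc = rng.standard_normal((D_IN, D_LATENT)) * 0.01
b_enc = np.zeros(D_LATENT)

# Reduced classifier head that operates directly on the latent code.
W_clf = rng.standard_normal((D_LATENT, N_CLASSES)) * 0.01
b_clf = np.zeros(N_CLASSES)

def cbc_forward(x):
    """Code-bridged inference: encode, then classify.

    The auto-encoder's decoder is dropped entirely; the latent code is
    passed straight to the classifier, which is what keeps the combined
    model's cost low compared to defenses that reconstruct the input.
    """
    z = relu(x @ W_enc + b_enc)   # denoising encoder -> latent code
    logits = z @ W_clf + b_clf    # reduced classifier on the code
    return logits

x = rng.standard_normal(D_IN)     # stand-in for a flattened image
logits = cbc_forward(x)
print(logits.shape)               # (10,)
```

The key design point the sketch reflects is that the latent representation is never decoded: classification happens in code space, so the decoder's compute is saved at inference time.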
    3 Citations

    Learning Diverse Latent Representations for Improving the Resilience to Adversarial Attacks
    • 1 citation
    CSCMAC - Cyclic Sparsely Connected Neural Network Manycore Accelerator
    Using Transfer Learning Approach to Implement Convolutional Neural Network model to Recommend Airline Tickets by Using Online Reviews
    • Maryam Heidari, S. Rafatirad
    • Computer Science
    • 2020 15th International Workshop on Semantic and Social Media Adaptation and Personalization (SMAP)
    • 2020

