Explainability and Adversarial Robustness for RNNs

  • Alexander Hartl, Maximilian Bachl, J. Fabini, T. Zseby
  • Published 2020
  • Computer Science, Mathematics
  • 2020 IEEE Sixth International Conference on Big Data Computing Service and Applications (BigDataService)
Recurrent Neural Networks (RNNs) yield attractive properties for constructing Intrusion Detection Systems (IDSs) for network data. With the rise of ubiquitous Machine Learning (ML) systems, malicious actors have been catching up quickly to find new ways to exploit ML vulnerabilities for profit. Recently developed adversarial ML techniques focus on computer vision, and their applicability to network traffic is not straightforward: network packets expose fewer features than an image, are …
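The adversarial ML techniques the abstract refers to are typified by the Fast Gradient Sign Method (FGSM) of Goodfellow et al. Below is a minimal, hypothetical sketch of FGSM on a toy linear classifier over flow-style feature vectors; it is a generic illustration of gradient-sign perturbation, not the RNN attack evaluated in the paper, and the weights and feature values are made up.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, w, b, y, eps):
    """Perturb feature vector x in the gradient-sign direction that
    increases the cross-entropy loss for the true label y."""
    p = sigmoid(w @ x + b)      # predicted probability of class 1
    grad_x = (p - y) * w        # analytic input gradient of the logistic loss
    return x + eps * np.sign(grad_x)

w = np.array([1.5, -2.0, 0.5])  # hypothetical model weights
b = 0.1
x = np.array([0.8, -0.3, 1.2])  # hypothetical flow-feature vector, true label y=1

x_adv = fgsm(x, w, b, y=1.0, eps=0.1)

# The classifier's confidence in the true class drops after the
# bounded perturbation (each feature moves by at most eps).
print(sigmoid(w @ x + b), sigmoid(w @ x_adv + b))
```

With a larger `eps`, or several iterated steps, the same procedure can flip the prediction outright; the paper's point is that for network traffic such perturbations are additionally constrained by protocol semantics, unlike pixels in an image.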
8 Citations

  • SparseIDS: Learning Packet Sampling with Reinforcement Learning
  • Anomaly Detection for Mixed Packet Sequences
  • A flow-based IDS using Machine Learning in eBPF
  • Gold Price Prediction and Modelling using Deep Learning Techniques (Vidya G S, Hari V S; 2020 IEEE Recent Advances in Intelligent Computational Systems (RAICS))

