Z. Berkay Celik

Deep learning takes advantage of large datasets and computationally efficient training algorithms to outperform other approaches at various machine learning tasks. However, imperfections in the training phase of deep neural networks make them vulnerable to adversarial samples: inputs crafted by adversaries with the intent of causing deep neural networks to …
Machine learning (ML) models, e.g., state-of-the-art deep neural networks (DNNs), are vulnerable to adversarial examples: malicious inputs modified to yield erroneous model outputs, while appearing unmodified to human observers. Potential attacks include having malicious content like malware identified as legitimate or controlling vehicle behavior. Yet …
We consider the problem of using flow-level data for detection of botnet command and control (C&C) activity. We find that current approaches do not consider timing-based calibration of the C&C traffic traces prior to using this traffic to salt a background traffic trace. Thus, timing-based features of the C&C traffic may be artificially distinctive, …
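As a concrete illustration of the calibration step this abstract calls for, the sketch below resamples a C&C trace's inter-arrival gaps from the background trace's empirical gap distribution before the two are merged, so that timing features are no longer artificially distinctive. This is a minimal, hypothetical sketch: the function names and the gap-resampling strategy are illustrative assumptions, not the paper's actual procedure.

```python
import random

def gaps(timestamps):
    """Inter-arrival gaps of a sorted timestamp list."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def calibrate_timing(cc_timestamps, background_timestamps):
    """Hypothetical calibration: replace each C&C inter-arrival gap
    with one drawn from the background trace's empirical gap
    distribution, before salting the C&C flows into the background."""
    bg_gaps = gaps(sorted(background_timestamps))  # needs >= 2 packets
    calibrated = [cc_timestamps[0]]
    for _ in range(len(cc_timestamps) - 1):
        calibrated.append(calibrated[-1] + random.choice(bg_gaps))
    return calibrated

# Toy usage: a burst-like C&C trace is rescaled to background-like timing.
cc = [0.0, 0.1, 0.2, 0.35]
bg = [0.0, 1.2, 2.9, 4.1, 6.0]
print(calibrate_timing(cc, bg))
```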
Deep learning is increasingly used in machine learning tasks because deep neural networks (DNNs) frequently outperform other techniques. Yet, previous work showed that, once deployed, DNNs are vulnerable to integrity attacks: adversaries can control DNN outputs, for instance forcing them to misclassify inputs by adding a carefully crafted and …
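One widely known way such perturbations are crafted is the fast gradient sign method (FGSM) of Goodfellow et al.: step the input in the sign direction of the loss gradient. The sketch below applies the idea to a logistic-regression classifier, whose input gradient is analytic, rather than to a DNN; the toy weights and epsilon budget are illustrative assumptions, not the attack studied in this particular paper.

```python
import numpy as np

def fgsm_perturb(x, w, b, y_true, eps):
    """FGSM-style perturbation against p(y=1|x) = sigmoid(w.x + b).
    For logistic loss, dL/dx = (p - y_true) * w, so stepping eps
    in its sign direction increases the loss."""
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))
    grad_x = (p - y_true) * w
    return x + eps * np.sign(grad_x)

# Toy usage: a point correctly classified as class 1 (score 1.5)
# is nudged across the boundary (score -0.3) by a bounded step.
w, b = np.array([2.0, -1.0]), 0.0
x = np.array([1.0, 0.5])
x_adv = fgsm_perturb(x, w, b, y_true=1.0, eps=0.6)
print(np.dot(w, x) + b, np.dot(w, x_adv) + b)
```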
Modern detection systems use sensor outputs available in the deployment environment to probabilistically identify attacks. These systems are trained on past or synthetic feature vectors to build a model of anomalous or normal behavior. Thereafter, sensor outputs collected at run time are compared against the model to identify attacks (or their absence). While …
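The train-then-compare loop described here can be made concrete with a toy detector: fit a Gaussian to past benign feature vectors, then flag runtime vectors whose Mahalanobis distance exceeds a threshold. The class name, the threshold, and the Gaussian assumption are illustrative choices for this sketch, not the detection systems the abstract refers to.

```python
import numpy as np

class GaussianAnomalyDetector:
    """Toy model of the train-then-compare loop: fit a Gaussian to
    benign training features, then flag runtime vectors whose
    Mahalanobis distance exceeds a fixed threshold."""
    def fit(self, X, threshold=3.0):
        self.mean = X.mean(axis=0)
        # Small ridge keeps the covariance invertible.
        self.cov_inv = np.linalg.inv(np.cov(X, rowvar=False)
                                     + 1e-6 * np.eye(X.shape[1]))
        self.threshold = threshold
        return self

    def is_attack(self, x):
        d = x - self.mean
        return float(np.sqrt(d @ self.cov_inv @ d)) > self.threshold

# Toy usage: train on benign 4-d features, test two runtime vectors.
rng = np.random.default_rng(0)
benign = rng.normal(0.0, 1.0, size=(500, 4))   # past feature vectors
det = GaussianAnomalyDetector().fit(benign)
print(det.is_attack(rng.normal(0.0, 1.0, 4)))  # in-distribution: likely False
print(det.is_attack(np.full(4, 8.0)))          # far outlier: True
```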