Publications
The Limitations of Deep Learning in Adversarial Settings
TLDR
This work formalizes the space of adversaries against deep neural networks (DNNs) and introduces a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs.
Practical Black-Box Attacks against Machine Learning
TLDR
This work introduces the first practical demonstration of an attacker controlling a remotely hosted DNN with no knowledge of its internals, and finds that this black-box attack strategy is capable of evading defense strategies previously found to make adversarial example crafting harder.
Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks
TLDR
The study shows that defensive distillation can reduce the effectiveness of adversarial sample creation from 95% to less than 0.5% on a studied DNN, and analytically investigates the generalizability and robustness properties granted by the use of defensive distillation when training DNNs.
metapath2vec: Scalable Representation Learning for Heterogeneous Networks
TLDR
Two scalable representation learning models, namely metapath2vec and metapath2vec++, are developed that are able not only to outperform state-of-the-art embedding models in various heterogeneous network mining tasks, but also to discern the structural and semantic correlations between diverse network objects.
Decentralized cognitive MAC for opportunistic spectrum access in ad hoc networks: A POMDP framework
TLDR
An analytical framework for opportunistic spectrum access based on the theory of partially observable Markov decision processes (POMDPs) is developed, and cognitive MAC protocols are proposed that optimize the performance of secondary users while limiting the interference perceived by primary users.
Hierarchical digital modulation classification using cumulants
TLDR
It is shown that cumulant-based classification is particularly effective when used in a hierarchical scheme, enabling separation into subclasses at low signal-to-noise ratio with small sample size.
Distributed Algorithms for Learning and Cognitive Medium Access with Logarithmic Regret
TLDR
This work proposes policies for distributed learning and access that achieve order-optimal cognitive system throughput under self-play, i.e., when implemented at all the secondary users, and proposes a policy whose sum regret grows only slightly faster than logarithmically in the number of transmission slots.
Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples
TLDR
This work introduces the first practical demonstration that the cross-model transfer phenomenon enables attackers to control a remotely hosted DNN with no access to the model, its parameters, or its training data. It introduces the attack strategy of fitting a substitute model to input-output pairs obtained in this manner, then crafting adversarial examples based on this auxiliary model.
Crafting adversarial input sequences for recurrent neural networks
TLDR
This paper investigates adversarial input sequences for recurrent neural networks processing sequential data, and shows that the classes of algorithms introduced previously to craft adversarial samples misclassified by feed-forward neural networks can be adapted to recurrent neural networks.
Heterogeneous Graph Neural Network
TLDR
HetGNN, a heterogeneous graph neural network model, is proposed that can outperform state-of-the-art baselines in various graph mining tasks, including link prediction, recommendation, node classification and clustering, and inductive node classification and clustering.