Corpus ID: 220794051

Robust Collective Classification against Structural Attacks

@inproceedings{Zhou2020RobustCC,
  title={Robust Collective Classification against Structural Attacks},
  author={Kai Zhou and Yevgeniy Vorobeychik},
  booktitle={UAI},
  year={2020}
}
Collective learning methods exploit relations among data points to enhance classification performance. However, such relations, represented as edges in the underlying graphical model, expose an extra attack surface to adversaries. We study the adversarial robustness of an important class of such graphical models, Associative Markov Networks (AMN), to structural attacks, where an attacker can modify the graph structure at test time. We formulate the task of learning a robust AMN classifier as a…
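As a rough illustration of this robust-learning setup (a generic sketch, not necessarily the paper's exact program; the symbols w, A, X, y, and Δ below are ours), such defenses are commonly posed as a min-max problem in which the learner minimizes the worst-case loss over every graph the attacker can reach within an edge-modification budget:

\min_{w} \; \max_{\tilde{A} \in \mathcal{B}_{\Delta}(A)} \mathcal{L}\bigl(w;\, \tilde{A}, X, y\bigr),
\qquad
\mathcal{B}_{\Delta}(A) = \bigl\{ \tilde{A} \in \{0,1\}^{n \times n} : \lVert \tilde{A} - A \rVert_{0} \le \Delta \bigr\},

where w are the classifier (here, AMN) parameters, A is the adjacency matrix of the clean graph, X and y are the node features and labels, and Δ bounds how many edges the attacker may add or remove.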


Collective Robustness Certificates: Exploiting Interdependence in Graph Neural Networks
TLDR
This work proposes the first collective robustness certificate which computes the number of predictions that are simultaneously guaranteed to remain stable under perturbation, i.e. cannot be attacked.
Robust Network Alignment via Attack Signal Scaling and Adversarial Perturbation Elimination
TLDR
This paper develops an adversarial perturbation elimination (APE) model that turns adversarial nodes in the vulnerable space into adversarial-free nodes in the safe area, by integrating Dirac delta approximation (DDA) techniques and LSTM models.
BinarizedAttack: Structural Poisoning Attacks to Graph-based Anomaly Detection
TLDR
This paper designs a new type of targeted structural poisoning attack against a representative regression-based GAD system, OddBall, and proposes a gradient-descent-based attack method, BinarizedAttack, which is highly effective at enabling target nodes to evade graph-based anomaly detection tools with a limited attack budget.
Structural Attack against Graph Based Android Malware Detection
TLDR
This paper proposes the first structural attack against graph-based Android malware detection techniques, addressing the inverse-transformation problem between feature-space attacks and problem-space attacks, and designs a Heuristic optimization model integrated with a Reinforcement learning framework to optimize this structural ATtack.
Integrated Defense for Resilient Graph Matching
TLDR
An integrated defense model, IDRGM, is proposed for resilient graph matching, with two novel defense techniques that defend against two kinds of attacks simultaneously; the robustness of the model is evaluated on real datasets against state-of-the-art algorithms.
Expressive 1-Lipschitz Neural Networks for Robust Multiple Graph Learning against Adversarial Attacks
(Scalar case.) According to Definition 1 and Lemma 1, given large enough M, the domain (−∞, +∞) can be partitioned into M subintervals P = {[x_0, x_1], [x_1, x_2], …, [x_{m−1}, x_m], …, [x_{M−1}, x_M]}.

References

Showing 1–10 of 32 references
Adversarial Attack on Graph Structured Data
TLDR
This paper proposes a reinforcement learning-based attack method that learns a generalizable attack policy while requiring only prediction labels from the target classifier, and uses both synthetic and real-world data to show that a family of Graph Neural Network models are vulnerable to adversarial attacks.
Adversarial Attacks on Neural Networks for Graph Data
TLDR
This work introduces the first study of adversarial attacks on attributed graphs, specifically focusing on models exploiting ideas of graph convolutions, and generates adversarial perturbations targeting the node's features and the graph structure, taking the dependencies between instances into account.
Adversarial Attacks on Graph Neural Networks via Meta Learning
TLDR
The core principle is to use meta-gradients to solve the bilevel problem underlying training-time attacks on graph neural networks for node classification that perturb the discrete graph structure, essentially treating the graph as a hyperparameter to optimize.
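As a rough sketch of the meta-gradient idea (illustrative only; the linear surrogate, hyperparameters, and function names below are our assumptions, not the cited paper's code), one differentiates the attacker's loss through an unrolled training run of a simple surrogate model, yielding a gradient with respect to a relaxed adjacency matrix:

import torch
import torch.nn.functional as F

def meta_gradient(adj, feats, labels, train_idx, target_idx,
                  inner_steps=10, inner_lr=0.1):
    """Gradient of the attacker's loss w.r.t. a relaxed adjacency matrix,
    back-propagated through the unrolled training of a linear surrogate
    (illustrative sketch, not the cited paper's implementation)."""
    adj = adj.detach().clone().requires_grad_(True)           # leaf variable to attack
    a_norm = adj / adj.sum(1, keepdim=True).clamp(min=1e-6)   # simple row normalization
    n_classes = int(labels.max()) + 1
    w = torch.zeros(feats.shape[1], n_classes, requires_grad=True)
    for _ in range(inner_steps):                               # unrolled inner training loop
        logits = a_norm @ feats @ w                            # one-layer linear "GCN" surrogate
        loss = F.cross_entropy(logits[train_idx], labels[train_idx])
        (grad_w,) = torch.autograd.grad(loss, w, create_graph=True)
        w = w - inner_lr * grad_w                              # differentiable parameter update
    logits = a_norm @ feats @ w
    atk_loss = F.cross_entropy(logits[target_idx], labels[target_idx])
    # Ascending this meta-gradient degrades the surrogate's accuracy on the target
    # nodes; discrete edge flips are then chosen greedily from its largest entries.
    return torch.autograd.grad(atk_loss, adj)[0]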
Evasion-Robust Classification on Binary Domains
TLDR
This approach is the first to compute an optimal solution to adversarial loss minimization for two general classes of adversarial evasion models in the context of binary feature spaces and is robust to misspecifications of the adversarial model.
Feature Cross-Substitution in Adversarial Classification
TLDR
This work investigates both the problem of modeling the objectives of adversaries and the algorithmic problem of accounting for rational, objective-driven adversaries, and presents the first method for combining an adversarial classification algorithm with a very general class of models of adversarial classifier evasion.
Towards Deep Learning Models Resistant to Adversarial Attacks
TLDR
This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective
TLDR
A novel gradient-based attack method is presented that addresses the difficulty of tackling discrete graph data, together with a defense that yields higher robustness against both gradient-based and greedy attack methods without sacrificing classification accuracy on the original graph.
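As a loose illustration of this class of attacks (a sketch under our own assumptions; the attack_loss_grad callback, the candidate-edge encoding, and the rounding step are hypothetical, not the cited method), one can relax each candidate edge flip to a variable in [0, 1], run projected gradient ascent on a surrogate attack loss, and project onto the perturbation budget at every step:

import numpy as np

def project(s, budget):
    """Euclidean projection of s onto {x : 0 <= x <= 1, sum(x) <= budget}."""
    x = np.clip(s, 0.0, 1.0)
    if x.sum() <= budget:
        return x
    lo, hi = 0.0, float(s.max())            # bisection on the shift mu
    for _ in range(50):
        mu = 0.5 * (lo + hi)
        if np.clip(s - mu, 0.0, 1.0).sum() > budget:
            lo = mu
        else:
            hi = mu
    return np.clip(s - hi, 0.0, 1.0)

def pgd_topology_attack(attack_loss_grad, n_candidates, budget,
                        steps=200, lr=0.1, seed=0):
    """Projected gradient ascent over relaxed edge-flip variables s in [0, 1]^n_candidates.
    attack_loss_grad(s) is a hypothetical callback returning the gradient of a
    surrogate attack loss with respect to s."""
    rng = np.random.default_rng(seed)
    s = np.zeros(n_candidates)
    for _ in range(steps):
        s = project(s + lr * attack_loss_grad(s), budget)   # ascend, then project
    # Turn the relaxed solution into discrete edge flips by sampling each flip
    # independently with probability s[i] (one common rounding scheme).
    return (rng.random(n_candidates) < s).astype(int)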
Convex Adversarial Collective Classification
TLDR
A novel method for robustly performing collective classification in the presence of a malicious adversary that can modify up to a fixed number of binary-valued attributes; it consistently outperforms both non-adversarial and non-relational baselines.
Explaining and Harnessing Adversarial Examples
TLDR
It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature; this view is supported by new quantitative results and gives the first explanation of the most intriguing fact about adversarial examples: their generalization across architectures and training sets.
Robust Physical-World Attacks on Deep Learning Visual Classification
TLDR
This work proposes a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions and shows that adversarial examples generated using RP2 achieve high targeted misclassification rates against standard-architecture road sign classifiers in the physical world under various environmental conditions, including viewpoints.