# Robust Collective Classification against Structural Attacks

```bibtex
@inproceedings{Zhou2020RobustCC,
  title     = {Robust Collective Classification against Structural Attacks},
  author    = {Kai Zhou and Yevgeniy Vorobeychik},
  booktitle = {UAI},
  year      = {2020}
}
```

Collective learning methods exploit relations among data points to enhance classification performance. However, such relations, represented as edges in the underlying graphical model, expose an extra attack surface to adversaries. We study the adversarial robustness of an important class of such graphical models, Associative Markov Networks (AMN), to structural attacks, where an attacker can modify the graph structure at test time. We formulate the task of learning a robust AMN classifier as a…
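The structural attack surface described above can be illustrated with a toy sketch (this is not the paper's actual AMN formulation, which also includes node potentials and a convex learning objective; all names here are hypothetical): a brute-force attacker flips up to a budget of edges to degrade a purely associative score that rewards agreeing neighbors.

```python
import itertools
import numpy as np

def collective_score(adj, labels, w_edge=1.0):
    """Toy associative score: w_edge for every edge whose endpoints agree.
    A stand-in for the AMN edge potentials."""
    score = 0.0
    n = len(labels)
    for i in range(n):
        for j in range(i + 1, n):
            if adj[i, j] and labels[i] == labels[j]:
                score += w_edge
    return score

def worst_case_score(adj, labels, budget, w_edge=1.0):
    """Attacker flips (adds or removes) up to `budget` edges to minimize
    the associative score -- brute force over all flip combinations."""
    n = adj.shape[0]
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    best = collective_score(adj, labels, w_edge)
    for k in range(1, budget + 1):
        for flips in itertools.combinations(pairs, k):
            pert = adj.copy()
            for (i, j) in flips:
                pert[i, j] = pert[j, i] = 1 - pert[i, j]
            best = min(best, collective_score(pert, labels, w_edge))
    return best
```

On a 3-node path with identical labels, a budget of 1 lets the attacker delete one agreeing edge, halving the score; robust learning would choose weights with this worst case in mind rather than the clean score.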

## 6 Citations

GRAPH NEURAL NETWORKS

- Computer Science
- 2021

This work proposes the first collective robustness certificate which computes the number of predictions that are simultaneously guaranteed to remain stable under perturbation, i.e. cannot be attacked.

Robust Network Alignment via Attack Signal Scaling and Adversarial Perturbation Elimination

- Computer Science, WWW
- 2021

This paper develops an adversarial perturbation elimination (APE) model that restores adversarial nodes from the vulnerable space to adversarial-free nodes in the safe area, by integrating Dirac delta approximation (DDA) techniques and LSTM models.

BinarizedAttack: Structural Poisoning Attacks to Graph-based Anomaly Detection

- Computer Science, arXiv
- 2021

This paper designs a new type of targeted structural poisoning attack against a representative regression-based GAD system termed OddBall, and proposes a novel gradient-descent-based attack method termed BinarizedAttack, which is very effective in enabling target nodes to evade graph-based anomaly detection tools with a limited attacker's budget.

Structural Attack against Graph Based Android Malware Detection

- Computer Science, CCS
- 2021

This paper proposes the first structural attack against graph-based Android malware detection techniques, which addresses the inverse-transformation problem between feature-space attacks and problem-space attacks, and designs a heuristic optimization model integrated with a reinforcement-learning framework to optimize this structural attack.

Integrated Defense for Resilient Graph Matching

- Computer Science, ICML
- 2021

An integrated defense model, IDRGM, is proposed for resilient graph matching, with two novel defense techniques that defend against the above two attacks simultaneously; the robustness of the model is evaluated on real datasets against state-of-the-art algorithms.

Expressive 1-Lipschitz Neural Networks for Robust Multiple Graph Learning against Adversarial Attacks

- Computer Science, Mathematics, ICML
- 2021

(Scalar case.) According to Definition 1 and Lemma 1, given a large enough $M$, the domain $(-\infty, +\infty)$ can be partitioned into $M$ subintervals $P = \{[x_0, x_1], [x_1, x_2], \dots, [x_{m-1}, x_m], \dots, [x_{M-1}, x_M]\}$…

## References

Showing 1–10 of 32 references.

Adversarial Attack on Graph Structured Data

- Computer Science, ICML
- 2018

This paper proposes a reinforcement learning based attack method that learns the generalizable attack policy, while only requiring prediction labels from the target classifier, and uses both synthetic and real-world data to show that a family of Graph Neural Network models are vulnerable to adversarial attacks.

Adversarial Attacks on Neural Networks for Graph Data

- Computer Science, KDD
- 2018

This work introduces the first study of adversarial attacks on attributed graphs, specifically focusing on models exploiting ideas of graph convolutions, and generates adversarial perturbations targeting the node's features and the graph structure, taking the dependencies between instances into account.

Adversarial Attacks on Graph Neural Networks via Meta Learning

- Computer Science
- 2019

The core principle is to use meta-gradients to solve the bilevel problem underlying training-time attacks on graph neural networks for node classification that perturb the discrete graph structure, essentially treating the graph as a hyperparameter to optimize.
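The bilevel idea above, differentiating the attacker's loss *through* the inner training procedure, can be sketched on a one-parameter toy model (hypothetical names and a scalar linear model, not the paper's GNN setting); the meta-gradient is checked against a finite difference:

```python
import numpy as np

def inner_train(w0, x, y, lr=0.1):
    """One gradient step of squared-error training (the inner problem)."""
    return w0 - lr * 2.0 * (w0 * x - y) * x

def attack_loss(x, w0, y, xt, yt, lr=0.1):
    """Loss on a target point after training on the (poisoned) input x."""
    w1 = inner_train(w0, x, y, lr)
    return (w1 * xt - yt) ** 2

def meta_gradient(x, w0, y, xt, yt, lr=0.1):
    """d(attack_loss)/dx, chaining through the training step -- the data
    (here x, in the paper the graph) is treated as the variable to optimize."""
    w1 = inner_train(w0, x, y, lr)
    dw1_dx = -lr * 2.0 * (2.0 * w0 * x - y)  # derivative of inner_train w.r.t. x
    return 2.0 * (w1 * xt - yt) * xt * dw1_dx
```

For discrete graph structure, the paper's setting additionally requires scoring candidate edge flips with these meta-gradients rather than taking continuous steps.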

Evasion-Robust Classification on Binary Domains

- Computer Science, ACM Trans. Knowl. Discov. Data
- 2018

This approach is the first to compute an optimal solution to adversarial loss minimization for two general classes of adversarial evasion models in the context of binary feature spaces and is robust to misspecifications of the adversarial model.

Feature Cross-Substitution in Adversarial Classification

- Computer Science, NIPS
- 2014

This work investigates both the problem of modeling the objectives of adversaries, as well as the algorithmic problem of accounting for rational, objective-driven adversaries, and presents the first method for combining an adversarial classification algorithm with a very general class of models of adversarial classifier evasion.

Towards Deep Learning Models Resistant to Adversarial Attacks

- Computer Science, ICLR
- 2018

This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
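The robust-optimization view centers on an inner maximization over a norm ball; a minimal numpy sketch of an L∞ projected-gradient attack against a logistic model (illustrative only, with hypothetical names, not the paper's implementation):

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.3, alpha=0.05, steps=20):
    """L-infinity PGD: ascend the cross-entropy loss of p = sigmoid(w.x + b)
    and project back into the eps-ball around x after every step."""
    x_adv = x.copy()
    for _ in range(steps):
        z = w @ x_adv + b
        p = 1.0 / (1.0 + np.exp(-z))
        grad = (p - y) * w                         # d(cross-entropy)/dx
        x_adv = x_adv + alpha * np.sign(grad)      # signed ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project to the eps-ball
    return x_adv
```

Robust (adversarial) training then minimizes the loss at `pgd_attack(x, ...)` instead of at the clean `x`, which is the saddle-point formulation the work studies.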

Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective

- Computer Science, IJCAI
- 2019

A novel gradient-based attack method is presented that tackles the difficulty of optimizing over discrete graph data, and a defense built on it yields higher robustness against both gradient-based and greedy attack methods without sacrificing classification accuracy on the original graph.
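Gradient-based topology attacks commonly relax the binary edge-flip variables to the box [0, 1] and, after each ascent step, project back onto the attack budget. A sketch of one standard projection, bisection on a shift μ (an assumption about the exact scheme, not taken from this paper):

```python
import numpy as np

def project_budget(s, budget):
    """Project relaxed edge-flip variables s onto {s in [0,1]^m : sum(s) <= budget}
    by bisecting on a uniform shift mu, since sum(clip(s - mu, 0, 1)) is
    monotonically decreasing in mu."""
    s = np.clip(s, 0.0, 1.0)
    if s.sum() <= budget:
        return s                      # already feasible: box projection suffices
    lo, hi = s.min() - 1.0, s.max()   # bracket: sum at lo >= budget, at hi it is 0
    for _ in range(50):
        mu = 0.5 * (lo + hi)
        if np.clip(s - mu, 0.0, 1.0).sum() > budget:
            lo = mu
        else:
            hi = mu
    return np.clip(s - hi, 0.0, 1.0)
```

A final discrete attack is then typically obtained by sampling or rounding the relaxed variables to actual edge flips within the budget.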

Convex Adversarial Collective Classification

- Computer Science, Mathematics, ICML
- 2013

A novel method for robustly performing collective classification in the presence of a malicious adversary that can modify up to a fixed number of binary-valued attributes; it consistently outperforms both non-adversarial and non-relational baselines.

Explaining and Harnessing Adversarial Examples

- Computer Science, ICLR
- 2015

It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature, supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets.

Robust Physical-World Attacks on Deep Learning Visual Classification

- Computer Science, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
- 2018

This work proposes a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions and shows that adversarial examples generated using RP2 achieve high targeted misclassification rates against standard-architecture road sign classifiers in the physical world under various environmental conditions, including viewpoints.