Improved Aggregating and Accelerating Training Methods for Spatial Graph Neural Networks on Fraud Detection
@article{Zeng2022ImprovedAA,
  title   = {Improved Aggregating and Accelerating Training Methods for Spatial Graph Neural Networks on Fraud Detection},
  author  = {Yufan Zeng and Jiashan Tang},
  journal = {ArXiv},
  year    = {2022},
  volume  = {abs/2202.06580}
}
Graph neural networks (GNNs) have been widely applied to numerous fields. A recent work, which combines a layered structure with residual connections, proposes an improved deep architecture that extends CAmouflage-REsistant GNN (CARE-GNN) to deep models, named Residual Layered CARE-GNN (RLC-GNN); it forms a self-correcting and incremental learning mechanism and achieves significant performance improvements on the fraud detection task. However, we identify three issues with RLC-GNN, which are the usage of…
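The exact RLC-GNN architecture is specified in the referenced papers; the following is only a minimal, hypothetical PyTorch sketch of the general idea the abstract describes, a layered (cascaded) stack combined with residual connections, where every layer sees the original node features again and incrementally refines the running representation. All class and parameter names here are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn


class SimpleGNNLayer(nn.Module):
    """Illustrative message-passing layer: mean-aggregates neighbor features
    (dense adjacency for simplicity) and mixes them with the node's own state."""

    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.linear = nn.Linear(2 * in_dim, hidden_dim)

    def forward(self, x, adj):
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        neigh = adj @ x / deg  # mean of neighbor features
        return torch.relu(self.linear(torch.cat([x, neigh], dim=1)))


class LayeredResidualGNN(nn.Module):
    """Hypothetical layered-plus-residual stack in the spirit of RLC-GNN:
    each layer is fed the raw node features together with the current state,
    and its output is added back via a residual connection."""

    def __init__(self, feat_dim, hidden_dim, num_layers, num_classes):
        super().__init__()
        self.input_proj = nn.Linear(feat_dim, hidden_dim)
        self.layers = nn.ModuleList(
            SimpleGNNLayer(feat_dim + hidden_dim, hidden_dim) for _ in range(num_layers)
        )
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, x, adj):
        h = torch.relu(self.input_proj(x))
        for layer in self.layers:
            # each layer refines the state incrementally (residual connection),
            # which is what enables the self-correcting behaviour described above
            h = h + layer(torch.cat([x, h], dim=1), adj)
        return self.classifier(h)
```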
References
Showing 1-10 of 21 references
RLC-GNN: An Improved Deep Architecture for Spatial-Based Graph Neural Network with Application to Fraud Detection
- Computer Science · Applied Sciences
- 2021
This paper proposes an improved algorithm named Residual Layered CARE-GNN (RLC-GNN), a multi-layer architecture that forms a complementary relationship with the residual structure and achieves state-of-the-art results on fraud detection tasks.
Enhancing Graph Neural Network-based Fraud Detectors against Camouflaged Fraudsters
- Computer Science · CIKM
- 2020
This paper introduces two types of camouflage observed in recent empirical studies, i.e., feature camouflage and relation camouflage, and proposes a new model named CAmouflage-REsistant GNN (CARE-GNN) to enhance the GNN aggregation process with three unique modules against these camouflages.
Learning long-term dependencies using layered graph neural networks
- Computer Science · The 2010 International Joint Conference on Neural Networks (IJCNN)
- 2010
This paper presents a new architecture, called Layered GNN (LGNN), realized by a cascade of GNNs: each layer is fed with the original data and with the state information calculated by the previous layer in the cascade, which allows each GNN to solve a subproblem.
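As a compact, purely illustrative sketch of the cascade idea summarized above (and in contrast to the residual stack sketched earlier, where layer outputs are added back), each stage below receives the original features concatenated with the previous stage's state and passes its new state straight down the cascade. Names and dimensions are assumptions, not the paper's specification.

```python
import torch
import torch.nn as nn


class LGNNCascade(nn.Module):
    """Sketch of an LGNN-style cascade: stage k is fed the original node
    features together with the state computed by stage k-1, so each stage
    can focus on a sub-problem left over by the previous one."""

    def __init__(self, feat_dim, state_dim, num_stages):
        super().__init__()
        self.stages = nn.ModuleList(
            nn.Linear(feat_dim + state_dim, state_dim) for _ in range(num_stages)
        )
        self.state_dim = state_dim

    def forward(self, x, adj_norm):
        state = torch.zeros(x.size(0), self.state_dim, device=x.device)
        for stage in self.stages:
            # original data + previous stage's state, propagated over the graph
            state = torch.tanh(adj_norm @ stage(torch.cat([x, state], dim=1)))
        return state
```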
Effective Training Strategies for Deep Graph Neural Networks
- Computer Science · ArXiv
- 2020
The proposed NodeNorm regularizes deep GCNs by discouraging feature-wise correlation of hidden embeddings and increasing model smoothness with respect to input node features, and thus effectively reduces overfitting, enabling deep GNNs to compete with and even outperform shallow ones.
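A minimal sketch of node-wise normalization in the spirit of NodeNorm: each node's hidden embedding is rescaled by its own feature-wise standard deviation. The exponent 1/p and the epsilon are assumptions made for illustration; the paper gives the exact formulation.

```python
import torch


def node_norm(h: torch.Tensor, p: float = 2.0, eps: float = 1e-6) -> torch.Tensor:
    """Normalize each node's embedding by its own (feature-wise) standard
    deviation, discouraging individual nodes from dominating and reducing
    feature-wise correlation across hidden embeddings."""
    std = h.std(dim=1, keepdim=True)  # per-node std over the feature dimension
    return h / (std + eps) ** (1.0 / p)


# usage: applied after each hidden layer of a deep GCN
h = torch.randn(5, 16)        # 5 nodes, 16-dimensional embeddings
h_normed = node_norm(h)
```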
Graph Convolutional Neural Networks for Web-Scale Recommender Systems
- Computer Science · KDD
- 2018
This work develops a novel method based on highly efficient random walks to structure the convolutions, together with a novel training strategy that relies on progressively harder training examples to improve the robustness and convergence of the model.
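A rough sketch, under stated assumptions, of how random walks can structure a convolution's neighborhood as described above: run short walks from a node, count visits, and keep the most frequently visited nodes as its neighborhood, with visit counts as importance weights. Function name, walk counts, and lengths are illustrative, not the paper's settings.

```python
import random
from collections import Counter


def random_walk_neighborhood(adj_list, node, num_walks=20, walk_length=3, top_k=5):
    """Build an importance-based neighborhood for `node` from short random walks:
    the most frequently visited nodes become its neighbors for aggregation."""
    visits = Counter()
    for _ in range(num_walks):
        cur = node
        for _ in range(walk_length):
            nbrs = adj_list.get(cur, [])
            if not nbrs:
                break
            cur = random.choice(nbrs)
            if cur != node:
                visits[cur] += 1
    return visits.most_common(top_k)  # [(neighbor, importance weight), ...]


# usage on a toy adjacency list
adj_list = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
print(random_walk_neighborhood(adj_list, node=0))
```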
Towards Deeper Graph Neural Networks
- Computer Science · KDD
- 2020
This work provides a systematic and theoretical analysis of the over-smoothing issue and proposes Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields when learning graph node representations.
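A minimal sketch of the adaptive-receptive-field idea summarized above: representations propagated to different depths are combined with learned, node-specific retention scores, so each node effectively chooses how far its receptive field reaches. The score function and normalization details here are assumptions, not DAGNN's exact specification.

```python
import torch
import torch.nn as nn


class AdaptiveDepthAggregation(nn.Module):
    """Combine node representations from 0..max_hops propagation steps using
    learned per-node retention scores (adaptive receptive field)."""

    def __init__(self, hidden_dim, max_hops):
        super().__init__()
        self.max_hops = max_hops
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, z, adj_norm):
        reps = [z]
        for _ in range(self.max_hops):
            reps.append(adj_norm @ reps[-1])          # one more propagation hop
        stacked = torch.stack(reps, dim=1)            # [N, K+1, hidden_dim]
        scores = torch.sigmoid(self.score(stacked))   # [N, K+1, 1] retention scores
        return (scores * stacked).sum(dim=1)          # adaptive per-node combination
```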
Inductive Representation Learning on Large Graphs
- Computer Science · NIPS
- 2017
GraphSAGE is presented, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data and outperforms strong baselines on three inductive node-classification benchmarks.
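A minimal GraphSAGE-style mean aggregator, sketched to illustrate why the framework is inductive: embeddings are computed from sampled neighbor features rather than from a fixed graph, so previously unseen nodes can be embedded. The sampling size and the use of a dense neighbor list are illustrative choices.

```python
import random
import torch
import torch.nn as nn


class SageMeanLayer(nn.Module):
    """Sample a fixed number of neighbors, average their features, and combine
    them with the node's own features (mean-aggregator variant)."""

    def __init__(self, in_dim, out_dim, num_samples=5):
        super().__init__()
        self.num_samples = num_samples
        self.linear = nn.Linear(2 * in_dim, out_dim)

    def forward(self, x, neighbors):
        # neighbors[v] is the list of neighbor ids of node v
        agg = torch.zeros_like(x)
        for v, nbrs in enumerate(neighbors):
            if nbrs:
                sampled = random.sample(nbrs, min(self.num_samples, len(nbrs)))
                agg[v] = x[sampled].mean(dim=0)
        return torch.relu(self.linear(torch.cat([x, agg], dim=1)))


# usage on a toy graph
x = torch.randn(4, 8)
neighbors = [[1, 2], [0, 3], [0], [1]]
layer = SageMeanLayer(8, 16)
out = layer(x, neighbors)   # [4, 16] node embeddings
```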
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
- Computer Science · ICML
- 2015
Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
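For reference, the standard batch-normalization transform itself (the mechanism behind the speed-up reported above): each feature is normalized to zero mean and unit variance over the mini-batch, then rescaled and shifted by learnable parameters. Running statistics for inference are omitted here for brevity.

```python
import torch


def batch_norm_train(x, gamma, beta, eps=1e-5):
    """Training-time batch normalization for one mini-batch:
    normalize per feature over the batch, then scale (gamma) and shift (beta)."""
    mean = x.mean(dim=0, keepdim=True)                  # per-feature batch mean
    var = x.var(dim=0, unbiased=False, keepdim=True)    # per-feature batch variance
    x_hat = (x - mean) / torch.sqrt(var + eps)
    return gamma * x_hat + beta


x = torch.randn(32, 64)                 # batch of 32 samples, 64 features
gamma, beta = torch.ones(64), torch.zeros(64)
y = batch_norm_train(x, gamma, beta)
```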