Corpus ID: 202888963

GraphMix: Regularized Training of Graph Neural Networks for Semi-Supervised Learning

@article{Verma2019GraphMixRT,
  title={GraphMix: Regularized Training of Graph Neural Networks for Semi-Supervised Learning},
  author={Vikas Verma and Meng Qu and Alex Lamb and Yoshua Bengio and Juho Kannala and Jian Tang},
  journal={ArXiv},
  year={2019},
  volume={abs/1909.11715}
}
We present GraphMix, a regularization technique for Graph Neural Network based semi-supervised object classification, leveraging the recent advances in the regularization of classical deep neural networks. Specifically, we propose a unified approach in which we train a fully-connected network jointly with the graph neural network via parameter sharing, interpolation-based regularization, and self-predicted-targets. Our proposed method is architecture agnostic in the sense that it can be applied …
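The interpolation-based regularization in the abstract is Mixup-style: training examples and their soft labels are combined convexly with a Beta-sampled coefficient. A minimal sketch in plain NumPy (the function name and signature are illustrative, not the authors' code):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=1.0, rng=None):
    """Mixup: form a convex combination of two examples and their soft labels."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)        # mixing coefficient in (0, 1)
    x = lam * x1 + (1.0 - lam) * x2     # interpolated features
    y = lam * y1 + (1.0 - lam) * y2     # interpolated label distribution
    return x, y
```

In GraphMix this kind of interpolation is applied through a fully-connected network trained jointly with the GNN, so the graph structure itself is never interpolated.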
Graph Random Neural Network
TLDR
This work proposes consistency regularization for GRAND by leveraging the distributional consistency of unlabeled nodes under multiple augmentations, improving the generalization capacity of the model.
Graph Symbiosis Learning
TLDR
A novel adaptive exchange method is proposed that iteratively substitutes redundant channels in the weight matrix of one GNN with informative channels of another GNN in a layer-by-layer manner.
Effective Training Strategies for Deep Graph Neural Networks
TLDR
The proposed NodeNorm regularizes deep GCNs by discouraging feature-wise correlation of hidden embeddings and increasing model smoothness with respect to input node features, and thus effectively reduces overfitting, enabling deep GNNs to compete with and even outperform shallow ones.
Bag of Tricks for Training Deeper Graph Neural Networks: A Comprehensive Benchmark Study
  • Tianlong Chen, Kaixiong Zhou, +4 authors Zhangyang Wang
  • Computer Science
  • ArXiv
  • 2021
TLDR
The first fair and reproducible benchmark dedicated to assessing the “tricks” of training deep GNNs is presented, and it is demonstrated that an organic combination of initial connection, identity mapping, and group and batch normalization performs best on large datasets.
NodeAug: Semi-Supervised Node Classification with Data Augmentation
TLDR
The NodeAug (Node-Parallel Augmentation) scheme, that creates a 'parallel universe' for each node to conduct DA, to block the undesired effects from other nodes, yields significant gains for strong GCN models on the Cora, Citeseer, Pubmed, and two co-authorship networks, with a more efficient training process thanks to the proposed subgraph mini-batch training approach.
GRAPHSAD: LEARNING GRAPH REPRESENTATIONS
Graph Neural Networks (GNNs) learn effective node/graph representations by aggregating the attributes of neighboring nodes, which commonly derives a single representation mixing the information of …
Network representation learning: A macro and micro view
Graph is a universal data structure that is widely used to organize data in the real world. Various real-world networks, like transportation, social, and academic networks, can be represented by …
Distance-wise Graph Contrastive Learning
TLDR
The Distance-wise Graph Contrastive Learning (DwGCL) method applies contrastive learning to graph learning adaptively by taking the task information received by each node into consideration, and brings a clear improvement over previous GCL methods.
Understanding and Resolving Performance Degradation in Graph Convolutional Networks
  • Kuangqi Zhou, Yanfei Dong, +4 authors Jiashi Feng
  • Computer Science, Mathematics
  • 2020
TLDR
A variance-controlling technique termed Node Normalization (NodeNorm), which scales each node’s features using its own standard deviation, enables deep GCNs to outperform shallow ones in cases where deep models are needed, and to achieve comparable results with shallow ones on 6 benchmark datasets.
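As described above, NodeNorm scales each node's hidden features by that node's own standard deviation. A rough sketch of the idea (function name and the epsilon guard are my assumptions, not taken from the paper):

```python
import numpy as np

def node_norm(h, eps=1e-6):
    """Divide each node's feature vector (one row of h) by its own std."""
    std = h.std(axis=1, keepdims=True)  # per-node std over feature dims
    return h / (std + eps)              # eps guards against constant rows
```

After this scaling, every node's feature vector has (approximately) unit standard deviation, which is the variance-controlling effect the summary refers to.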
Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning Attacks
TLDR
This work proposes to build a surrogate predictor that does not directly access the graph structure, but systematically extracts reliable knowledge from a standard GNN through a novel uncertainty-matching strategy, which makes UM-GNN immune to evasion attacks by design, and achieves significantly improved robustness against poisoning attacks.

References

SHOWING 1-10 OF 58 REFERENCES
Deeper Insights into Graph Convolutional Networks for Semi-Supervised Learning
TLDR
It is shown that the graph convolution of the GCN model is actually a special form of Laplacian smoothing, which is the key reason why GCNs work, but it also brings potential concerns of over-smoothing with many convolutional layers.
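The Laplacian-smoothing view comes from the GCN propagation rule $\hat{D}^{-1/2}(A+I)\hat{D}^{-1/2}H$: each layer replaces a node's features with a degree-weighted average over the node and its neighbors, so stacking many layers drives neighboring features together (over-smoothing). A small sketch of one propagation step:

```python
import numpy as np

def gcn_propagate(adj, h):
    """One step of symmetric-normalized propagation D^{-1/2}(A+I)D^{-1/2} H."""
    a_hat = adj + np.eye(adj.shape[0])             # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))  # D^{-1/2} as a vector
    norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return norm @ h                                # smoothed features
```

On a complete graph one step already collapses all node features to their common mean, the extreme case of the smoothing the summary describes.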
Graph Attention Networks
We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior …
Deep Convolutional Networks on Graph-Structured Data
TLDR
This paper develops an extension of Spectral Networks that incorporates a graph estimation procedure, which is tested on large-scale classification problems, matching or improving over Dropout Networks with far fewer parameters to estimate.
Pitfalls of Graph Neural Network Evaluation
TLDR
This paper performs a thorough empirical evaluation of four prominent GNN models and suggests that simpler GNN architectures are able to outperform the more sophisticated ones if the hyperparameters and the training procedure are tuned fairly for all models.
Deep Graph Infomax
TLDR
Deep Graph Infomax (DGI) is presented, a general approach for learning node representations within graph-structured data in an unsupervised manner that is readily applicable to both transductive and inductive learning setups.
Semi-Supervised Classification with Graph Convolutional Networks
TLDR
A scalable approach for semi-supervised learning on graph-structured data is presented, based on an efficient variant of convolutional neural networks that operate directly on graphs; it outperforms related methods by a significant margin.
Variational Graph Auto-Encoders
TLDR
The variational graph auto-encoder (VGAE) is introduced, a framework for unsupervised learning on graph-structured data based on the variational auto-encoder (VAE) that can naturally incorporate node features, which significantly improves predictive performance on a number of benchmark datasets.
Graph Neural Networks: A Review of Methods and Applications
TLDR
A detailed review of existing graph neural network models is provided, the applications are systematically categorized, and four open problems for future research are proposed.
Geometric Deep Learning on Graphs and Manifolds Using Mixture Model CNNs
TLDR
This paper proposes a unified framework for generalizing CNN architectures to non-Euclidean domains (graphs and manifolds) and learning local, stationary, and compositional task-specific features; the proposed method is tested on standard tasks from image, graph, and 3D shape analysis and consistently outperforms previous approaches.
Batch Virtual Adversarial Training for Graph Convolutional Networks
TLDR
Two algorithms are proposed, sample-based and optimization-based BVAT, which are suitable for promoting the smoothness of the model for graph-structured data by either finding virtual adversarial perturbations for a subset of nodes far from each other or generating virtual adversaries for all nodes with an optimization process.