Corpus ID: 11118105

Deeper Insights into Graph Convolutional Networks for Semi-Supervised Learning

@article{Li2018DeeperII,
  title={Deeper Insights into Graph Convolutional Networks for Semi-Supervised Learning},
  author={Qimai Li and Zhichao Han and Xiao-Ming Wu},
  journal={ArXiv},
  year={2018},
  volume={abs/1801.07606}
}
Many interesting problems in machine learning are being revisited with new deep learning tools. [...] First, we show that the graph convolution of the GCN model is actually a special form of Laplacian smoothing, which is the key reason why GCNs work, but it also brings potential concerns of over-smoothing with many convolutional layers. Second, to overcome the limits of the GCN model with shallow architectures, we propose both co-training and self-training approaches to train GCNs. Our approaches…
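
The Laplacian-smoothing claim is easy to check numerically. The following is a minimal sketch (not the authors' code; the toy graph and random features are made-up illustrations) that builds the renormalized propagation matrix used by GCNs, A_hat = D^{-1/2}(A + I)D^{-1/2}, and applies it repeatedly; after many applications every node's direction-normalized features become identical, which is the over-smoothing the paper warns about:

    import numpy as np

    # Toy undirected graph on 5 nodes (hypothetical example data).
    A = np.array([[0, 1, 1, 0, 0],
                  [1, 0, 1, 0, 0],
                  [1, 1, 0, 1, 0],
                  [0, 0, 1, 0, 1],
                  [0, 0, 0, 1, 0]], dtype=float)

    # Renormalization trick: add self-loops, then symmetrically normalize.
    A_tilde = A + np.eye(len(A))
    d_inv_sqrt = np.diag(A_tilde.sum(axis=1) ** -0.5)
    A_hat = d_inv_sqrt @ A_tilde @ d_inv_sqrt  # equals I - L_sym: one smoothing step

    H = np.random.default_rng(0).normal(size=(5, 3))  # random node features
    for k in [1, 2, 5, 50]:
        Hk = np.linalg.matrix_power(A_hat, k) @ H
        rows = Hk / np.linalg.norm(Hk, axis=1, keepdims=True)
        print(k, np.round(rows, 3))  # all rows converge to the same direction
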
Simple and Deep Graph Convolutional Networks
TLDR: GCNII is proposed, an extension of the vanilla GCN model with two simple yet effective techniques, initial residual and identity mapping, that effectively relieve the problem of over-smoothing.
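
A minimal sketch of one GCNII layer under the propagation rule the paper describes, H' = sigma(((1 - alpha) * A_hat @ H + alpha * H0) @ ((1 - beta) * I + beta * W)); the alpha and beta values below are illustrative assumptions, not the paper's tuned settings:

    import numpy as np

    def gcnii_layer(A_hat, H, H0, W, alpha=0.1, beta=0.5):
        """One GCNII layer: 'initial residual' mixes the input features H0
        back in; 'identity mapping' mixes W with the identity matrix.
        alpha and beta here are illustrative values."""
        support = (1 - alpha) * (A_hat @ H) + alpha * H0             # initial residual
        out = support @ ((1 - beta) * np.eye(W.shape[0]) + beta * W)  # identity mapping
        return np.maximum(out, 0.0)                                   # ReLU
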
New Insights into Graph Convolutional Networks using Neural Tangent Kernels
TLDR: This paper derives NTKs corresponding to infinitely wide GCNs (with and without skip connections) and proposes the NTK as an efficient ‘surrogate model’ for GCNs that does not suffer from performance fluctuations due to hyper-parameter tuning, since it is a hyper-parameter-free deterministic kernel.
Revisiting Graph Convolutional Network on Semi-Supervised Node Classification from an Optimization Perspective
TLDR: A universal theoretical framework for GCNs is established from an optimization perspective, and a novel convolutional kernel named GCN+ is derived that has a lower parameter count while inherently relieving over-smoothing.
Multi-Stage Self-Supervised Learning for Graph Convolutional Networks
TLDR: A novel training algorithm for graph convolutional networks, the Multi-Stage Self-Supervised (M3S) training algorithm, combines multi-stage training with a self-supervised learning approach, focusing on improving the generalization performance of GCNs on graphs with few labeled nodes.
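
The multi-stage idea can be sketched generically as repeated confidence-based pseudo-labelling. This is a simplification under assumed interfaces: train_gcn and predict_proba are hypothetical callables, and M3S's self-supervised alignment check on the pseudo-labels is omitted.

    import numpy as np

    def multi_stage_self_training(train_gcn, predict_proba, X, y, labeled, unlabeled,
                                  stages=3, per_class=10):
        """Generic multi-stage self-training loop (simplified; a plain
        confidence filter stands in for M3S's self-supervised check).
        `train_gcn` and `predict_proba` are hypothetical callables."""
        labeled, unlabeled = set(labeled), set(unlabeled)
        for _ in range(stages):
            model = train_gcn(X, y, sorted(labeled))
            proba = predict_proba(model, X)          # shape (n_nodes, n_classes)
            for c in range(proba.shape[1]):
                # Most confident unlabeled nodes for class c become pseudo-labels.
                cand = [i for i in unlabeled if proba[i].argmax() == c]
                cand.sort(key=lambda i: proba[i, c], reverse=True)
                for i in cand[:per_class]:
                    y[i] = c
                    labeled.add(i)
                    unlabeled.discard(i)
        return train_gcn(X, y, sorted(labeled))
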
Rank-based self-training for graph convolutional networks
TLDR: This paper proposes a novel self-training approach through a rank-based model for improving the accuracy of GCNs on semi-supervised classification tasks, together with a rank aggregation of the labeled sets obtained from different GCN models.
Semi-supervised learning with mixed-order graph convolutional networks
  • Jie Wang, Jianqing Liang, Junbiao Cui, Jiye Liang
  • Computer Science
  • Inf. Sci.
  • 2021
TLDR: A novel end-to-end ensemble framework named mixed-order graph convolutional networks (MOGCN) employs an ensemble module in which the pseudo-labels of unlabeled nodes produced by various GCN learners are used to augment the diversity among the learners.
Graph Convolutional Networks Meet with High Dimensionality Reduction
TLDR: This paper proposes to use a dimensionality reduction technique in conjunction with personalized PageRank, so that GCNs can both take advantage of graph topology and resolve the hub-node-favouring problem.
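
A sketch of personalized-PageRank propagation in the PPNP style, which is presumably the ingredient meant here; alpha is the teleport probability, and 0.15 is an illustrative choice:

    import numpy as np

    def ppr_propagate(A_hat, H, alpha=0.15):
        """Personalized-PageRank propagation (PPNP-style closed form):
        Z = alpha * (I - (1 - alpha) * A_hat)^{-1} H.
        alpha is the teleport probability; 0.15 is illustrative."""
        n = A_hat.shape[0]
        return alpha * np.linalg.solve(np.eye(n) - (1 - alpha) * A_hat, H)
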
Every Node Counts: Self-Ensembling Graph Convolutional Networks for Semi-Supervised Learning
TLDR: This work proposes a novel framework named Self-Ensembling GCN (SEGCN), which marries GCN with Mean Teacher, another powerful model in semi-supervised learning, and contains a student model and a teacher model.
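
The teacher in Mean Teacher is not trained by gradient descent; its weights track an exponential moving average of the student's weights. A minimal sketch (the decay value is an illustrative assumption):

    def ema_update(teacher_params, student_params, decay=0.99):
        """Mean Teacher update: teacher weights are an exponential moving
        average of student weights; a consistency loss (not shown) then
        pushes student predictions toward teacher predictions."""
        return [decay * t + (1 - decay) * s
                for t, s in zip(teacher_params, student_params)]
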
The Truly Deep Graph Convolutional Networks for Node Classification
TLDR: DropEdge is proposed, a novel technique that randomly removes a certain number of edges from the input graphs, acting as both a data augmenter and a message-passing reducer, and enabling a wider range of convolutional neural networks from the image field to be recast in the graph domain.
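
DropEdge itself fits in a few lines. A minimal sketch over a (2, num_edges) edge-index array; the drop rate p=0.2 is illustrative, and for undirected graphs both directions of an edge should be dropped together, which this simplified version does not enforce:

    import numpy as np

    def drop_edge(edge_index, p=0.2, rng=None):
        """Randomly remove a fraction p of edges, typically resampled
        every training epoch (data augmentation + fewer messages passed)."""
        if rng is None:
            rng = np.random.default_rng()
        keep = rng.random(edge_index.shape[1]) >= p
        return edge_index[:, keep]
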
GraphMix: Regularized Training of Graph Neural Networks for Semi-Supervised Learning
TLDR: This work proposes a unified approach in which a fully-connected network is trained jointly with the graph neural network via parameter sharing, interpolation-based regularization, and self-predicted targets.
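
The interpolation-based regularization in question is mixup-style: train on convex combinations of examples and their (one-hot) targets. A minimal sketch under that assumption:

    import numpy as np

    def mixup(x1, x2, y1, y2, alpha=1.0, rng=None):
        """Mixup interpolation as used by GraphMix's fully-connected branch:
        a Beta(alpha, alpha) mixing weight combines features and targets."""
        if rng is None:
            rng = np.random.default_rng()
        lam = rng.beta(alpha, alpha)
        return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2
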

References

Showing 1-10 of 42 references.
Semi-Supervised Classification with Graph Convolutional Networks
TLDR: A scalable approach for semi-supervised learning on graph-structured data, based on an efficient variant of convolutional neural networks that operate directly on graphs, which outperforms related methods by a significant margin.
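
The model itself is compact. A minimal numpy sketch of the two-layer forward pass Z = softmax(A_hat @ ReLU(A_hat @ X @ W0) @ W1), with A_hat the renormalized adjacency built as in the earlier smoothing demo (the weights here would be learned; training is omitted):

    import numpy as np

    def gcn_forward(A_hat, X, W0, W1):
        """Two-layer GCN forward pass from Kipf & Welling."""
        H = np.maximum(A_hat @ X @ W0, 0.0)      # first graph convolution + ReLU
        logits = A_hat @ H @ W1                  # second graph convolution
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)  # row-wise softmax
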
Revisiting Semi-Supervised Learning with Graph Embeddings
TLDR: On a large and diverse set of benchmark tasks, including text classification, distantly supervised entity extraction, and entity classification, the proposed semi-supervised learning framework shows improved performance over many of the existing models.
Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering
TLDR: This work presents a formulation of CNNs in the context of spectral graph theory, which provides the necessary mathematical background and efficient numerical schemes to design fast localized convolutional filters on graphs.
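
The fast localized filters are Chebyshev polynomials of a rescaled Laplacian, computed with the recurrence T_k = 2 * L_scaled @ T_{k-1} - T_{k-2}. A minimal sketch, assuming the common lmax ≈ 2 approximation for the normalized Laplacian:

    import numpy as np

    def cheb_filter(L, X, thetas):
        """Order-K Chebyshev filter: sum_k thetas[k] * T_k(L_scaled) @ X,
        with T_0 = I, T_1 = L_scaled. L_scaled = 2L/lmax - I, taking
        lmax ~= 2 for the normalized Laplacian."""
        L_scaled = L - np.eye(L.shape[0])
        Tx = [X, L_scaled @ X]
        for _ in range(2, len(thetas)):
            Tx.append(2 * L_scaled @ Tx[-1] - Tx[-2])
        return sum(t * T for t, T in zip(thetas, Tx))
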
Spectral Networks and Locally Connected Networks on Graphs
TLDR: This paper considers possible generalizations of CNNs to signals defined on more general domains without the action of a translation group, and proposes two constructions, one based upon a hierarchical clustering of the domain and another based on the spectrum of the graph Laplacian.
DeepWalk: online learning of social representations
TLDR: DeepWalk is an online learning algorithm that builds useful incremental results and is trivially parallelizable, which makes it suitable for a broad class of real-world applications such as network classification and anomaly detection.
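
DeepWalk's first stage is simply truncated random walks, which are then fed to a skip-gram model as if they were sentences. A minimal sketch of the walk generation (walk counts and lengths are illustrative defaults; adj_list maps each node to a list of its neighbours):

    import numpy as np

    def random_walks(adj_list, num_walks=10, walk_len=40, rng=None):
        """Generate truncated random walks over the graph; the output
        corpus is what DeepWalk passes to skip-gram training."""
        if rng is None:
            rng = np.random.default_rng()
        walks, nodes = [], list(adj_list)
        for _ in range(num_walks):
            rng.shuffle(nodes)
            for start in nodes:
                walk = [start]
                while len(walk) < walk_len and adj_list[walk[-1]]:
                    walk.append(rng.choice(adj_list[walk[-1]]))
                walks.append(walk)
        return walks
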
Semi-supervised learning using randomized mincuts
TLDR: Experiments on several datasets show that when the structure of the graph supports small cuts, this method can result in highly accurate classifiers with good accuracy/coverage tradeoffs, and it can be given theoretical justification from both a Markov random field perspective and sample-complexity considerations.
Label Propagation and Quadratic Criterion
TLDR: This chapter shows how different graph-based algorithms for semi-supervised learning can be cast into a common framework in which one minimizes a quadratic cost criterion whose closed-form solution is found by solving a linear system of size n.
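
The quadratic criterion and its closed-form solution fit in a few lines. A minimal sketch, assuming the unnormalized Laplacian and writing the criterion as ||F - Y||^2 + mu * tr(F^T L F), whose minimizer solves the n x n linear system (I + mu * L) F = Y:

    import numpy as np

    def label_propagation(W, Y, mu=1.0):
        """Closed-form minimizer of the quadratic criterion: solve
        (I + mu * L) F = Y, with L the unnormalized graph Laplacian.
        mu is an illustrative regularization weight; Y holds one-hot
        labels for labeled nodes and zeros elsewhere."""
        L = np.diag(W.sum(axis=1)) - W
        n = W.shape[0]
        return np.linalg.solve(np.eye(n) + mu * L, Y)
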
Manifold Regularization: A Geometric Framework for Learning from Labeled and Unlabeled Examples
TLDR: A semi-supervised framework that incorporates labeled and unlabeled data in a general-purpose learner is proposed, and properties of reproducing kernel Hilbert spaces are used to prove new Representer theorems that provide a theoretical basis for the algorithms.
New Regularized Algorithms for Transductive Learning
TLDR: This work proposes a new graph-based label propagation algorithm for transductive learning that can be extended to incorporate additional prior information, and demonstrates it by classifying data whose labels are not mutually exclusive.
One-Shot Generalization in Deep Generative Models
TLDR: New deep generative models are developed that combine the representational power of deep learning with the inferential power of Bayesian reasoning; they are able to generate compelling and diverse samples, providing an important class of general-purpose models for one-shot machine learning.