On the Equivalence of Decoupled Graph Convolution Network and Label Propagation

@article{Dong2021OnTE,
  title={On the Equivalence of Decoupled Graph Convolution Network and Label Propagation},
  author={Hande Dong and Jiawei Chen and Fuli Feng and Xiangnan He and Shuxian Bi and Zhaolin Ding and Peng Cui},
  journal={Proceedings of the Web Conference 2021},
  year={2021}
}
The original design of the Graph Convolution Network (GCN) couples feature transformation and neighborhood aggregation for node representation learning. Recent work shows that this coupling is inferior to decoupling, which better supports deep graph propagation and has become the latest paradigm of GCN (e.g., APPNP [16] and SGCN [32]). Despite its effectiveness, the working mechanisms of the decoupled GCN are not well understood. In this paper, we explore the decoupled GCN for semi-supervised node…
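To make the contrast concrete, here is a minimal numpy sketch of the two designs, assuming a precomputed symmetrically normalized adjacency matrix `A_hat`; the function names and the two-stage MLP-then-propagate structure are illustrative, not the paper's code.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def coupled_gcn(A_hat, X, weights):
    """Original GCN: each layer couples neighborhood aggregation
    (A_hat @ H) with feature transformation (H @ W)."""
    H = X
    for W in weights:
        H = relu(A_hat @ H @ W)
    return H

def decoupled_gcn(A_hat, X, weights, K=10):
    """Decoupled design (APPNP/SGC style): transform features once
    with an MLP, then propagate the predictions K times."""
    H = X
    for W in weights[:-1]:
        H = relu(H @ W)
    H = H @ weights[-1]          # class logits from the MLP
    for _ in range(K):           # parameter-free propagation
        H = A_hat @ H
    return H
```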

Propagation with Adaptive Mask then Training for Node Classification on Attributed Networks

This work proposes a new method, Propagation with Adaptive Mask then Training (PAMT), which preserves the attribute correlation of adjacent nodes during propagation and reduces structural noise in the structure-aware propagation process.

RIM: Reliable Influence-based Active Learning on Graphs

This paper proposes to unify active learning (AL) and message passing towards minimizing labeling costs, and derives a fundamentally new AL selection criterion for GCN and LP, reliable influence maximization (RIM), by considering the quantity and quality of influence simultaneously.

Learning with Few Labeled Nodes via Augmented Graph Self-Training

This work proposes AGST (Augmented Graph Self-Training), a new graph data augmentation framework built with two new augmentation modules (structural and semantic) on top of a decoupled GST backbone, and investigates whether this framework can learn an effective graph predictive model with extremely limited labeled nodes.

Regularizing Graph Neural Networks via Consistency-Diversity Graph Augmentations

This paper analyzes two representative semi-supervised learning algorithms: label propagation (LP) and consistency regularization (CR) and finds that LP utilizes the prior knowledge of graphs to improve consistency and CR adopts variable augmentations to promote diversity.

Weakly-supervised Graph Meta-learning for Few-shot Node Classification

Based on a new robustness-enhanced episodic training, Meta-GHN is meta-learned to hallucinate clean node representations from weakly-labeled data and extracts highly transferable meta-knowledge, which enables the model to quickly adapt to unseen tasks with few labeled instances.

Robust Graph Meta-learning for Weakly-supervised Few-shot Node Classification

In this new graph meta-learning framework, based on a robustness-enhanced episodic training paradigm, Meta-GIN is meta-learned to interpolate node representations from weakly-labeled data and extracts highly transferable meta-knowledge, which enables the model to quickly adapt to unseen tasks with few labeled instances.

Accurate and Scalable Graph Neural Networks for Billion-Scale Graphs

This paper proposes COSAL, a novel scalable and effective GNN framework that substitutes the expensive aggregation with an efficient proximate node selection mechanism, which picks out the most important nodes for each target node according to the graph topology, and further proposes a fine-grained neighbor importance quantification strategy to enhance the expressive power of COSAL.

Deep Manifold Learning with Graph Mining

A novel deep graph model with a non-gradient decision layer is proposed for graph mining, and a joint optimization method is designed for this model, which greatly accelerates its convergence.

Alternately Optimized Graph Neural Networks

This work proposes an alternating optimization framework for graph neural networks that does not require end-to-end training, and shows that the proposed algorithm performs comparably to existing state-of-the-art algorithms while consuming less computation and memory.

A Graph Diffusion Scheme for Decentralized Content Search based on Personalized PageRank

This work generates latent representations of P2P nodes based on their stored documents and diffuses them to the rest of the network with graph signal processing techniques, such as personalized PageRank, using the diffused representations to guide search queries towards relevant content.

References


Unifying Graph Convolutional Neural Networks and Label Propagation

This work proposes an end-to-end model that unifies GCN and LPA for node classification, and shows superiority over state-of-the-art GCN-based methods in terms of node classification accuracy.
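For reference, the label propagation (LPA) side of this unification can be sketched in a few lines; `A_hat`, `Y_init`, and `labeled_mask` are hypothetical inputs, and clamping the known labels after every step is one common variant rather than the paper's exact formulation.

```python
import numpy as np

def label_propagation(A_hat, Y_init, labeled_mask, num_iters=50):
    """Classic label propagation: repeatedly average neighbor label
    distributions, resetting the rows of labeled nodes after each step.
    A_hat is a normalized adjacency matrix; Y_init holds one-hot labels
    with zero rows for unlabeled nodes; labeled_mask is a boolean vector."""
    Y = Y_init.copy()
    for _ in range(num_iters):
        Y = A_hat @ Y
        Y[labeled_mask] = Y_init[labeled_mask]  # clamp known labels
    return Y.argmax(axis=1)  # predicted class per node
```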

Deeper Insights into Graph Convolutional Networks for Semi-Supervised Learning

It is shown that the graph convolution of the GCN model is actually a special form of Laplacian smoothing, which is the key reason why GCNs work, but it also brings potential concerns of over-smoothing with many convolutional layers.
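The Laplacian-smoothing reading follows from a one-line identity: with the self-loop-augmented adjacency \(\tilde{A} = A + I\), its degree matrix \(\tilde{D}\), and Laplacian \(\tilde{L} = \tilde{D} - \tilde{A}\), the GCN propagation operator is exactly one step of symmetric Laplacian smoothing (a restatement of the connection the paper analyzes, not its full derivation):

```latex
\tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2} H
  \;=\; \left( I - \tilde{D}^{-1/2} \tilde{L}\, \tilde{D}^{-1/2} \right) H,
\qquad \tilde{L} = \tilde{D} - \tilde{A}.
```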

Dual Graph Convolutional Networks for Graph-Based Semi-Supervised Classification

This paper presents a simple and scalable semi-supervised learning method for graph-structured data in which only a very small portion of the training data are labeled, and introduces an unsupervised temporal loss function for the ensemble.

LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation

This work proposes a new model named LightGCN, including only the most essential component in GCN -- neighborhood aggregation -- for collaborative filtering, and is much easier to implement and train, exhibiting substantial improvements over Neural Graph Collaborative Filtering (NGCF) under exactly the same experimental setting.
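A compressed view of that essential component, assuming a normalized interaction adjacency `A_hat` and initial embeddings `E0` (names are placeholders, not LightGCN's released code):

```python
import numpy as np

def lightgcn_embeddings(A_hat, E0, num_layers=3):
    """LightGCN-style propagation: no feature transformation and no
    nonlinearity, just repeated neighborhood aggregation, with the
    final embedding taken as the mean over all layers' outputs."""
    layers = [E0]
    E = E0
    for _ in range(num_layers):
        E = A_hat @ E               # pure neighborhood aggregation
        layers.append(E)
    return np.mean(layers, axis=0)  # layer combination
```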

N-GCN: Multi-scale Graph Convolution for Semi-supervised Node Classification

The proposed N-GCN model improves state-of-the-art baselines on all of the challenging node classification tasks the authors consider: Cora, Citeseer, Pubmed, and PPI, and has other desirable properties, including generalization to recently proposed semi-supervised learning methods such as GraphSAGE, and resilience to adversarial input perturbations.

Towards Deeper Graph Neural Networks

This work provides a systematic empirical and theoretical analysis of the over-smoothing issue and proposes Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields when learning node representations.

An End-to-End Deep Learning Architecture for Graph Classification

This paper designs a localized graph convolution model and shows its connection with two graph kernels, and designs a novel SortPooling layer which sorts graph vertices in a consistent order so that traditional neural networks can be trained on the graphs.
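A rough sketch of the SortPooling idea, simplified here to sort by the last feature channel only (the actual layer breaks ties using earlier channels); `H` and `k` are hypothetical inputs:

```python
import numpy as np

def sort_pooling(H, k):
    """SortPooling-style layer: sort vertices by the last feature
    channel to impose a consistent ordering, then truncate or
    zero-pad to exactly k rows so a fixed-size network can follow."""
    order = np.argsort(-H[:, -1])        # descending by last channel
    H_sorted = H[order]
    n, d = H_sorted.shape
    if n >= k:
        return H_sorted[:k]
    return np.vstack([H_sorted, np.zeros((k - n, d))])
```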

Measuring and Relieving the Over-smoothing Problem for Graph Neural Networks from the Topological View

Two methods to alleviate the over-smoothing issue of GNNs are proposed: MADReg, which adds a MADGap-based regularizer to the training objective, and AdaEdge, which optimizes the graph topology based on the model predictions.

Predict then Propagate: Graph Neural Networks meet Personalized PageRank

This paper uses the relationship between graph convolutional networks (GCN) and PageRank to derive an improved propagation scheme based on personalized PageRank, and constructs a simple model, personalized propagation of neural predictions (PPNP), and its fast approximation, APPNP.
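The propagation step of PPNP/APPNP admits a very small sketch, assuming MLP predictions `H` and a normalized adjacency `A_hat`; `alpha` is the teleport (restart) probability of the underlying personalized PageRank:

```python
import numpy as np

def appnp_propagate(A_hat, H, alpha=0.1, K=10):
    """APPNP's approximate personalized-PageRank propagation:
    Z <- (1 - alpha) * A_hat @ Z + alpha * H, starting from the
    MLP predictions H and iterating K power steps."""
    Z = H
    for _ in range(K):
        Z = (1.0 - alpha) * (A_hat @ Z) + alpha * H
    return Z
```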

Graph Attention Networks

We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations.