Meta Propagation Networks for Graph Few-shot Semi-supervised Learning

@inproceedings{Ding2022MetaPN,
  title={Meta Propagation Networks for Graph Few-shot Semi-supervised Learning},
  author={Kaize Ding and Jianling Wang and James Caverlee and Huan Liu},
  booktitle={AAAI},
  year={2022}
}
Inspired by the extensive success of deep learning, graph neural networks (GNNs) have been proposed to learn expressive node representations and have demonstrated promising performance in various graph learning tasks. However, existing endeavors predominantly focus on the conventional semi-supervised setting, in which relatively abundant gold-labeled nodes are provided. This setting is often impractical, since data labeling is unbearably laborious and requires intensive domain knowledge…

Few-Shot Learning on Graphs
  • Chuxu Zhang, Kaize Ding, Huan Liu
  • Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, 2022
TLDR
This paper comprehensively surveys existing work on FSLG in terms of three major graph mining tasks at different granularity levels, i.e., node, edge, and graph.
Data Augmentation for Deep Graph Learning: A Survey
TLDR
A taxonomy for graph data augmentation is proposed and a structured review by categorizing the related work based on the augmented information modalities is provided, focusing on the two challenging problems in DGL (i.e., optimal graph learning and low-resource graph learning).
Structural and Semantic Contrastive Learning for Self-supervised Node Representation Learning
TLDR
This work proposes a simple yet effective framework, Simple Neural Networks with Structural and Semantic Contrastive Learning (S³-CL), which shows that even a simple neural network is able to learn expressive node representations that preserve valuable global structural and semantic patterns.

References

Showing 1–10 of 42 references
Graph Few-shot Learning via Knowledge Transfer
TLDR
This work innovatively proposes a graph few-shot learning (GFL) algorithm that incorporates prior knowledge learned from auxiliary graphs to improve classification accuracy on the target graph.
Graph Few-shot Learning with Attribute Matching
TLDR
The proposed AMM-GNN leverages an attribute-level attention mechanism to capture the distinct information of each task and thus learns more effective transferable knowledge for meta-learning.
Meta-GNN: On Few-shot Node Classification in Graph Meta-learning
TLDR
This work proposes a novel graph meta-learning framework -- Meta-GNN -- that obtains the prior knowledge of classifiers by training on many similar few-shot learning tasks and then classifies nodes from new classes with only a few labeled samples, learning a more general and flexible model for task adaptation.
Deeper Insights into Graph Convolutional Networks for Semi-Supervised Learning
TLDR
It is shown that the graph convolution of the GCN model is actually a special form of Laplacian smoothing, which is the key reason why GCNs work, but it also brings potential concerns of over-smoothing with many convolutional layers.
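The smoothing effect described above is easy to see numerically. The following is my own toy sketch (not the paper's code): propagating features with the GCN's normalized adjacency D^{-1/2}(A+I)D^{-1/2} pulls each node's features toward its neighbours', and stacking many such propagation steps makes all connected nodes nearly indistinguishable.

```python
import numpy as np

# Toy graph: triangle 0-1-2 plus a pendant node 3 attached to node 2.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(4)                     # add self-loops
d = A_hat.sum(axis=1)
A_norm = A_hat / np.sqrt(np.outer(d, d))  # D^{-1/2} (A+I) D^{-1/2}

H = np.array([[1.0], [0.0], [0.5], [2.0]])  # initial node features
for _ in range(50):                          # 50 "layers" of propagation
    H = A_norm @ H
# After heavy smoothing, H converges toward a multiple of sqrt(d):
# node features carry almost no discriminative information.
print(np.round(H.ravel() / np.sqrt(d), 6))
```

Up to a degree-dependent scaling, every node ends up with the same value: exactly the over-smoothing concern the paper raises for deep GCN stacks.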
Label Efficient Semi-Supervised Learning via Graph Filtering
TLDR
This paper proposes a graph filtering framework that injects graph similarity into data features by treating them as signals on the graph and applying a low-pass graph filter to extract useful representations for classification. Label efficiency can be achieved by conveniently adjusting the strength of the graph filter.
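A hedged sketch of the low-pass-filtering idea (the exact filter form and the `alpha` parameter here are illustrative, not necessarily the paper's choices): treat node features X as graph signals and smooth them with the filter (I + alpha·L)^{-1}, where L is the symmetric normalized Laplacian; larger alpha means stronger smoothing.

```python
import numpy as np

def low_pass_filter(A, X, alpha=2.0):
    # Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}.
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt
    # Apply the low-pass filter (I + alpha * L)^{-1} to the signal X;
    # each frequency component lambda is attenuated by 1/(1 + alpha*lambda).
    return np.linalg.solve(np.eye(len(A)) + alpha * L, X)

A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)   # a triangle graph
X = np.array([[1.0], [0.0], [-1.0]])     # a "high-frequency" signal
X_smooth = low_pass_filter(A, X)
print(np.linalg.norm(X_smooth) < np.linalg.norm(X))  # → True
```

The high-frequency signal shrinks after filtering, while a constant (low-frequency) signal would pass through unchanged; "filter strength" is just `alpha`.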
Data Augmentation for Deep Graph Learning: A Survey
TLDR
A taxonomy for graph data augmentation is proposed and a structured review by categorizing the related work based on the augmented information modalities is provided, focusing on the two challenging problems in DGL (i.e., optimal graph learning and low-resource graph learning).
On the Equivalence of Decoupled Graph Convolution Network and Label Propagation
TLDR
A new label propagation method named Propagation then Training Adaptively (PTA) is proposed, which overcomes the flaws of the decoupled GCN with a dynamic and adaptive weighting strategy.
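The decoupled "propagate, then train" view can be sketched in a few lines. This is my own minimal illustration of plain label propagation (PTA's dynamic, adaptive weighting is not reproduced here): labels spread over the graph via a personalized-PageRank-style iteration, entirely separate from any feature-based training.

```python
import numpy as np

def propagate_labels(A, Y, alpha=0.9, num_steps=50):
    """Iterate Z <- alpha * P @ Z + (1 - alpha) * Y with row-normalized P."""
    P = A / A.sum(axis=1, keepdims=True)  # random-walk transition matrix
    Z = Y.copy()
    for _ in range(num_steps):
        Z = alpha * (P @ Z) + (1 - alpha) * Y
    return Z

# Path graph 0-1-2-3: node 0 is labeled class 0, node 3 class 1.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Y = np.array([[1.0, 0.0],   # labeled: class 0
              [0.0, 0.0],   # unlabeled
              [0.0, 0.0],   # unlabeled
              [0.0, 1.0]])  # labeled: class 1
Z = propagate_labels(A, Y)
print(Z.argmax(axis=1))  # → [0 0 1 1]: each node adopts the nearer label
```

The equivalence result the paper discusses is that a decoupled GCN (features transformed once, then propagated) behaves like exactly this kind of propagation applied to soft predictions.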
Graph Prototypical Networks for Few-shot Learning on Attributed Networks
TLDR
By constructing a pool of semi-supervised node classification tasks to mimic the real test environment, GPN is able to perform meta-learning on an attributed network and derive a highly generalizable model for handling the target classification task.
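The prototypical-network core of this approach is compact enough to sketch (a generic illustration; GPN additionally weights prototypes by estimated node importance, which is omitted here): each class prototype is the mean embedding of its support nodes, and query nodes are assigned to the nearest prototype.

```python
import numpy as np

def classify_by_prototypes(support_emb, support_y, query_emb):
    classes = np.unique(support_y)
    # Prototype per class: mean of that class's support embeddings.
    protos = np.stack([support_emb[support_y == c].mean(axis=0)
                       for c in classes])
    # Squared Euclidean distance from every query to every prototype.
    dists = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return classes[dists.argmin(axis=1)]

support_emb = np.array([[0.0, 0.0], [0.2, 0.0],   # class 0 support nodes
                        [5.0, 5.0], [5.0, 5.2]])  # class 1 support nodes
support_y = np.array([0, 0, 1, 1])
query_emb = np.array([[0.1, 0.1], [4.9, 5.1]])
print(classify_by_prototypes(support_emb, support_y, query_emb))  # → [0 1]
```

In the meta-learning setup, many such small support/query episodes are sampled from the attributed network so that the embedding function generalizes to unseen classes.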
Simple and Deep Graph Convolutional Networks
TLDR
GCNII is proposed, an extension of the vanilla GCN model with two simple yet effective techniques, initial residual and identity mapping, that effectively relieve the problem of over-smoothing.
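A sketch of a GCNII-style layer, in my own paraphrase (the alpha/beta values are illustrative): H_{l+1} = ReLU( ((1-alpha)·A_norm·H_l + alpha·H_0) · ((1-beta)·I + beta·W_l) ). The initial residual reinjects the input features H_0 at every layer, and the identity mapping shrinks W_l toward I, so deep stacks stay close to pure propagation and resist over-smoothing.

```python
import numpy as np

def gcnii_layer(A_norm, H, H0, W, alpha=0.1, beta=0.5):
    # Initial residual: mix the current features with the layer-0 features.
    support = (1 - alpha) * (A_norm @ H) + alpha * H0
    # Identity mapping: interpolate the weight matrix toward the identity.
    out = support @ ((1 - beta) * np.eye(W.shape[0]) + beta * W)
    return np.maximum(out, 0.0)  # ReLU

# Tiny usage example: a 2-node graph with 4-dimensional features.
rng = np.random.default_rng(0)
A = np.array([[0, 1], [1, 0]], dtype=float)
A_hat = A + np.eye(2)
d = A_hat.sum(axis=1)
A_norm = A_hat / np.sqrt(np.outer(d, d))
H0 = rng.standard_normal((2, 4))
H1 = gcnii_layer(A_norm, H0, H0, rng.standard_normal((4, 4)))
print(H1.shape)  # → (2, 4)
```

With alpha = beta = 0 the layer degenerates to plain propagation plus ReLU, which makes the role of the two extra terms easy to isolate.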
Pseudo-Labeling and Confirmation Bias in Deep Semi-Supervised Learning
TLDR
This work shows that naive pseudo-labeling overfits to incorrect pseudo-labels due to so-called confirmation bias, and demonstrates that mixup augmentation and setting a minimum number of labeled samples per mini-batch are effective regularization techniques for reducing it.
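The mixup regularizer mentioned above is a standard technique and easy to sketch (this is the generic recipe, not the paper's full training loop): blend pairs of examples and their labels, which softens targets and damps the confirmation bias that comes from training on over-confident pseudo-labels.

```python
import numpy as np

def mixup(x1, y1, x2, y2, lam):
    """Convex combination of two examples and their (pseudo-)labels."""
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

rng = np.random.default_rng(0)
lam = rng.beta(0.4, 0.4)  # mixing coefficient drawn from Beta(alpha, alpha)
x_mix, y_mix = mixup(np.array([1.0, 0.0]), np.array([1.0, 0.0]),
                     np.array([0.0, 1.0]), np.array([0.0, 1.0]), lam)
print(y_mix.sum())  # mixed one-hot labels still sum to 1 → 1.0
```

Because the mixed targets are never fully one-hot, the network cannot drive its confidence on a wrong pseudo-label all the way to 1, which is precisely the regularizing effect the paper exploits.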