Sub-graph Contrast for Scalable Self-Supervised Graph Representation Learning

@inproceedings{Jiao2020SubgraphCF,
  title={Sub-graph Contrast for Scalable Self-Supervised Graph Representation Learning},
  author={Yizhu Jiao and Yun Xiong and Jiawei Zhang and Yao Zhang and Tianqi Zhang and Yangyong Zhu},
  booktitle={2020 IEEE International Conference on Data Mining (ICDM)},
  year={2020},
  pages={222-231}
}
Graph representation learning has attracted significant attention recently. Existing graph neural networks fed with the complete graph data are not scalable due to limited computation and memory resources. Thus, it remains a great challenge to capture rich information in large-scale graph data. Besides, these methods mainly focus on supervised learning and depend heavily on node label information, which is expensive to obtain in the real world. As to unsupervised network embedding approaches, they…
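As a concrete illustration of the subgraph-contrast idea named in the title, the following is a minimal PyTorch sketch, not the authors' implementation: the BFS sampler, mean-aggregation encoder, and margin loss below are simplified stand-ins (the paper samples context subgraphs by importance rather than BFS), chosen only to show how a central node can be contrasted against pooled subgraph summaries.

import torch
import torch.nn as nn
import torch.nn.functional as F

def sample_subgraph(adj_list, center, k=10):
    # Toy BFS sampler: collect up to k nodes around `center`.
    # (The paper uses importance-based sampling instead.)
    nodes, frontier = [center], [center]
    while frontier and len(nodes) < k:
        nxt = []
        for u in frontier:
            for v in adj_list[u]:
                if v not in nodes and len(nodes) < k:
                    nodes.append(v)
                    nxt.append(v)
        frontier = nxt
    return nodes

class SubgraphEncoder(nn.Module):
    # One round of mean-neighbor aggregation, then a linear map.
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(2 * in_dim, out_dim)

    def forward(self, x, adj_list, nodes):
        hs = []
        for u in nodes:
            nbrs = [v for v in adj_list[u] if v in nodes]
            agg = x[nbrs].mean(0) if nbrs else torch.zeros_like(x[u])
            hs.append(self.lin(torch.cat([x[u], agg])))
        return F.relu(torch.stack(hs))  # (len(nodes), out_dim)

def margin_contrast(center_h, own_summary, other_summary, margin=0.75):
    # Pull a central node toward its own subgraph summary and push it
    # away from another subgraph's summary.
    pos = torch.sigmoid((center_h * own_summary).sum())
    neg = torch.sigmoid((center_h * other_summary).sum())
    return F.relu(neg - pos + margin)

# Toy usage: a 6-node path graph with random 8-d features.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
x = torch.randn(6, 8)
enc = SubgraphEncoder(8, 16)
sub_a, sub_b = sample_subgraph(adj, 0, k=4), sample_subgraph(adj, 5, k=4)
h_a, h_b = enc(x, adj, sub_a), enc(x, adj, sub_b)
loss = margin_contrast(h_a[0], h_a.mean(0), h_b.mean(0))
loss.backward()

Because each training step touches only the sampled subgraphs rather than the full graph, memory scales with the subgraph size, which is the source of the scalability claim in the abstract.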

Citations

Graph Self-Supervised Learning: A Survey

A timely and comprehensive review of existing approaches that employ SSL techniques for graph data is given, and a unified framework that mathematically formalizes the paradigm of graph SSL is constructed.

Self-Supervised Learning of Graph Neural Networks: A Unified Review

A unified review of different ways of training GNNs with SSL is provided, categorizing methods into contrastive and predictive models; it sheds light on the similarities and differences of various methods, setting the stage for developing new methods and algorithms.

Drug Target Prediction Using Graph Representation Learning via Substructures Contrast

This work merges heterogeneous graph information, obtains effective node and substructure information based on mutual information in graph embeddings, and proposes an end-to-end auto-encoder model to complete the task of link prediction.

Social Influence Prediction with Train and Test Time Augmentation for Graph Neural Networks

The experimental results show that the end-to-end approach, which jointly trains a graph autoencoder and a social influence behaviour classification network, can outperform state-of-the-art approaches, demonstrating the effectiveness of train- and test-time augmentation on graph neural networks for social influence prediction.

Quantifying and Mitigating Privacy Risks of Contrastive Learning

The first privacy analysis of contrastive learning through the lens of membership inference and attribute inference is performed, and it is shown that contrastive models trained on image datasets are less vulnerable to membership inference attacks but more vulnerable to attribute inference attacks compared to supervised models.

Generative Subgraph Contrast for Self-Supervised Graph Representation Learning

This paper proposes a novel adaptive subgraph-generation-based contrastive learning framework for efficient and robust self-supervised graph representation learning, utilizing the optimal transport distance as the similarity metric between subgraphs.
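For readers unfamiliar with the similarity metric mentioned above, a generic entropic-regularized Sinkhorn sketch follows. It is a standard construction, not the paper's exact formulation; the Euclidean cost matrix and uniform node weights are illustrative assumptions.

import torch

def sinkhorn_ot(cost, a, b, reg=0.1, iters=100):
    # Entropic-regularized optimal transport via Sinkhorn iterations.
    # `cost`: (n, m) pairwise cost between two subgraphs' node embeddings;
    # `a`, `b`: node weight vectors (e.g., uniform) summing to 1.
    K = torch.exp(-cost / reg)
    u = torch.ones_like(a)
    for _ in range(iters):
        v = b / (K.t() @ u)
        u = a / (K @ v)
    plan = u.unsqueeze(1) * K * v.unsqueeze(0)   # transport plan
    return (plan * cost).sum()                   # OT distance estimate

# Toy usage with uniform node weights on 4- and 5-node subgraphs.
h1, h2 = torch.randn(4, 16), torch.randn(5, 16)
cost = torch.cdist(h1, h2)                       # Euclidean cost matrix
d = sinkhorn_ot(cost, torch.full((4,), 0.25), torch.full((5,), 0.2))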

Self-Supervised Dynamic Graph Representation Learning via Temporal Subgraph Contrast

A self-supervised dynamic graph representation learning framework (DySubC) is proposed, which defines a temporal subgraph contrastive learning task to simultaneously learn the structural and evolutional features of a dynamic graph.

A Simple Yet Effective Pretraining Strategy for Graph Few-shot Learning

This work proposes a simple transductive fine-tuning based framework as a new paradigm for graph few-shot learning, together with a supervised contrastive learning framework with data augmentation strategies for few-shot node classification that improves the extrapolation of a GNN encoder.

Self-Supervised Representation Learning via Latent Graph Prediction

LaGraph, a theoretically grounded predictive SSL framework based on latent graph prediction, is proposed; experiments demonstrate the superiority of LaGraph in performance and its robustness to decreasing training sample size on both graph-level and node-level tasks.

Towards Graph Self-Supervised Learning with Contrastive Adjusted Zooming

This work introduces G-Zoom, a novel self-supervised graph representation learning algorithm based on graph contrastive adjusted zooming, which learns node representations by leveraging the proposed adjusted zooming scheme; experiments demonstrate that the model consistently outperforms state-of-the-art methods.
...

References

Showing 1-10 of 37 references.

Graph-Bert: Only Attention is Needed for Learning Graph Representations

This paper introduces a new graph neural network, namely Graph-BERT (Graph-based BERT), based solely on the attention mechanism without any graph convolution or aggregation operators, which can outperform existing GNNs in both learning effectiveness and efficiency.

Deep Graph Infomax

Deep Graph Infomax (DGI) is presented, a general approach for learning node representations within graph-structured data in an unsupervised manner that is readily applicable to both transductive and inductive learning setups.
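The DGI objective can be condensed into a few lines; the sketch below assumes a generic `encoder(x, adj)` returning node embeddings, uses feature row-shuffling as the corruption function (one of the corruptions suggested in the DGI paper), and treats the bilinear discriminator weight `W` as a learnable parameter.

import torch
import torch.nn.functional as F

def dgi_loss(encoder, x, adj, W):
    # Positives: real node embeddings; negatives: embeddings of a
    # corrupted graph (here, features shuffled across nodes).
    h_pos = encoder(x, adj)
    h_neg = encoder(x[torch.randperm(x.size(0))], adj)
    s = torch.sigmoid(h_pos.mean(0))          # readout: whole-graph summary
    pos_logits = h_pos @ W @ s                # bilinear discriminator
    neg_logits = h_neg @ W @ s
    logits = torch.cat([pos_logits, neg_logits])
    labels = torch.cat([torch.ones_like(pos_logits),
                        torch.zeros_like(neg_logits)])
    return F.binary_cross_entropy_with_logits(logits, labels)

Note the contrast with the subgraph approach above: DGI contrasts every node against a summary of the entire graph, which requires holding the full graph in memory.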

Representation Learning on Graphs: Methods and Applications

A conceptual review of key advancements in representation learning on graphs, including matrix factorization-based methods, random-walk based algorithms, and graph neural networks, is provided.

Graph Attention Networks

We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations.

Graph Representation Learning via Graphical Mutual Information Maximization

An unsupervised learning model trained by maximizing GMI between the input and output of a graph neural encoder is developed, which outperforms state-of-the-art unsupervised counterparts and even sometimes exceeds the performance of supervised ones.

GraphSAINT: Graph Sampling Based Inductive Learning Method

GraphSAINT is proposed, a graph sampling based inductive learning method that improves training efficiency in a fundamentally different way: it decouples the sampling process from the forward and backward propagation of training, and it is further extended with other graph samplers and GCN variants.

Inductive Representation Learning on Large Graphs

GraphSAGE is presented, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data and outperforms strong baselines on three inductive node-classification benchmarks.
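A minimal sketch of the GraphSAGE mean aggregator with fixed-size neighbor sampling follows; the sample size and single-layer form are illustrative simplifications of the framework described above.

import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class SAGELayer(nn.Module):
    # GraphSAGE-style update: concat(self, mean of sampled neighbors),
    # then a linear map and L2 normalization, as in the paper.
    def __init__(self, in_dim, out_dim, num_samples=5):
        super().__init__()
        self.lin = nn.Linear(2 * in_dim, out_dim)
        self.num_samples = num_samples

    def forward(self, h, adj_list):
        out = []
        for u in range(h.size(0)):
            nbrs = adj_list[u]
            if len(nbrs) > self.num_samples:
                nbrs = random.sample(nbrs, self.num_samples)
            agg = h[nbrs].mean(0) if nbrs else torch.zeros_like(h[u])
            out.append(self.lin(torch.cat([h[u], agg])))
        return F.normalize(torch.stack(out), dim=1)

Because the aggregator is learned over sampled neighborhoods rather than fixed node identities, the same layer can embed nodes that were never seen during training, which is what makes the method inductive.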

FastGCN: Fast Learning with Graph Convolutional Networks via Importance Sampling

Enhanced with importance sampling, FastGCN is not only efficient for training but also generalizes well for inference; it is orders of magnitude more efficient while predictions remain comparably accurate.
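A hedged sketch of the layer-wise importance sampling at the heart of FastGCN: each layer's aggregation A·H is replaced by a Monte Carlo estimate over nodes drawn with probability proportional to the squared column norms of the normalized adjacency matrix. The dense-matrix form and sample size below are simplifying assumptions.

import torch

def fastgcn_layer(h, adj, weight, num_samples=64):
    # Importance distribution q(v) proportional to ||adj[:, v]||^2,
    # which is FastGCN's variance-reducing choice.
    col_sq = (adj ** 2).sum(0)
    q = col_sq / col_sq.sum()
    idx = torch.multinomial(q, num_samples, replacement=True)
    # Unbiased Monte Carlo estimate of adj @ h over sampled columns:
    # adj @ h = E_{v~q}[ adj[:, v] h[v] / q(v) ].
    scale = 1.0 / (num_samples * q[idx])
    approx = adj[:, idx] @ (h[idx] * scale.unsqueeze(1))
    return torch.relu(approx @ weight)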

How Powerful are Graph Neural Networks?

This work characterizes the discriminative power of popular GNN variants, such as Graph Convolutional Networks and GraphSAGE, shows that they cannot learn to distinguish certain simple graph structures, and develops a simple architecture that is provably the most expressive among the class of GNNs.
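The provably most expressive architecture referenced above is GIN; a compact sketch of its injective sum aggregation follows, with a two-layer MLP as an illustrative choice.

import torch
import torch.nn as nn

class GINLayer(nn.Module):
    # GIN update: h_u <- MLP((1 + eps) * h_u + sum over neighbors of h_v).
    # Sum aggregation (unlike mean or max) preserves multiset information,
    # which underlies GIN's expressive power.
    def __init__(self, dim):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                 nn.Linear(dim, dim))

    def forward(self, h, adj):
        # `adj` is a dense (n, n) 0/1 adjacency matrix, so adj @ h sums
        # each node's neighbor embeddings.
        return self.mlp((1 + self.eps) * h + adj @ h)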

Collective Link Prediction Oriented Network Embedding with Hierarchical Graph Attention

This paper proposes an application-oriented network embedding framework, Hierarchical Graph Attention based Network Embedding (HGANE), for collective link prediction over directed aligned networks. It introduces a hierarchical graph attention mechanism for intra-network neighbors and inter-network partners respectively, which effectively resolves the differences in network characteristics and the link directivity challenges.