Corpus ID: 246634115

Graph Self-supervised Learning with Accurate Discrepancy Learning

@article{Kim2022GraphSL,
  title={Graph Self-supervised Learning with Accurate Discrepancy Learning},
  author={Dongki Kim and Jinheon Baek and Sung Ju Hwang},
  journal={ArXiv},
  year={2022},
  volume={abs/2202.02989}
}
Self-supervised learning of graph neural networks (GNNs) aims to learn accurate representations of graphs in an unsupervised manner, yielding transferable representations for diverse downstream tasks. Predictive learning and contrastive learning are the two most prevalent approaches to graph self-supervised learning, but each has its own drawbacks. While predictive learning methods can learn the contextual relationships between neighboring nodes and edges, they…

Citations

On modeling and utilizing chemical compound information with deep learning technologies: A task-oriented approach

References

Showing 1-10 of 56 references

Hierarchical Graph Representation Learning with Differentiable Pooling

This paper proposes DiffPool, a differentiable graph pooling module that generates hierarchical representations of graphs and can be combined with various graph neural network architectures in an end-to-end fashion.
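
To make the pooling operation concrete, here is a minimal PyTorch sketch of one DiffPool coarsening step, assuming a dense adjacency matrix and a given cluster-assignment matrix (in the paper the assignment comes from a separate GNN); all dimensions are illustrative.

```python
import torch

def diffpool(x, adj, s_logits):
    """One DiffPool coarsening step on a dense graph.

    x:        (n, d) node features
    adj:      (n, n) dense adjacency matrix
    s_logits: (n, k) unnormalized cluster-assignment scores
              (produced by a separate GNN in the paper)
    Returns pooled features (k, d) and pooled adjacency (k, k).
    """
    s = torch.softmax(s_logits, dim=-1)  # soft assignment matrix S
    x_pooled = s.t() @ x                 # X' = S^T X
    adj_pooled = s.t() @ adj @ s         # A' = S^T A S
    return x_pooled, adj_pooled

# toy usage: 5 nodes with 3-dim features pooled into 2 clusters
x = torch.randn(5, 3)
adj = (torch.rand(5, 5) > 0.5).float()
adj = ((adj + adj.t()) > 0).float()      # symmetrize
x2, adj2 = diffpool(x, adj, torch.randn(5, 2))
print(x2.shape, adj2.shape)              # torch.Size([2, 3]) torch.Size([2, 2])
```

Because the assignment is soft, gradients flow through the pooling step, which is what makes the hierarchy learnable end-to-end.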

Self-supervised Graph-level Representation Learning with Local and Global Structure

This paper presents a unified framework called Local-instance and Global-semantic Learning (GraphLoG) for self-supervised whole-graph representation learning; besides preserving local similarities, GraphLoG introduces hierarchical prototypes to capture global semantic clusters.

Graph Contrastive Learning with Adaptive Augmentation

This paper proposes a novel graph contrastive representation learning method with adaptive augmentation that incorporates various priors for topological and semantic aspects of the graph, and shows that it consistently outperforms existing state-of-the-art baselines and even surpasses some supervised counterparts.
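
As a rough illustration of adaptive augmentation, the sketch below drops edges with probability weighted by a degree-based centrality score, so that structurally important edges are perturbed less often. This is a loose approximation of one of the paper's centrality variants; the normalization used here (`p_max`, the log-degree score) is an assumption rather than the paper's exact scheme.

```python
import torch

def adaptive_edge_drop(edge_index, num_nodes, p_max=0.7):
    """Drop edges with probability weighted by (inverse) centrality.

    edge_index: (2, E) long tensor of endpoint node ids.
    Low-centrality edges are dropped more often, so the augmentation
    tends to preserve structurally important connections.
    """
    ones = torch.ones(edge_index.size(1))
    deg = torch.zeros(num_nodes)
    deg.scatter_add_(0, edge_index[0], ones)
    deg.scatter_add_(0, edge_index[1], ones)
    # edge centrality: mean log-degree of the two endpoints
    ec = 0.5 * (torch.log1p(deg[edge_index[0]]) + torch.log1p(deg[edge_index[1]]))
    # map centrality to a drop probability in [0, p_max]:
    # the lowest-centrality edge gets p_max, the highest gets 0
    p_drop = p_max * (ec.max() - ec) / (ec.max() - ec.min() + 1e-9)
    keep = torch.rand(edge_index.size(1)) >= p_drop
    return edge_index[:, keep]

# toy usage: a 4-node graph with 4 directed edges
edge_index = torch.tensor([[0, 1, 2, 3],
                           [1, 2, 3, 0]])
print(adaptive_edge_drop(edge_index, num_nodes=4))
```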

Self-supervised Auxiliary Learning with Meta-paths for Heterogeneous Graphs

This paper proposes a novel self-supervised auxiliary learning method using meta-paths, which are composite relations of multiple edge types; the approach can be viewed as a form of meta-learning for graph neural networks on heterogeneous graphs.

InfoGraph: Unsupervised and Semi-supervised Graph-Level Representation Learning via Mutual Information Maximization

Experimental results on graph classification and molecular property prediction show that InfoGraph is superior to state-of-the-art baselines and that InfoGraph* achieves performance competitive with state-of-the-art semi-supervised models.
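
The core of InfoGraph's objective is maximizing mutual information between graph-level and node-level ("patch") representations. Below is a hedged sketch using a Jensen-Shannon-style estimator with a plain dot-product discriminator; the function name and the dot-product scorer are illustrative simplifications of the paper's setup.

```python
import torch
import torch.nn.functional as F

def infograph_loss(node_z, graph_z, batch):
    """Jensen-Shannon-style global-local MI objective (sketch).

    node_z:  (N, d) node ("patch") embeddings
    graph_z: (G, d) graph-level summaries
    batch:   (N,)   graph id of each node
    Positive pairs are nodes with their own graph's summary; all
    other (node, graph) combinations in the batch act as negatives.
    """
    scores = node_z @ graph_z.t()                    # (N, G) discriminator scores
    pos_mask = F.one_hot(batch, num_classes=graph_z.size(0)).float()
    neg_mask = 1.0 - pos_mask
    pos_term = (F.softplus(-scores) * pos_mask).sum() / pos_mask.sum()
    neg_term = (F.softplus(scores) * neg_mask).sum() / neg_mask.sum()
    return pos_term + neg_term

# toy usage: 10 nodes spread over 3 graphs, 16-dim embeddings
node_z = torch.randn(10, 16)
graph_z = torch.randn(3, 16)
batch = torch.tensor([0, 0, 0, 0, 1, 1, 1, 2, 2, 2])
print(infograph_loss(node_z, graph_z, batch))
```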

Graph Contrastive Learning Automated

This paper proposes a unified bilevel optimization framework to automatically, adaptively, and dynamically select data augmentations when performing GraphCL on specific graph data, together with a new augmentation-aware projection head mechanism that routes output features through different projection heads corresponding to the augmentations chosen at each training step.
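
The augmentation-aware projection head can be sketched as a small bank of MLP heads indexed by the augmentation chosen at the current step; the sketch below assumes one head per discrete augmentation type, with dimensions picked for illustration.

```python
import torch
import torch.nn as nn

class AugAwareProjection(nn.Module):
    """One projection head per augmentation type (sketch).

    Embeddings produced under augmentation `aug_id` are routed through
    the matching head, so each head specializes to one augmentation.
    """

    def __init__(self, dim=128, num_augs=4):
        super().__init__()
        self.heads = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
             for _ in range(num_augs)]
        )

    def forward(self, z, aug_id):
        # z: (batch, dim) graph embeddings, all from augmentation `aug_id`
        return self.heads[aug_id](z)

proj = AugAwareProjection()
z = torch.randn(8, 128)
out = proj(z, aug_id=2)  # route through the head paired with augmentation 2
```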

Graph Matching Networks for Learning the Similarity of Graph Structured Objects

This paper proposes a novel Graph Matching Network model that, given a pair of graphs as input, computes a similarity score between them by jointly reasoning on the pair through a new cross-graph attention-based matching mechanism.
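
The cross-graph attention step can be written compactly: each node soft-matches against all nodes of the other graph, and the matching message is the difference between the node and its attention-weighted counterpart. A minimal sketch using dot-product attention (the paper allows other similarity functions):

```python
import torch

def cross_graph_attention(h1, h2):
    """Cross-graph matching messages from GMN (sketch).

    h1: (n1, d), h2: (n2, d) node representations of the two graphs.
    Each node attends over all nodes of the other graph; the message
    is the difference between the node and its soft-matched counterpart.
    """
    scores = h1 @ h2.t()                    # (n1, n2) dot-product similarities
    a12 = torch.softmax(scores, dim=1)      # graph-1 nodes attending over graph 2
    a21 = torch.softmax(scores.t(), dim=1)  # graph-2 nodes attending over graph 1
    mu1 = h1 - a12 @ h2                     # h_i - sum_j a_ij * h_j
    mu2 = h2 - a21 @ h1
    return mu1, mu2

# toy usage: graphs with 5 and 7 nodes, 16-dim node states
mu1, mu2 = cross_graph_attention(torch.randn(5, 16), torch.randn(7, 16))
print(mu1.shape, mu2.shape)  # torch.Size([5, 16]) torch.Size([7, 16])
```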

Graph Contrastive Learning with Augmentations

The results show that, even without tuning augmentation extents or using sophisticated GNN architectures, the GraphCL framework can produce graph representations with similar or better generalizability, transferability, and robustness compared to state-of-the-art methods.
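
GraphCL trains by contrasting two augmented views of the same graph with an NT-Xent loss. The sketch below is a simplified variant that uses only cross-view negatives within the batch; the batch size, embedding dimension, and temperature are illustrative.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    """Simplified NT-Xent loss over two augmented views (sketch).

    z1[i] and z2[i] embed two augmentations of graph i and form the
    positive pair; the other graphs in the batch act as negatives.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / tau                 # (B, B) scaled cosine similarities
    targets = torch.arange(z1.size(0))      # positives sit on the diagonal
    # symmetrize over the two view orderings
    return 0.5 * (F.cross_entropy(sim, targets) + F.cross_entropy(sim.t(), targets))

# toy usage: a batch of 16 graphs embedded in 64 dimensions
print(nt_xent(torch.randn(16, 64), torch.randn(16, 64)))
```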

How to Find Your Friendly Neighborhood: Graph Attention Design with Self-Supervision

This paper proposes SuperGAT, a self-supervised graph attention network and an improved graph attention model for noisy graphs; its attention design recipe generalizes across 15 datasets, and models designed by the recipe show improved performance over baselines.

Large-Scale Representation Learning on Graphs via Bootstrapping

Bootstrapped Graph Latents (BGRL), a graph representation learning method that learns by predicting alternative augmentations of the input, is introduced; it is scalable by design and can be scaled up to extremely large graphs with hundreds of millions of nodes in the semi-supervised regime.
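
Bootstrapping here means an online encoder is trained to predict the output of a slowly moving target encoder, which is updated by an exponential moving average rather than by gradients, so no negative samples are needed. A minimal sketch of the EMA update (the `Linear` stand-in for the GNN encoder and the decay value are illustrative):

```python
import copy
import torch

@torch.no_grad()
def ema_update(online, target, decay=0.99):
    """Move the target encoder toward the online encoder by EMA.

    The target is never updated by gradients; it provides the
    prediction targets that make negatives unnecessary.
    """
    for p_o, p_t in zip(online.parameters(), target.parameters()):
        p_t.mul_(decay).add_(p_o, alpha=1.0 - decay)

online = torch.nn.Linear(32, 32)   # stand-in for the online GNN encoder
target = copy.deepcopy(online)     # target starts as a copy of the online net
for p in target.parameters():
    p.requires_grad_(False)
ema_update(online, target)
```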
...