Boosting Graph Structure Learning with Dummy Nodes

@inproceedings{Liu2022BoostingGS,
  title={Boosting Graph Structure Learning with Dummy Nodes},
  author={Xin Liu and Jiayang Cheng and Yangqiu Song and Xin Jiang},
  booktitle={International Conference on Machine Learning},
  year={2022}
}
With the development of graph kernels and graph representation learning, many methods have been proposed to handle scalability and over-smoothing issues in graph structure learning. However, most of these strategies are designed based on practical experience rather than theoretical analysis. In this paper, we use a particular dummy node connecting to all existing vertices without affecting original vertex and edge properties. We further prove that such a dummy node can help build an…
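
The construction itself is simple to state. Below is a minimal sketch of the augmentation described in the abstract, assuming a networkx graph; the node label and attribute names are illustrative, not from the paper's implementation.

import networkx as nx

def add_dummy_node(g: nx.Graph) -> nx.Graph:
    """Return a copy of g with one dummy node connected to every original vertex.

    Original vertex and edge properties are left untouched; only the new dummy
    vertex and its incident edges are added.
    """
    h = g.copy()
    dummy = "dummy"  # assumes no existing node already uses this label
    h.add_node(dummy, is_dummy=True)
    for v in g.nodes:  # connect the dummy node to every original vertex
        h.add_edge(dummy, v, is_dummy=True)
    return h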

Citations

Natural and Artificial Dynamics in Graphs: Concept, Progress, and Future

The definitions of natural dynamics and artificial dynamics in graphs are introduced, and related work on how natural and artificial dynamics boost the aforementioned graph research topics is discussed.

Predicting Protein-Ligand Binding Affinity with Equivariant Line Graph Network

Experimental results on two real datasets demonstrate that ELGN surpasses several state-of-the-art baselines in both effectiveness and generalizability.

References

Showing 1–10 of 44 references

Hierarchical Graph Pooling with Structure Learning

A novel graph pooling operator, Hierarchical Graph Pooling with Structure Learning (HGP-SL), is proposed; it can be integrated into various graph neural network architectures and introduces a structure learning mechanism that learns a refined graph structure for the pooled graph at each layer.

Hierarchical Graph Representation Learning with Differentiable Pooling

DiffPool, a differentiable graph pooling module that can generate hierarchical representations of graphs and can be combined with various graph neural network architectures in an end-to-end fashion, is proposed.
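
The mechanism behind DiffPool can be sketched in a few lines: an assignment GNN produces a soft cluster membership for every node, and that assignment matrix coarsens both features and adjacency. A minimal NumPy illustration of one pooling step (placeholder inputs; in the real module Z and S come from two learned GNNs trained end-to-end):

import numpy as np

rng = np.random.default_rng(0)
n, d, k = 6, 4, 2                      # nodes, feature dim, clusters after pooling
A = rng.random((n, n))                 # adjacency matrix (placeholder values)
Z = rng.random((n, d))                 # node embeddings from an embedding GNN
S_logits = rng.random((n, k))          # scores from an assignment GNN
S = np.exp(S_logits) / np.exp(S_logits).sum(axis=1, keepdims=True)  # row-wise softmax

X_pooled = S.T @ Z                     # (k, d): features of the coarsened graph
A_pooled = S.T @ A @ S                 # (k, k): adjacency of the coarsened graph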

How Powerful are Graph Neural Networks?

This work characterizes the discriminative power of popular GNN variants, such as Graph Convolutional Networks and GraphSAGE, shows that they cannot learn to distinguish certain simple graph structures, and develops a simple architecture that is provably the most expressive in the class of GNNs.
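
The architecture developed there, the Graph Isomorphism Network (GIN), uses the update h_v <- MLP((1 + eps) * h_v + sum of neighbor features). A rough NumPy sketch of one layer, with a single linear map plus ReLU standing in for the MLP:

import numpy as np

def gin_layer(A: np.ndarray, H: np.ndarray, W: np.ndarray, eps: float = 0.0) -> np.ndarray:
    # A is the (n, n) adjacency matrix, H the (n, d) node features.
    aggregated = (1.0 + eps) * H + A @ H    # A @ H sums features over each node's neighbors
    return np.maximum(aggregated @ W, 0.0)  # linear map + ReLU as a minimal stand-in MLP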

Composition-based Multi-Relational Graph Convolutional Networks

This paper proposes CompGCN, a novel graph convolutional framework that jointly embeds both nodes and relations in a relational graph, leverages a variety of entity-relation composition operations from knowledge graph embedding techniques, and scales with the number of relations.
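
The composition operations referenced here are the standard ones from knowledge graph embedding models: subtraction as in TransE, element-wise multiplication as in DistMult, and circular correlation as in HolE. A small sketch (the function name is illustrative):

import numpy as np

def compose(e_s: np.ndarray, e_r: np.ndarray, op: str = "sub") -> np.ndarray:
    # Combine a neighboring-entity vector e_s with its relation vector e_r.
    if op == "sub":   # subtraction (TransE-style)
        return e_s - e_r
    if op == "mult":  # element-wise multiplication (DistMult-style)
        return e_s * e_r
    if op == "corr":  # circular correlation (HolE-style), computed via FFT
        return np.fft.irfft(np.conj(np.fft.rfft(e_s)) * np.fft.rfft(e_r), n=e_s.size)
    raise ValueError(f"unknown composition op: {op}")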

Graph Convolutional Networks with Dual Message Passing for Subgraph Isomorphism Counting and Matching

It is proved that searching for isomorphisms on the original graph is equivalent to searching on its dual graph, and dual message passing neural networks (DMPNNs) are proposed to enhance substructure representation learning in an asynchronous way for subgraph isomorphism counting and matching, as well as unsupervised node classification.
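
The edge-to-node idea underlying the dual view can be illustrated with the plain line graph, where each edge of the original graph becomes a vertex and two such vertices are adjacent when the original edges share an endpoint (the paper's construction for directed, labeled graphs is more involved):

import networkx as nx

g = nx.path_graph(4)       # edges: (0, 1), (1, 2), (2, 3)
dual = nx.line_graph(g)    # each edge of g becomes a node of the dual graph
print(sorted(dual.nodes))  # [(0, 1), (1, 2), (2, 3)]
print(sorted(dual.edges))  # [((0, 1), (1, 2)), ((1, 2), (2, 3))]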

Deeper Insights into Graph Convolutional Networks for Semi-Supervised Learning

It is shown that the graph convolution of the GCN model is actually a special form of Laplacian smoothing, which is the key reason why GCNs work but which also brings potential concerns of over-smoothing when many convolutional layers are stacked.
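
The observation can be made concrete: with self-loops added, one GCN propagation step (weights and nonlinearity aside) is H' = D^{-1/2} (A + I) D^{-1/2} H, where D is the degree matrix of A + I, a symmetric form of Laplacian smoothing. A minimal NumPy sketch; repeated application drives neighboring features toward each other, which is the over-smoothing concern:

import numpy as np

def gcn_smoothing(A: np.ndarray, H: np.ndarray) -> np.ndarray:
    # One GCN-style propagation step without weights or nonlinearity.
    A_tilde = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))  # degree^(-1/2) of self-looped graph
    return (A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]) @ H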

Heterogeneous Graph Transformer

The proposed HGT model consistently outperforms all state-of-the-art GNN baselines by 9%–21% on various downstream tasks, and a heterogeneous mini-batch graph sampling algorithm, HGSampling, is designed for efficient and scalable training.

Node2Seq: Towards Trainable Convolutions in Graph Neural Networks

Experimental results demonstrate the effectiveness of the proposed Node2Seq layer and show that the proposed adaptive non-local information learning can improve feature learning performance.

Inductive Representation Learning on Large Graphs

GraphSAGE, a general inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data, is presented; it outperforms strong baselines on three inductive node-classification benchmarks.
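
The inductive step in GraphSAGE is easy to sketch: sample a fixed number of neighbors, aggregate their features (mean aggregation in the simplest variant), concatenate with the node's own features, and apply a learned linear map. A minimal NumPy version, with illustrative names:

import numpy as np

def sage_layer(H, neighbors, W, rng, num_samples=5):
    # H: (n, d) node features; neighbors: list of neighbor-index lists;
    # W: (2d, d') weight matrix; rng: a numpy Generator for neighbor sampling.
    rows = []
    for v, nbrs in enumerate(neighbors):
        if nbrs:
            k = min(num_samples, len(nbrs))
            sampled = rng.choice(nbrs, size=k, replace=False)
            agg = H[sampled].mean(axis=0)         # mean aggregator over sampled neighbors
        else:
            agg = np.zeros(H.shape[1])            # isolated node: empty neighborhood
        rows.append(np.concatenate([H[v], agg]) @ W)
    return np.maximum(np.stack(rows), 0.0)        # ReLU nonlinearity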

Neural Subgraph Isomorphism Counting

Experimental results show that learning-based subgraph isomorphism counting can speed up the traditional VF2 algorithm by 10–1,000 times with acceptable errors, and domain adaptation based on fine-tuning also shows the usefulness of the approach in real-world applications.