Approximate Graph Propagation

@inproceedings{Wang2021ApproximateGP,
  title={Approximate Graph Propagation},
  author={Hanzhi Wang and Mingguo He and Zhewei Wei and Sibo Wang and Ye Yuan and Xiaoyong Du and Ji-Rong Wen},
  booktitle={Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery \& Data Mining},
  year={2021}
}
  • Published 6 June 2021
Efficient computation of node proximity queries such as transition probabilities, Personalized PageRank, and Katz is of fundamental importance in various graph mining and learning tasks. In particular, several recent works leverage fast node proximity computation to improve the scalability of Graph Neural Networks (GNNs). However, prior studies on proximity computation and GNN feature propagation proceed on a case-by-case basis, with each paper focusing on a particular proximity measure. In this… 
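The unifying view suggested by the abstract — that transition probabilities, Personalized PageRank, and Katz can all be seen as weighted sums of powers of a propagation matrix applied to the source — can be illustrated with a minimal sketch. The `propagate` helper, the toy graph, and the truncation length below are illustrative assumptions, not the paper's actual algorithm:

```python
def propagate(adj, source, weights):
    """Compute sum_k weights[k] * P^k * e_source on an unweighted graph,
    where P is the random-walk transition matrix (row-normalized adjacency).
    Different weight sequences recover different proximity measures."""
    frontier = {source: 1.0}  # current vector P^k * e_source
    result = {}
    for w in weights:
        for node, mass in frontier.items():
            result[node] = result.get(node, 0.0) + w * mass
        nxt = {}
        for node, mass in frontier.items():
            deg = len(adj[node])
            for nb in adj[node]:
                nxt[nb] = nxt.get(nb, 0.0) + mass / deg
        frontier = nxt
    return result

# Toy 4-node cycle graph (illustrative only).
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
alpha = 0.2
# Truncated PPR series: w_k = alpha * (1 - alpha)^k; a Katz-style measure
# would instead use w_k = beta^k over the raw adjacency.
ppr_weights = [alpha * (1 - alpha) ** k for k in range(50)]
pi = propagate(adj, source=0, weights=ppr_weights)
```

Because each `P^k * e_source` sums to 1, the total mass in `pi` equals the sum of the weights, which is one quick sanity check on the sketch.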

Citations

SCARA: Scalable Graph Neural Networks with Feature-Oriented Optimization
TLDR
Theoretical analysis indicates that SCARA achieves sub-linear time complexity with guaranteed precision in the propagation process as well as in GNN training and inference, and that it completes precomputation on the largest available billion-scale GNN dataset, Papers100M, in 100 seconds.
Instant Graph Neural Networks for Dynamic Graphs
TLDR
This paper proposes Instant Graph Neural Network (InstantGNN), an incremental approach for computing the graph representation matrix of dynamic graphs under the edge-arrival model, which avoids time-consuming, repetitive computations and allows instant updates to the representation and instant predictions.
Accurate and Scalable Graph Neural Networks for Billion-Scale Graphs
TLDR
This paper proposes a novel scalable and effective GNN framework, COSAL, which substitutes the expensive aggregation with an efficient proximate node selection mechanism that picks out the most important nodes for each target node according to the graph topology, and proposes a fine-grained neighbor importance quantification strategy to enhance the expressive power of COSAL.
Self-supervised end-to-end graph local clustering
  • Zhe Yuan
  • World Wide Web, 2022
Efficient Personalized PageRank Computation: A Spanning Forests Sampling Based Approach
TLDR
This paper proposes several novel algorithms to efficiently compute the personalized PageRank vector with a decay factor α, exploiting a connection between personalized PageRank values and the weights of random spanning forests of the graph, established via a newly developed matrix forest theorem on graphs.
Learning Optimal Propagation for Graph Neural Networks
TLDR
This paper proposes a bi-level optimization-based approach that directly learns the Personalized PageRank propagation matrix jointly with the downstream semi-supervised node classification task, and explores a low-rank approximation model to further reduce the time complexity.

References

Showing 1-10 of 41 references
Efficient Estimation of Heat Kernel PageRank for Local Clustering
TLDR
TEA and TEA+, two novel local graph clustering algorithms based on heat kernel PageRank, are proposed; they provide non-trivial theoretical guarantees on the relative error of HKPR values and on time complexity, and outperform the state-of-the-art algorithm by more than four times on most benchmark datasets.
Scaling Graph Neural Networks with Approximate PageRank
TLDR
The PPRGo model is presented, which utilizes an efficient approximation of information diffusion in GNNs, resulting in significant speed gains while maintaining state-of-the-art prediction performance; the paper also demonstrates the practical application of PPRGo to large-scale node classification problems at Google.
Predict then Propagate: Graph Neural Networks meet Personalized PageRank
TLDR
This paper uses the relationship between graph convolutional networks (GCN) and PageRank to derive an improved propagation scheme based on personalized PageRank, and constructs a simple model, personalized propagation of neural predictions (PPNP), and its fast approximation, APPNP.
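The propagation scheme that TLDR describes can be sketched in a few lines: rather than stacking message-passing layers, APPNP smooths the model's predictions H with a personalized-PageRank iteration Z ← (1−α)ÂZ + αH. In this sketch a row-normalized adjacency stands in for the paper's symmetrically normalized matrix with self-loops, and the graph and feature values are illustrative assumptions:

```python
def appnp_propagate(adj, H, alpha=0.1, num_iters=10):
    """APPNP-style propagation of per-node predictions H.

    Uses a row-normalized adjacency as the propagation matrix (the paper
    uses a symmetric normalization with self-loops)."""
    dim = len(next(iter(H.values())))
    Z = {v: list(H[v]) for v in H}  # start from the raw predictions
    for _ in range(num_iters):
        new_Z = {}
        for v in adj:
            agg = [0.0] * dim
            for u in adj[v]:  # average the neighbors' current states
                for d in range(dim):
                    agg[d] += Z[u][d] / len(adj[v])
            # teleport back to the original predictions with weight alpha
            new_Z[v] = [(1 - alpha) * agg[d] + alpha * H[v][d]
                        for d in range(dim)]
        Z = new_Z
    return Z

# Toy path graph 0-1-2 with 2-dimensional "predictions" (illustrative).
adj = {0: [1], 1: [0, 2], 2: [1]}
H = {0: [1.0, 0.0], 1: [0.0, 0.0], 2: [0.0, 1.0]}
Z = appnp_propagate(adj, H, alpha=0.2, num_iters=20)
```

Note the design point the TLDR highlights: the prediction step (producing H) and the propagation step are decoupled, so propagation depth can grow without adding trainable layers.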
Diffusion Improves Graph Learning
TLDR
This work removes the restriction of using only the direct neighbors by introducing a powerful, yet spatially localized graph convolution: Graph diffusion convolution (GDC), which leverages generalized graph diffusion and alleviates the problem of noisy and often arbitrarily defined edges in real graphs.
Efficient Processing of Network Proximity Queries via Chebyshev Acceleration
TLDR
This paper presents an alternate approach to acceleration of network proximity queries using Chebyshev polynomials, called CHOPPER, which yields asymptotically faster convergence in theory, and significantly reduced convergence times in practice.
HubPPR: Effective Indexing for Approximate Personalized PageRank
TLDR
HubPPR is proposed, an effective indexing scheme for PPR computation with controllable tradeoffs for accuracy, query time, and memory consumption, and extended to answer top-k PPR queries, which returns the k nodes with the highest PPR values with respect to a source s, among a given set T of target nodes.
TopPPR: Top-k Personalized PageRank Queries with Precision Guarantees on Large Graphs
TLDR
TopPPR is proposed, an algorithm for top-k PPR queries that ensures at least ρ precision with at least 1 - 1/n probability, where ρ ∈ (0, 1] is a user-specified parameter and n is the number of nodes in G.
FORA: Simple and Effective Approximate Single-Source Personalized PageRank
TLDR
The basic idea of FORA is to combine two existing methods Forward Push and Monte Carlo Random Walk in a simple and yet non-trivial way, leading to an algorithm that is both fast and accurate.
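The Forward Push half of that combination can be sketched as below; FORA then compensates the leftover residuals with Monte Carlo random walks, which this toy omits. The parameter names (`alpha`, `r_max`) follow common PPR notation, and the graph and thresholds are illustrative assumptions:

```python
def forward_push(adj, source, alpha=0.2, r_max=1e-3):
    """Deterministic Forward Push for single-source PPR.

    Pushes residual probability mass until every node's residual is below
    r_max * degree; `reserve` holds lower-bound PPR estimates and `residual`
    the leftover mass a full FORA run would refine with random walks."""
    reserve = {}
    residual = {source: 1.0}
    queue = [source]
    while queue:
        v = queue.pop()
        r = residual.get(v, 0.0)
        if r < r_max * len(adj[v]):
            continue  # stale or below-threshold entry
        residual[v] = 0.0
        reserve[v] = reserve.get(v, 0.0) + alpha * r  # keep alpha fraction
        share = (1 - alpha) * r / len(adj[v])         # push the rest onward
        for u in adj[v]:
            residual[u] = residual.get(u, 0.0) + share
            if residual[u] >= r_max * len(adj[u]):
                queue.append(u)
    return reserve, residual

# Toy 4-node cycle graph (illustrative only).
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
reserve, residual = forward_push(adj, source=0)
```

A useful invariant of the push operation is that reserve plus residual mass always sums to 1, which is what makes the subsequent random-walk correction well-defined.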
Personalized PageRank to a Target Node, Revisited
TLDR
RBS is proposed, a novel algorithm that answers approximate single-target queries with optimal computational complexity and improves three concrete applications: heavy hitters PPR query, single-source SimRank computation, and scalable graph neural networks.
BEAR: Block Elimination Approach for Random Walk with Restart on Large Graphs
TLDR
BEAR is proposed, a fast, scalable, and accurate method for computing RWR on large graphs that significantly outperforms other state-of-the-art methods in terms of preprocessing and query speed, space efficiency, and accuracy.