Corpus ID: 235432980

Graph Sparsification via Meta-Learning

@inproceedings{Wan2020GraphSV,
  title={Graph Sparsification via Meta-Learning},
  author={Guihong Wan and Harsha Kokel},
  year={2020}
}
We present a novel graph sparsification approach for semi-supervised learning on undirected attributed graphs. The main challenge is to retain few edges while minimizing the loss of node classification accuracy. The task can be mathematically formulated as a bi-level optimization problem. We propose to use meta-gradients, which have traditionally been used in meta-learning, to solve the optimization problem, essentially treating the graph adjacency matrix as a hyperparameter to optimize…
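The bi-level idea in the abstract can be sketched in code. The snippet below is not the paper's implementation (which backpropagates through the inner training loop of a GCN with automatic differentiation); it is a toy illustration under simplifying assumptions: a one-layer linear graph model stands in for the GCN, and each edge's meta-gradient ∂L_val/∂A_ij is approximated by a finite difference rather than exact differentiation through training. All function and variable names are illustrative.

```python
import numpy as np

def softmax(Z):
    E = np.exp(Z - Z.max(axis=1, keepdims=True))
    return E / E.sum(axis=1, keepdims=True)

def train_weights(A, X, Y, train_idx, steps=100, lr=0.5):
    """Inner problem: fit W for the linear graph model softmax(A X W)
    on the labeled training nodes (a stand-in for the paper's GCN)."""
    W = np.zeros((X.shape[1], Y.shape[1]))
    H = A @ X  # one round of neighborhood aggregation
    for _ in range(steps):
        Z = softmax(H[train_idx] @ W)
        W -= lr * H[train_idx].T @ (Z - Y[train_idx]) / len(train_idx)
    return W

def val_loss(A, X, Y, W, val_idx):
    """Outer objective: cross-entropy on held-out validation nodes."""
    Z = softmax((A @ X)[val_idx] @ W)
    return -np.mean(np.sum(Y[val_idx] * np.log(Z + 1e-12), axis=1))

def meta_gradient_scores(A, X, y, train_idx, val_idx, eps=1e-3):
    """Finite-difference estimate of the meta-gradient dL_val/dA_ij for
    every existing undirected edge. A positive score means strengthening
    the edge *raises* validation loss, marking it as a removal candidate."""
    Y = np.eye(y.max() + 1)[y]
    base = val_loss(A, X, Y, train_weights(A, X, Y, train_idx), val_idx)
    scores = {}
    n = A.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            if A[i, j] == 0:
                continue
            Ap = A.copy()
            Ap[i, j] += eps  # perturb the edge weight symmetrically
            Ap[j, i] += eps
            pert = val_loss(Ap, X, Y, train_weights(Ap, X, Y, train_idx), val_idx)
            scores[(i, j)] = (pert - base) / eps
    return scores
```

To sparsify, one would rank edges by score and drop the top-k with the most positive meta-gradients, i.e. the edges whose presence most hurts validation performance.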

Citations

Deep Graph Structure Learning for Robust Representations: A Survey
A general paradigm of Graph Structure Learning (GSL) is formulated, and state-of-the-art methods, classified by how they model graph structures, are reviewed, followed by applications that incorporate the idea of GSL in other graph tasks.

References

Showing 1–10 of 21 references
Semi-Supervised Classification with Graph Convolutional Networks
A scalable approach for semi-supervised learning on graph-structured data, based on an efficient variant of convolutional neural networks that operate directly on graphs, which outperforms related methods by a significant margin.
Local graph sparsification for scalable clustering
This paper proposes to rank edges using a simple similarity-based heuristic that is efficiently computed by comparing the minhash signatures of the nodes incident to the edge, to preferentially retain the edges that are likely to be part of the same cluster.
Hierarchical graph attention networks for semi-supervised node classification
A hierarchical graph attention network (HGAT) for semi-supervised node classification that employs a hierarchical mechanism for learning node features and can capture global structure information by increasing the receptive field, as well as effectively transferring node features.
Graph Convolutional Networks: Algorithms, Applications and Open Challenges
A comprehensive review of the emerging field of graph convolutional networks, which is one of the most prominent graph deep learning models, and introduces two taxonomies to group the existing works based on the types of convolutions and the areas of applications.
Adversarial Attacks on Graph Neural Networks via Meta Learning
The core principle is to use meta-gradients to solve the bilevel problem underlying training-time attacks on graph neural networks for node classification that perturb the discrete graph structure, essentially treating the graph as a hyperparameter to optimize.
A general framework for graph sparsification
A key ingredient of the proofs is a natural generalization of Karger's bound on the number of small cuts in an undirected graph, which is likely to be of independent interest.
Graph Attention Networks
We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior…
Spectrum-preserving sparsification for visualization of big graphs
A practically efficient, nearly-linear time spectral sparsification algorithm for tackling real-world big graph data, which proposes a node reduction scheme based on intrinsic spectral graph properties to allow more aggressive, level-of-detail simplification.
Structure-preserving sparsification methods for social networks
This paper contributes the first systematic conceptual and experimental comparison of edge sparsification methods on a diverse set of network properties, and shows that they can be understood as methods for rating edges by importance and then filtering globally or locally by these scores.
A Comprehensive Survey on Graph Neural Networks
This article provides a comprehensive overview of graph neural networks (GNNs) in data mining and machine learning fields and proposes a new taxonomy to divide the state-of-the-art GNNs into four categories, namely, recurrent GNNs, convolutional GNNs, graph autoencoders, and spatial–temporal GNNs.