Message passing all the way up
@inproceedings{Velivckovic2022MessagePA, title={Message passing all the way up}, author={Petar Veli{\v{c}}kovi{\'c}}, year={2022} }
The message passing framework is the foundation of the immense success enjoyed by graph neural networks (GNNs) in recent years. In spite of its elegance, there exist many problems it provably cannot solve over given input graphs. This has led to a surge of research on going “beyond message passing”, building GNNs which do not suffer from those limitations—a term which has become ubiquitous in regular discourse. However, have those methods truly moved beyond message passing? In this position…
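For readers unfamiliar with the framework the abstract refers to, the sketch below illustrates one round of message passing over a graph in the generic MPNN sense (as in the Gilmer et al. reference listed below). It is a minimal, illustrative sketch only: the dense adjacency representation, the unlearned message and update functions, and all names are assumptions made here, not the paper's code.

```python
# Minimal sketch of one message-passing (MPNN) layer: each node aggregates
# messages from its neighbours with a permutation-invariant operator and
# updates its own feature vector. Illustrative only; names are assumptions.
import numpy as np

def message_passing_layer(node_feats: np.ndarray, adj: np.ndarray) -> np.ndarray:
    """node_feats: (num_nodes, feat_dim); adj: (num_nodes, num_nodes) 0/1 matrix."""
    num_nodes, feat_dim = node_feats.shape
    new_feats = np.empty_like(node_feats)
    for v in range(num_nodes):
        neighbours = np.nonzero(adj[v])[0]
        # Message function: here simply each neighbour's feature vector.
        messages = node_feats[neighbours]
        # Permutation-invariant aggregation (sum over neighbours).
        aggregated = messages.sum(axis=0) if len(neighbours) else np.zeros(feat_dim)
        # Update function: a fixed average of self and aggregate, standing in
        # for a learned MLP in a real GNN layer.
        new_feats[v] = 0.5 * (node_feats[v] + aggregated)
    return new_feats

if __name__ == "__main__":
    # Usage on a 3-node path graph 0 - 1 - 2 with one-hot node features.
    adj = np.array([[0, 1, 0],
                    [1, 0, 1],
                    [0, 1, 0]], dtype=float)
    x = np.eye(3)
    print(message_passing_layer(x, adj))
```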
8 Citations
ReFactorGNNs: Revisiting Factorisation-based Models from a Message-Passing Perspective
- Computer Science, ArXiv
- 2022
This work bridges the gap between FMs and GNNs by proposing ReFactorGNNs, a new architecture that achieves comparable transductive performance to FMs, and state-of-the-art inductive performance while using an order of magnitude fewer parameters.
Explainable Artificial Intelligence: An Updated Perspective
- Computer Science, 2022 45th Jubilee International Convention on Information, Communication and Electronic Technology (MIPRO)
- 2022
This research offers an update on the current state of explainable AI (XAI), identifying new frontiers of research such as the explainability of reinforcement learning and graph neural networks, and gives a detailed overview of the field.
Agent-based Graph Neural Networks
- Computer Science, ArXiv
- 2022
It is shown that the agents can learn to systematically explore their neighborhood and that AgentNet can distinguish some structures that are even indistinguishable by 3-WL, and that the network is able to separate any two graphs which are sufficiently different in terms of subgraphs.
DiffWire: Inductive Graph Rewiring via the Lovász Bound
- Computer Science, ArXiv
- 2022
DiffWire is proposed, a novel framework for graph rewiring in MPNNs that is principled, fully differentiable and parameter-free by leveraging the Lovász bound, linking the learnability of commute times to related definitions of curvature and opening the door to the development of more expressive MPNNs.
Higher-Order Attention Networks
- Computer Science, ArXiv
- 2022
Higher-order attention networks are introduced, a novel class of attention-based neural networks defined on a generalized higher-order domain called a combinatorial complex (CC), which effectively generalize both hypergraphs and cell complexes and combine their desirable characteristics.
Reducing Learning on Cell Complexes to Graphs
- Computer Science
- 2022
Cell encoding combined with WL or a suitably expressive GNN is at least as expressive as Cellular Weisfeiler-Leman (CWL) in distinguishing cell complexes, which means that with a simple preprocessing step one can use any GNN for learning tasks on cell complexes.
FlowGNN: A Dataflow Architecture for Universal Graph Neural Network Inference via Multi-Queue Streaming
- Computer Science, ArXiv
- 2022
FlowGNN is a novel and scalable dataflow architecture for GNN acceleration that can support a wide range of GNN models with a message-passing mechanism, delivering ultra-fast real-time GNN inference without any graph pre-processing and making it agnostic to dynamically changing graph structures.
References
Showing 1-10 of 73 references
Opinion Dynamics with Multi-body Interactions
- Mathematics, NetGCooP
- 2020
It is shown that for systems with two clustered groups, even a small asymmetry in the dynamics can lead to the opinion of one group becoming clearly dominant, and that the system can otherwise be written as a linear, pairwise interaction system on a rescaled network.
Invariant and Equivariant Graph Networks
- Computer Science, Mathematics, ICLR
- 2019
This paper provides a characterization of all permutation-invariant and equivariant linear layers for (hyper-)graph data, and shows that their dimension, in the case of edge-value graph data, is 2 and 15, respectively.
Random Features Strengthen Graph Neural Networks
- Computer Science, SDM
- 2021
It is proved that the random features enable GNNs to learn almost optimal polynomial-time approximation algorithms for the minimum dominating set problem and maximum matching problem in terms of the approximation ratio.
Relational Pooling for Graph Representations
- Computer Science, ICML
- 2019
This work generalizes graph neural networks (GNNs) beyond those based on the Weisfeiler-Lehman (WL) algorithm, graph Laplacians, and diffusions to provide a framework with maximal representation power for graphs.
Weisfeiler and Leman Go Neural: Higher-order Graph Neural Networks
- Computer Science, AAAI
- 2019
It is shown that GNNs have the same expressiveness as the Weisfeiler-Leman graph isomorphism heuristic in terms of distinguishing non-isomorphic (sub-)graphs, and a generalization of GNNs is proposed, so-called $k$-dimensional GNNs ($k$-GNNs), which can take higher-order graph structures at multiple scales into account.
Neural Message Passing for Quantum Chemistry
- Computer Science, ICML
- 2017
Using MPNNs, state of the art results on an important molecular property prediction benchmark are demonstrated and it is believed future work should focus on datasets with larger molecules or more accurate ground truth labels.
Long Short-Term Memory
- Computer Science, Neural Computation
- 1997
A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges
- Computer Science, ArXiv
- 2021
A 'geometric unification' endeavour that provides a common mathematical framework to study the most successful neural network architectures, such as CNNs, RNNs, GNNs, and Transformers, gives a constructive procedure to incorporate prior physical knowledge into neural architectures, and provides a principled way to build future architectures yet to be invented.
HEAT: Hyperedge Attention Networks
- Computer Science, ArXiv
- 2022
This work presents HEAT, a neural model capable of representing typed and qualified hypergraphs, where each hyperedge explicitly qualifies how participating nodes contribute, which can be viewed as a generalization of both message passing neural networks and Transformers.
Weisfeiler and Leman go Machine Learning: The Story so far
- Education, Computer Science, ArXiv
- 2021
This survey traces the Weisfeiler-Leman algorithm and its role in machine learning on graphs, covering its theoretical background, its connection to the expressive power of graph neural networks, and its applications in graph learning.