Equiformer: Equivariant Graph Attention Transformer for 3D Atomistic Graphs

Yi Liao, Tess E. Smidt
3D-related inductive biases like translational invariance and rotational equivariance are indispensable to graph neural networks operating on 3D atomistic graphs such as molecules. Inspired by the success of Transformers in various domains, we study how to incorporate these inductive biases into Transformers. In this paper, we present Equiformer, a graph neural network leveraging the strength of Transformer architectures and incorporating SE(3)/E(3)-equivariant features based on…
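The equivariance property central to this entry (and several below) can be illustrated with a small numerical check: pairwise interatomic distances are unchanged under any rigid motion x → Rx + t, which is why distance-based features respect E(3) symmetry. The sketch below is plain NumPy with hypothetical helper names of my own, not code from Equiformer:

```python
import numpy as np

def pairwise_distances(pos):
    """Pairwise Euclidean distances between atoms; shape (N, N)."""
    diff = pos[:, None, :] - pos[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def random_rotation(rng):
    """Random 3D rotation matrix via QR decomposition of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    q *= np.sign(np.diag(r))      # fix column signs for a consistent convention
    if np.linalg.det(q) < 0:      # ensure det = +1 (a rotation, not a reflection)
        q[:, 0] *= -1
    return q

rng = np.random.default_rng(0)
pos = rng.normal(size=(5, 3))     # 5 atoms in 3D
R = random_rotation(rng)
t = rng.normal(size=3)

# Distances are invariant under the rigid motion x -> R x + t.
assert np.allclose(pairwise_distances(pos), pairwise_distances(pos @ R.T + t))
```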

Holographic-(V)AE: an end-to-end SO(3)-Equivariant (Variational) Autoencoder in Fourier Space

Holographic-(V)AE (H-(V)AE) is a fully end-to-end SO(3)-equivariant (variational) autoencoder in Fourier space, suitable for unsupervised learning and generation of data distributed around a specified origin.

EquiFold: Protein Structure Prediction with a Novel Coarse-Grained Structure Representation

EquiFold is introduced, a new end-to-end differentiable, SE(3)-equivariant, all-atom protein structure prediction model that uses a novel coarse-grained representation of protein structures and does not require multiple sequence alignments or protein language model embeddings, inputs commonly used in other state-of-the-art structure prediction models.

Forces are not Enough: Benchmark and Critical Evaluation for Machine Learning Force Fields with Molecular Simulations

A novel benchmark suite for ML MD simulation is introduced, identifying stability as a key metric for ML models to improve and illustrating, in particular, how the commonly benchmarked force accuracy is not well aligned with relevant simulation metrics.

Protein Language Models and Structure Prediction: Connection and Progression

The similarities between protein and human languages that allow language models (LMs) to be extended to protein language models (pLMs) applied to protein databases are introduced, and the types of methods for protein structure prediction (PSP) are discussed, in particular how the pLM-based architectures function in the process of protein folding.

Equivariant Graph Attention Networks for Molecular Property Prediction

This work proposes an equivariant GNN that operates on Cartesian coordinates to incorporate directionality and implements a novel attention mechanism, acting as a content- and spatially dependent filter when propagating information between nodes.

Relevance of Rotationally Equivariant Convolutions for Predicting Molecular Properties

This paper finds that for fixed network depth, adding angular features improves the accuracy on most targets, and beats previous state-of-the-art results on the global electronic properties dipole moment, isotropic polarizability, and electronic spatial extent.

SE(3)-Transformers: 3D Roto-Translation Equivariant Attention Networks

The SE(3)-Transformer is introduced, a variant of the self-attention module for 3D point clouds that is equivariant under continuous 3D roto-translations and achieves competitive performance on two real-world datasets, ScanObjectNN and QM9.

GemNet: Universal Directional Graph Neural Networks for Molecules

This work shows that GNNs with directed edge embeddings and two-hop message passing are indeed universal approximators for predictions that are invariant to translation, and equivariant to permutation and rotation, and proposes the geometric message passing neural network (GemNet).
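Two-hop message passing of the kind summarized above relies on angular features between pairs of edges that share an atom. A minimal sketch of how such angles can be computed, in plain NumPy with a hypothetical helper name, not GemNet's actual implementation:

```python
import numpy as np

def edge_angles(pos, edges):
    """Angle at atom j between edges (i -> j) and (j -> k),
    for every two-hop path i -> j -> k with i != k."""
    angles = {}
    for (i, j) in edges:
        for (j2, k) in edges:
            if j2 == j and k != i:
                u = pos[i] - pos[j]   # direction back along the first edge
                v = pos[k] - pos[j]   # direction along the second edge
                cos = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
                angles[(i, j, k)] = np.arccos(np.clip(cos, -1.0, 1.0))
    return angles

# Right-angle geometry: atoms at (0,0,0), (1,0,0), (1,1,0)
pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, 1.0, 0.0]])
angles = edge_angles(pos, [(0, 1), (1, 2)])
assert np.isclose(angles[(0, 1, 2)], np.pi / 2)
```

Like distances, these angles are invariant to global rotations and translations, which is what makes them safe inputs for an invariant prediction target.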

E(3)-equivariant graph neural networks for data-efficient and accurate interatomic potentials

The NequIP method achieves state-of-the-art accuracy on a challenging and diverse set of molecules and materials while exhibiting remarkable data efficiency, challenging the widely held belief that deep neural networks require massive training sets.

Rethinking Graph Transformers with Spectral Attention

The Spectral Attention Network (SAN), which uses a learned positional encoding (LPE) that can take advantage of the full Laplacian spectrum to learn the position of each node in a given graph, is presented, becoming the first fully-connected architecture to perform well on graph benchmarks.
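A minimal illustration of Laplacian-based positional encodings, using only the raw eigenvectors of the symmetric normalized Laplacian; SAN additionally learns a transformation of the eigenpairs, which is not shown here:

```python
import numpy as np

def laplacian_pe(adj, k):
    """First k nontrivial eigenvectors of the symmetric normalized
    Laplacian L = I - D^{-1/2} A D^{-1/2}, used as per-node encodings."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    lap = np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(lap)   # eigenvalues in ascending order
    return vecs[:, 1:k + 1]            # skip the trivial constant mode

# 4-cycle graph: each node's encoding reflects its position on the ring
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
pe = laplacian_pe(adj, 2)              # shape (4, 2)
```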

Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data

A general method to construct a convolutional layer that is equivariant to transformations from any specified Lie group with a surjective exponential map is proposed, enabling rapid prototyping and exact conservation of linear and angular momentum.
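For SO(3), the exponential map named above is surjective and has a closed form, the Rodrigues formula mapping an axis-angle vector in the Lie algebra so(3) to a rotation matrix. A small standalone sketch of that map, not the paper's convolutional layer:

```python
import numpy as np

def so3_exp(w):
    """Exponential map so(3) -> SO(3): axis-angle vector w to a rotation
    matrix, via the Rodrigues formula."""
    theta = np.linalg.norm(w)
    K = np.array([[0.0, -w[2], w[1]],
                  [w[2], 0.0, -w[0]],
                  [-w[1], w[0], 0.0]])     # skew-symmetric matrix of w
    if theta < 1e-12:
        return np.eye(3) + K              # first-order expansion near identity
    return (np.eye(3)
            + np.sin(theta) / theta * K
            + (1 - np.cos(theta)) / theta**2 * (K @ K))

R = so3_exp(np.array([0.0, 0.0, np.pi / 2]))   # 90-degree rotation about z
assert np.allclose(R @ np.array([1.0, 0.0, 0.0]), [0.0, 1.0, 0.0])
```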

TorchMD-NET: Equivariant Transformers for Neural Network based Molecular Potentials

The prediction of quantum mechanical properties is historically plagued by a trade-off between accuracy and speed. Machine learning potentials have previously shown great success in this domain…

Equivariant message passing for the prediction of tensorial properties and molecular spectra

This work proposes the polarizable atom interaction neural network (PAINN), which improves on common molecule benchmarks over previous networks while reducing model size and inference time, and leverages the equivariant atomwise representations obtained by PAINN for the prediction of tensorial properties.

Graph Attention Networks

We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior…
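A single-head version of the masked graph attention computation can be sketched as follows. This mirrors the published GAT formulation (LeakyReLU-scored logits, softmax restricted to each node's neighborhood) but is plain NumPy with unoptimized loops for clarity, not the reference implementation:

```python
import numpy as np

def gat_layer(h, adj, W, a, alpha=0.2):
    """Single-head graph attention: scores LeakyReLU(a^T [Wh_i || Wh_j])
    are softmax-normalized over each node's neighbors (masked by the
    adjacency matrix, self-loops added) and used to average the
    transformed neighbor features."""
    z = h @ W                          # (N, F') transformed node features
    n = z.shape[0]
    logits = np.empty((n, n))          # raw attention logit for each pair (i, j)
    for i in range(n):
        for j in range(n):
            s = a @ np.concatenate([z[i], z[j]])
            logits[i, j] = s if s > 0 else alpha * s   # LeakyReLU
    mask = adj + np.eye(n)             # attend only over neighbors + self
    logits = np.where(mask > 0, logits, -np.inf)
    weights = np.exp(logits - logits.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)      # row-wise softmax
    return weights @ z

rng = np.random.default_rng(1)
h = rng.normal(size=(3, 4))            # 3 nodes, 4 input features
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
out = gat_layer(h, adj, rng.normal(size=(4, 2)), rng.normal(size=(4,)))
assert out.shape == (3, 2)
```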