Corpus ID: 238259057

3D-Transformer: Molecular Representation with Transformer in 3D Space

@article{Wu20213DTransformerMR,
  title={3D-Transformer: Molecular Representation with Transformer in 3D Space},
  author={Fang Wu and Qiang Zhang and Dragomir Radev and Jiyu Cui and Wen Zhang and Huabin Xing and Ningyu Zhang and Huajun Chen},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.01191}
}
Spatial structures in the 3D space are important to determine molecular properties. Recent papers use geometric deep learning to represent molecules and predict properties. These papers, however, are computationally expensive in capturing long-range dependencies of input atoms; and more importantly, they have not considered the non-uniformity of interatomic distances, thus failing to learn context-dependent representations at different scales. To deal with such issues, we introduce 3D… 
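The abstract's point about the non-uniformity of interatomic distances can be illustrated with a minimal sketch (an illustration of the general idea, not the paper's actual architecture): self-attention whose scores are biased by pairwise 3D distances, so that nearby atoms receive more weight while distant ones are downweighted.

```python
import numpy as np

def distance_biased_attention(coords, features, scale=1.0):
    """Toy self-attention in which pairwise 3D distances bias the scores.

    coords:   (n_atoms, 3) Cartesian positions
    features: (n_atoms, d) per-atom feature vectors
    scale:    length scale; larger values flatten the distance bias

    This is a hedged sketch: the actual 3D-Transformer uses learned,
    multi-scale geometric encodings, not this fixed linear penalty.
    """
    # Pairwise Euclidean distances between all atoms
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)            # (n, n)

    # Dot-product scores penalized by distance
    scores = features @ features.T - dist / scale

    # Numerically stable row-wise softmax
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ features

coords = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [5.0, 0.0, 0.0]])
feats = np.eye(3)
out = distance_biased_attention(coords, feats)
print(out.shape)  # (3, 3)
```

Because the features here are an identity matrix, each output row is exactly the softmaxed attention weights for that atom, making the distance effect easy to inspect.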
Geometric Transformer for End-to-End Molecule Properties Prediction
TLDR
This work introduces a Transformer-based architecture for molecule property prediction that captures the geometry of the molecule, replacing the classical positional encoder with an initial encoding of the molecule's geometry.

References

Showing 1-10 of 86 references
Three-Dimensionally Embedded Graph Convolutional Network (3DGCN) for Molecule Interpretation
TLDR
This work proposes an advanced derivative of GCNs, coined a 3DGCN (three-dimensionally embedded graph convolutional network), which takes molecular graphs embedded in three-dimensional Euclidean space as inputs and recursively updates the scalar and vector features based on the relative positions of nodes.
SchNet - A deep learning architecture for molecules and materials.
TLDR
The deep learning architecture SchNet, designed specifically to model atomistic systems using continuous-filter convolutional layers, is presented and employed to predict potential-energy surfaces and energy-conserving force fields for molecular dynamics simulations of small molecules.
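SchNet's continuous-filter convolutions operate on interatomic distances expanded in a radial basis. A minimal sketch of that expansion step is below (the basis count, cutoff, and width are illustrative choices, not SchNet's published defaults):

```python
import numpy as np

def gaussian_rbf(distances, n_basis=8, cutoff=5.0, gamma=10.0):
    """Expand scalar distances in Gaussian radial basis functions,
    as done (schematically) before a continuous-filter convolution.

    Returns an array of shape (..., n_basis): each distance becomes a
    smooth feature vector instead of a single raw scalar.
    """
    centers = np.linspace(0.0, cutoff, n_basis)     # (n_basis,)
    d = np.asarray(distances, dtype=float)[..., None]
    return np.exp(-gamma * (d - centers) ** 2)      # (..., n_basis)

feat = gaussian_rbf([0.0, 2.5, 5.0])
print(feat.shape)  # (3, 8)
```

The smooth expansion is what lets the filter-generating network produce continuous, differentiable responses to small changes in atom positions, which is essential for energy-conserving force fields.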
Spherical Message Passing for 3D Graph Networks
TLDR
This work proposes the spherical message passing (SMP) as a novel and specific scheme for realizing the 3DGN framework in the spherical coordinate system (SCS), and derives physically-based representations of geometric information and proposes the SphereNet for learning representations of 3D graphs.
ChemBERTa: Large-Scale Self-Supervised Pretraining for Molecular Property Prediction
TLDR
This work makes one of the first attempts to systematically evaluate transformers on molecular property prediction tasks via the ChemBERTa model, and suggests that transformers offer a promising avenue of future work for molecular representation learning and property prediction.
Molecular Property Prediction: A Multilevel Quantum Interactions Modeling Perspective
TLDR
A generalizable and transferable Multilevel Graph Convolutional neural Network (MGCN) for molecular property prediction is proposed, which represents each molecule as a graph to preserve its internal structure and directly extracts features from the conformation and spatial information, followed by the multilevel interactions.
Rotation Invariant Graph Neural Networks using Spin Convolutions
TLDR
A novel approach to modeling angular information between sets of neighboring atoms in a graph neural network is introduced and rotation invariance is achieved for the network’s edge messages through the use of a per-edge local coordinate frame and a novel spin convolution over the remaining degree of freedom.
Crystal Graph Convolutional Neural Networks for an Accurate and Interpretable Prediction of Material Properties.
TLDR
A crystal graph convolutional neural network framework is proposed to directly learn material properties from the connections of atoms in the crystal, providing a universal and interpretable representation of crystalline materials.
Quantum-chemical insights from deep tensor neural networks
TLDR
An efficient deep learning approach is developed that enables spatially and chemically resolved insights into quantum-mechanical observables of molecular systems, and unifies concepts from many-body Hamiltonians with purpose-designed deep tensor neural networks, which leads to size-extensive and uniformly accurate chemical space predictions.
PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation
TLDR
This paper designs a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input and provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing.
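The permutation invariance that PointNet achieves comes from pooling per-point features with a symmetric function (max). A hedged sketch of that core idea, using a single random linear layer in place of PointNet's learned MLPs:

```python
import numpy as np

def pointnet_global_feature(points, W):
    """PointNet-style global feature: apply a shared per-point transform,
    then max-pool across points. Because max over the point axis is a
    symmetric function, reordering the points cannot change the result.
    """
    h = np.maximum(points @ W, 0.0)   # shared "MLP" (one ReLU layer here)
    return h.max(axis=0)              # order-independent global feature

rng = np.random.default_rng(0)
pts = rng.normal(size=(4, 3))         # a toy 4-point cloud in 3D
W = rng.normal(size=(3, 5))           # stand-in for learned weights
f1 = pointnet_global_feature(pts, W)
f2 = pointnet_global_feature(pts[::-1], W)   # same cloud, reversed order
print(np.allclose(f1, f2))  # True
```

Any symmetric pooling (sum, mean, max) would give invariance; PointNet uses max, which its ablations show works well for geometric features.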
End-to-End Learning on 3D Protein Structure for Interface Prediction
TLDR
The first end-to-end learning model for protein interface prediction, the Siamese Atomic Surfacelet Network (SASNet), is developed and it is found that SASNet outperforms state-of-the-art methods trained on gold-standard structural data, even when trained on only 3% of the new dataset.