BERTology Meets Biology: Interpreting Attention in Protein Language Models

@article{Vig2021BERTologyMB,
  title={BERTology Meets Biology: Interpreting Attention in Protein Language Models},
  author={Jesse Vig and Ali Madani and Lav R. Varshney and Caiming Xiong and Richard Socher and Nazneen Fatema Rajani},
  journal={bioRxiv},
  year={2021}
}
Transformer architectures have been shown to learn useful representations for protein classification and generation tasks, but these representations are difficult to interpret. Through the lens of attention, we analyze the inner workings of the Transformer and explore how the model discerns structural and functional properties of proteins. We show that attention (1) captures the folding structure of proteins, connecting amino acids that are far apart in the underlying sequence, but…
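The abstract's central claim, that attention captures folding structure, suggests a concrete analysis: extract per-head attention maps from a protein language model and measure how much of each head's attention falls on residue pairs that are in contact in the 3D structure. The sketch below illustrates that style of analysis under stated assumptions; the checkpoint name Rostlab/prot_bert and the random contact_map are placeholders, not the paper's actual models or data (the paper draws structures from datasets such as ProteinNet), and this is not the authors' released code.

```python
# Minimal sketch: score each attention head by how much of its attention
# lands on structurally contacting residue pairs.
import torch
from transformers import BertModel, BertTokenizer

# Assumption: a BERT-style protein LM hosted on the HuggingFace hub.
tokenizer = BertTokenizer.from_pretrained("Rostlab/prot_bert", do_lower_case=False)
model = BertModel.from_pretrained("Rostlab/prot_bert", output_attentions=True)

# ProtBert-style tokenizers expect space-separated amino acids.
sequence = "M K T A Y I A K Q R Q I S F V K S H F S R Q"
inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions: one (batch, heads, L, L) tensor per layer.
# L includes the [CLS]/[SEP] special tokens; kept for brevity.
attentions = torch.stack(outputs.attentions).squeeze(1)  # (layers, heads, L, L)
L = attentions.shape[-1]

# Hypothetical contact map: 1 where residues would be spatially close
# (e.g., within 8 angstroms). A real analysis would derive this from a
# solved structure; random entries here just make the sketch runnable.
contact_map = (torch.rand(L, L) < 0.05).float()

# Per-head share of total attention that falls on contact pairs,
# roughly the kind of alignment statistic the paper reports.
alignment = (attentions * contact_map).sum(dim=(-1, -2)) / attentions.sum(dim=(-1, -2))
layer, head = divmod(alignment.argmax().item(), alignment.shape[1])
print(f"Head most aligned with contacts: layer {layer}, head {head}")
```

With a contact map derived from an actual structure, this per-head alignment score is what lets one rank heads by structural specialization and ask, as the paper does, where in the network such heads concentrate.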
