BERTology Meets Biology: Interpreting Attention in Protein Language Models

@article{Vig2020BERTologyMB,
  title={BERTology Meets Biology: Interpreting Attention in Protein Language Models},
  author={Jesse Vig and Ali Madani and Lav R. Varshney and Caiming Xiong and Richard Socher and Nazneen Fatema Rajani},
  journal={bioRxiv},
  year={2020}
}
Abstract: Transformer architectures have proven to learn useful representations for protein classification and generation tasks. However, these representations present challenges in interpretability. Through the lens of attention, we analyze the inner workings of the Transformer and explore how the model discerns structural and functional properties of proteins. We show that attention (1) captures the folding structure of proteins, connecting amino acids that are far apart in the underlying sequence, but…
