Corpus ID: 102353807

Visualizing Attention in Transformer-Based Language Models

@inproceedings{Vig2019VisualizingAI,
  title={Visualizing Attention in Transformer-Based Language Models},
  author={Jesse Vig},
  year={2019}
}
  • Jesse Vig · Published 2019 · Computer Science, Mathematics

Abstract: We present an open-source tool for visualizing multi-head self-attention in Transformer-based language models. The tool extends earlier work by visualizing attention at three levels of granularity: the attention-head level, the model level, and the neuron level. We describe how each of these views can help to interpret the model, and we demonstrate the tool on the OpenAI GPT-2 pretrained language model. We also present three use cases showing how the tool might provide insights on how to adapt …
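The tool described in the abstract was released as the open-source bertviz package. The sketch below shows one way to extract GPT-2 attention weights and render the attention-head view; the bertviz and HuggingFace transformers interfaces used here (head_view, output_attentions) are assumptions about current library versions, not details taken from the paper itself.

```python
import torch
from transformers import GPT2Model, GPT2Tokenizer
from bertviz import head_view  # assumed current bertviz interface

# Load the pretrained GPT-2 model used in the paper's demonstration,
# asking it to return per-layer attention weights.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2", output_attentions=True)

text = "The quick brown fox jumps over the lazy dog"
input_ids = tokenizer.encode(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(input_ids)

# outputs.attentions is a tuple with one tensor per layer,
# each of shape (batch, num_heads, seq_len, seq_len).
attention = outputs.attentions
tokens = tokenizer.convert_ids_to_tokens(input_ids[0])

# Render the interactive attention-head view (runs in a Jupyter notebook).
head_view(attention, tokens)
```

The model view and neuron view mentioned in the abstract consume the same per-layer attention tensors, differing only in how they aggregate and display them.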

Citations

    Visualizing and Measuring the Geometry of BERT (cited by 62)
    BERT-CNN: a Hierarchical Patent Classifier Based on a Pre-Trained Language Model

    References

    Seq2seq-Vis: A Visual Debugging Tool for Sequence-to-Sequence Models (Strobelt et al., 2018; cited by 73)
    Language Models are Unsupervised Multitask Learners (Radford et al., 2019; cited by 1522)
    Attention Is All You Need (Vaswani et al., 2017; cited by 10753)
    Identifying and Controlling Important Neurons in Neural Machine Translation (Bau et al., 2019; cited by 44)
    Gender Bias in Neural Natural Language Processing (Lu et al., 2018; cited by 36)
    Tensor2Tensor for Neural Machine Translation (Vaswani et al., 2018; cited by 235)
    Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods (Zhao et al., 2018; cited by 118)