Computer Science · Published on arXiv, 2019

exBERT: A Visual Analysis Tool to Explore Learned Representations in Transformers Models

@article{Hoover2019exBERTAV,
  title={exBERT: A Visual Analysis Tool to Explore Learned Representations in Transformers Models},
  author={Benjamin Hoover and Hendrik Strobelt and Sebastian Gehrmann},
  journal={ArXiv},
  year={2019},
  volume={abs/1910.05276}
}
Large language models can produce powerful contextual representations that lead to improvements across many NLP tasks. Since these models are typically guided by a sequence of learned self-attention mechanisms and may comprise undesired inductive biases, it is paramount to be able to explore what the attention has learned. While static analyses of these models lead to targeted insights, interactive tools are more dynamic and can help humans better gain an intuition for the model-internal…
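The abstract refers to "learned self-attention mechanisms" whose weights tools like exBERT visualize. As a minimal sketch (not exBERT's implementation), the quantity being inspected is the per-token attention-weight matrix produced by scaled dot-product self-attention; the example below uses random weights in place of learned ones:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.

    Returns the output and the attention-weight matrix,
    the object that attention-visualization tools display.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d_k))  # shape: (seq_len, seq_len)
    return weights @ V, weights

# Toy example: 4 tokens, model dimension 8.
# Random matrices stand in for the learned projections of a real model.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out, attn = self_attention(X, Wq, Wk, Wv)
# Row i of `attn` shows how much token i attends to every token in the sequence;
# each row sums to 1.
```

In a real transformer there is one such matrix per head per layer, which is why interactive exploration across layers and heads is useful.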
