AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models

@inproceedings{Wallace2019AllenNLPIA,
  title={AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models},
  author={Eric Wallace and Jens Tuyls and Junlin Wang and Sanjay Subramanian and Matthew Gardner and Sameer Singh},
  booktitle={EMNLP/IJCNLP},
  year={2019}
}
Neural NLP models are increasingly accurate but are imperfect and opaque---they break in counterintuitive ways and leave end users puzzled at their behavior. Model interpretation methods ameliorate this opacity by providing explanations for specific model predictions. Unfortunately, existing interpretation codebases make it difficult to apply these methods to new models and tasks, which hinders adoption for practitioners and burdens interpretability researchers. We introduce AllenNLP Interpret…
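To make the idea of "explanations for specific model predictions" concrete, here is a minimal sketch of gradient-times-input saliency, one of the simplest such interpretation methods. This is not AllenNLP Interpret's actual API; the linear scorer, the embeddings, and the weight vector `w` are all hypothetical toy values chosen so the gradient can be computed in closed form.

```python
import numpy as np

def saliency_scores(embeddings, w):
    """Gradient-times-input saliency for a toy linear scorer.

    The score is w . mean(embeddings), so the gradient of the score
    with respect to token i's embedding is w / n, and the
    gradient-times-input attribution for token i is (e_i . w) / n.
    Absolute attributions are normalized to sum to 1, as saliency
    visualizations commonly do.
    """
    n = len(embeddings)
    raw = np.abs(embeddings @ w) / n   # |e_i . w| / n, one value per token
    return raw / raw.sum()

# Hypothetical example: 3 tokens with 2-d embeddings.
emb = np.array([[1.0, 0.0],
                [0.0, 2.0],
                [1.0, 1.0]])
w = np.array([0.5, 1.0])
scores = saliency_scores(emb, w)
print(scores)  # one importance weight per token, summing to 1
```

In a real neural model the gradient is not available in closed form, so frameworks compute it with automatic differentiation (e.g. a backward pass through the network) and then aggregate per-token attributions the same way.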
