Visualizing and Understanding Neural Models in NLP

@inproceedings{Li2016VisualizingAU,
  title={Visualizing and Understanding Neural Models in NLP},
  author={Jiwei Li and Xinlei Chen and Eduard Hovy and Dan Jurafsky},
  booktitle={HLT-NAACL},
  year={2016}
}
While neural networks have been successfully applied to many NLP tasks, the resulting vector-based models are very difficult to interpret. [...] Key Method: We first plot unit values to visualize the compositionality of negation, intensification, and concessive clauses, allowing us to see well-known markedness asymmetries in negation.
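As a rough illustration of the unit-value plots the abstract describes, the sketch below runs a toy PyTorch LSTM over a negated sentence and renders each hidden unit's value at each token as a heatmap. The vocabulary, model sizes, and sentence are illustrative assumptions, and the model is untrained; this is not the authors' code or setup, only a minimal sketch of the plotting idea.

```python
# Minimal sketch (not the authors' code): plot per-unit hidden values of an
# LSTM over a sentence containing negation, in the spirit of the paper's
# unit-value visualizations. Vocabulary, sizes, and sentence are assumptions.
import torch
import torch.nn as nn
import matplotlib.pyplot as plt

torch.manual_seed(0)

# Toy vocabulary and an untrained model standing in for a trained sentiment LSTM.
vocab = {"<pad>": 0, "i": 1, "do": 2, "not": 3, "like": 4, "this": 5, "movie": 6}
embed = nn.Embedding(len(vocab), 16)
lstm = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)

sentence = ["i", "do", "not", "like", "this", "movie"]
ids = torch.tensor([[vocab[w] for w in sentence]])  # shape (1, T)

with torch.no_grad():
    hidden, _ = lstm(embed(ids))  # hidden: (1, T, 32), one vector per token

# Heatmap: rows = hidden units, columns = tokens. With a trained model,
# value shifts around "not" would expose how negation is composed.
values = hidden[0].T.numpy()  # (32, T)
plt.imshow(values, aspect="auto", cmap="coolwarm")
plt.xticks(range(len(sentence)), sentence)
plt.ylabel("hidden unit")
plt.colorbar(label="unit value")
plt.title("Per-unit hidden values across the sentence")
plt.tight_layout()
plt.savefig("unit_values.png")
```

With a trained model in place of the random one, the same plot is what lets one eyeball markedness asymmetries: negated positives ("not good") tend to move unit values more sharply than negated negatives ("not bad").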
427 Citations
  • Understanding the Representational Power of Neural Retrieval Models Using NLP Tasks (6 citations, Highly Influenced)
  • Evaluating Recurrent Neural Network Explanations (31 citations, Highly Influenced)
  • Understanding Neural Networks through Representation Erasure (262 citations)
  • How LSTM Encodes Syntax: Exploring Context Vectors and Semi-Quantization on Natural Text (2 citations)
  • Linguistic Analysis of Multi-Modal Recurrent Neural Networks (4 citations)
  • Sentence Ordering using Recurrent Neural Networks (17 citations)
  • Explainable Deep Learning for Natural Language Processing
  • How recurrent networks implement contextual processing in sentiment analysis (8 citations)

References

Showing 1-10 of 34 references
  • Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank (4,316 citations)
  • Visualizing and Understanding Recurrent Networks (806 citations)
  • Sequence to Sequence Learning with Neural Networks (12,174 citations)
  • A Neural Conversational Model (1,299 citations)
  • Addressing the Rare Word Problem in Neural Machine Translation (617 citations)
  • A Compositional and Interpretable Semantic Space (58 citations)
  • Intriguing properties of neural networks (6,477 citations)
  • Sparse Overcomplete Word Vector Representations (137 citations)