Seq2seq-Vis: A Visual Debugging Tool for Sequence-to-Sequence Models

@article{Strobelt2019Seq2seqVisAV,
  title={Seq2seq-Vis: A Visual Debugging Tool for Sequence-to-Sequence Models},
  author={Hendrik Strobelt and Sebastian Gehrmann and Michael Behrisch and Adam Perer and Hanspeter Pfister and Alexander M. Rush},
  journal={IEEE Transactions on Visualization and Computer Graphics},
  year={2019},
  volume={25},
  number={1},
  pages={353--363}
}
Abstract: Neural sequence-to-sequence models have proven to be accurate and robust for many sequence prediction tasks, and have become the standard approach for automatic translation of text. The models work with a five-stage blackbox pipeline that begins with encoding a source sequence to a vector space and then decoding out to a new target sequence. This process is now standard, but like many deep learning methods remains quite difficult to understand or debug. In this work, we present a visual…
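The encode-then-decode pipeline the abstract refers to can be sketched in a few lines. This is a deliberately toy illustration, not the paper's model: the mean-pooled encoder, the single-matrix decoder state update, and all names and dimensions here are illustrative assumptions.

```python
import numpy as np

# Toy sketch of the seq2seq pipeline: encode a source sequence into a
# vector, then greedily decode a target sequence from that vector.
# Everything below (vocabulary, pooling encoder, tanh decoder) is an
# illustrative assumption, not the architecture from the paper.

rng = np.random.default_rng(0)
VOCAB = ["<s>", "</s>", "a", "b", "c"]
DIM = 8
E = rng.normal(size=(len(VOCAB), DIM))   # embedding table (toy, random)
W = rng.normal(size=(DIM, DIM))          # decoder state transition (toy, random)

def encode(tokens):
    """Map a source sequence to a single context vector (mean pooling)."""
    ids = [VOCAB.index(t) for t in tokens]
    return E[ids].mean(axis=0)

def decode(context, max_len=5):
    """Greedily emit target tokens until </s> or max_len is reached."""
    out, h = [], context
    for _ in range(max_len):
        h = np.tanh(W @ h)               # update the decoder state
        logits = E @ h                   # score every vocabulary item
        tok = VOCAB[int(np.argmax(logits))]
        if tok == "</s>":
            break
        out.append(tok)
    return out

print(decode(encode(["a", "b", "c"])))
```

A real translation model replaces the mean-pooled context with a learned recurrent or attention-based encoder and trained parameters; the point here is only the two-stage shape (sequence → vector → sequence) that tools like Seq2seq-Vis set out to inspect.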
88 Citations (selected):
  • Debugging Sequence-to-Sequence Models with Seq2Seq-Vis
  • A Gray Box Interpretable Visual Debugging Approach for Deep Sequence Learning Model
  • ProtoSteer: Steering Deep Sequence Model with Prototypes
  • AttViz: Online exploration of self-attention for transparent neural language modeling
  • Ablate, Variate, and Contemplate: Visual Analytics for Discovering Neural Architectures
  • Understanding and Improving Hidden Representations for Neural Machine Translation
  • VizSeq: A Visual Analysis Toolkit for Text Generation Tasks
  • An Analysis of Encoder Representations in Transformer-Based Machine Translation
