Seq2seq-Vis: A Visual Debugging Tool for Sequence-to-Sequence Models

@article{strobelt2019seq2seqvis,
  title={Seq2seq-Vis: A Visual Debugging Tool for Sequence-to-Sequence Models},
  author={Hendrik Strobelt and Sebastian Gehrmann and Michael Behrisch and Adam Perer and H. Pfister and Alexander M. Rush},
  journal={IEEE Transactions on Visualization and Computer Graphics},
  year={2019}
}
Abstract: Neural sequence-to-sequence models have proven to be accurate and robust for many sequence prediction tasks, and have become the standard approach for automatic translation of text. The models work with a five-stage blackbox pipeline that begins with encoding a source sequence to a vector space and then decoding out to a new target sequence. This process is now standard, but like many deep learning methods remains quite difficult to understand or debug. In this work, we present a visual…
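The encode-then-decode pipeline described in the abstract can be sketched in miniature. This is an illustrative toy, not the paper's model: it stands in a bag-of-embeddings average for the encoder and a greedy, nearest-embedding loop for the decoder (the vocabulary, dimensions, and weights below are all invented for the example; a real system would use an RNN or Transformer for both stages).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and embedding table (illustrative assumptions).
vocab = ["<s>", "</s>", "the", "cat", "sat"]
tok2id = {t: i for i, t in enumerate(vocab)}
emb = rng.standard_normal((len(vocab), 8))  # 8-dim embeddings

def encode(tokens):
    """Encoder stage: map a source sequence to a context vector.
    Here: the mean of the token embeddings."""
    ids = [tok2id[t] for t in tokens]
    return emb[ids].mean(axis=0)

def decode(context, max_len=5):
    """Decoder stage: emit target tokens one at a time, conditioned on
    the context vector and the previously emitted token (greedy)."""
    W = rng.standard_normal((8, 8))           # toy decoder weights
    out, prev = [], emb[tok2id["<s>"]]
    for _ in range(max_len):
        state = np.tanh(context + prev @ W)   # combine context and history
        scores = emb @ state                  # score every vocabulary item
        tok = vocab[int(scores.argmax())]     # greedy pick
        if tok == "</s>":                     # stop on end-of-sequence
            break
        out.append(tok)
        prev = emb[tok2id[tok]]
    return out

print(decode(encode(["the", "cat", "sat"])))
```

Every stage of this pipeline is a tensor-in, tensor-out black box, which is exactly why a tool that exposes the intermediate states is useful for debugging.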
89 Citations

• Debugging Sequence-to-Sequence Models with Seq2Seq-Vis (5 citations)
• A Gray Box Interpretable Visual Debugging Approach for Deep Sequence Learning Model
• ProtoSteer: Steering Deep Sequence Model with Prototypes (8 citations)
• AttViz: Online exploration of self-attention for transparent neural language modeling
• Ablate, Variate, and Contemplate: Visual Analytics for Discovering Neural Architectures (6 citations)
• Understanding and Improving Hidden Representations for Neural Machine Translation
• VizSeq: A Visual Analysis Toolkit for Text Generation Tasks (10 citations)
• An Analysis of Encoder Representations in Transformer-Based Machine Translation (100 citations)


References

• Sequence to Sequence Learning with Neural Networks (11,691 citations)
• Get To The Point: Summarization with Pointer-Generator Networks (1,599 citations)
• LSTMVis: A Tool for Visual Analysis of Hidden State Dynamics in Recurrent Neural Networks (193 citations)
• Convolutional Sequence to Sequence Learning (1,821 citations; highly influential)
• A causal framework for explaining the predictions of black-box sequence-to-sequence models (104 citations)
• Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation (3,303 citations)
• Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond (1,104 citations)
• Attention is All you Need (15,851 citations)
• A Deep Reinforced Model for Abstractive Summarization (746 citations)
• A Convolutional Encoder Model for Neural Machine Translation (267 citations; highly influential)