Corpus ID: 219956260

Differentiable Language Model Adversarial Attacks on Categorical Sequence Classifiers

@article{Fursov2020DifferentiableLM,
  title={Differentiable Language Model Adversarial Attacks on Categorical Sequence Classifiers},
  author={Ivan Fursov and A. Zaytsev and N. Kluchnikov and A. Kravchenko and Evgeniy Burnaev},
  journal={ArXiv},
  year={2020},
  volume={abs/2006.11078}
}
  • Published 2020
  • Computer Science, Mathematics
  • ArXiv
  • An adversarial attack paradigm explores various scenarios for the vulnerability of deep learning models: minor changes of the input can force a model failure. Most state-of-the-art frameworks focus on adversarial attacks for images and other structured model inputs, but not for categorical sequence models. Successful attacks on classifiers of categorical sequences are challenging because the model input is tokens from finite sets, so a classifier score is non-differentiable with respect…
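The non-differentiability the abstract points to is the core obstacle: selecting a token by argmax (or by sampling) cuts the gradient path from the classifier score back to any continuous parameters one might want to optimize. The truncated abstract does not spell out the paper's own construction, so the sketch below is only a generic illustration, assuming PyTorch and a hypothetical toy embedding-based classifier, of how a Gumbel-Softmax relaxation (a standard trick, not necessarily the authors' exact mechanism) restores that gradient path.

```python
# Minimal sketch (not the paper's method): why a score over discrete tokens is
# non-differentiable, and how a Gumbel-Softmax relaxation recovers gradients.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
vocab_size, embed_dim, seq_len, num_classes = 100, 16, 8, 2

# Hypothetical sequence classifier over token embeddings (stand-in for the attacked model).
embedding = nn.Embedding(vocab_size, embed_dim)
classifier = nn.Sequential(nn.Flatten(), nn.Linear(seq_len * embed_dim, num_classes))

# Continuous parameters (e.g., a generator's output logits) we would like to optimize.
token_logits = torch.randn(1, seq_len, vocab_size, requires_grad=True)

# Hard path: argmax yields integer token ids, so the classifier score carries
# no gradient with respect to `token_logits`.
hard_ids = token_logits.argmax(dim=-1)              # LongTensor, non-differentiable
score_hard = classifier(embedding(hard_ids))        # gradients cannot reach token_logits

# Soft path: straight-through Gumbel-Softmax keeps the forward pass (nearly)
# one-hot while letting gradients flow back through the relaxation.
soft_onehot = F.gumbel_softmax(token_logits, tau=0.5, hard=True)  # (1, seq_len, vocab)
soft_embeds = soft_onehot @ embedding.weight                       # (1, seq_len, embed_dim)
score_soft = classifier(soft_embeds)

score_soft[0, 1].backward()
print(token_logits.grad.abs().sum().item() > 0)     # True: the score is now differentiable
```

With `hard=True` the forward pass still uses one-hot tokens while the backward pass follows the soft relaxation (straight-through estimator), which is one common way to keep a differentiable objective over discrete sequences; the paper's actual attack generator may differ.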
