Corpus ID: 209475786

FreeLB: Enhanced Adversarial Training for Natural Language Understanding.

@inproceedings{Zhu2019FreeLBEA,
  title={FreeLB: Enhanced Adversarial Training for Natural Language Understanding},
  author={Chen Zhu and Yu Cheng and Zhe Gan and Siqi Sun and Tom Goldstein and Jingjing Liu},
  booktitle={ICLR},
  year={2020}
}
Adversarial training, which minimizes the maximal risk for label-preserving input perturbations, has proved to be effective for improving the generalization of language models. In this work, we propose a novel adversarial training algorithm, FreeLB, that promotes higher invariance in the embedding space, by adding adversarial perturbations to word embeddings and minimizing the resultant adversarial risk inside different regions around input samples. To validate the effectiveness of the proposed…
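The abstract's core idea — ascend on an embedding-space perturbation for several steps while accumulating the averaged parameter gradient, then descend once — can be sketched as follows. This is a minimal illustration, not the authors' implementation: FreeLB operates on Transformer word embeddings, whereas here a toy logistic-regression loss stands in, and all names (`freelb_step`, `K`, `alpha`, `eps`) are illustrative choices.

```python
import numpy as np

def freelb_step(W, x, y, K=3, alpha=0.1, eps=0.5, lr=0.05, rng=None):
    """One FreeLB-style update on a toy logistic-regression loss.

    W : weight vector, x : "embedding" vector, y : label in {0, 1}.
    Inner loop: K ascent steps on the perturbation delta, while the
    parameter gradient is accumulated (averaged) across the K steps.
    """
    rng = rng or np.random.default_rng(0)
    # Initialise delta uniformly, then project into the eps-ball.
    delta = rng.uniform(-eps, eps, size=x.shape)
    delta *= eps / max(np.linalg.norm(delta), eps)

    grad_W_acc = np.zeros_like(W)
    for _ in range(K):
        z = W @ (x + delta)
        p = 1.0 / (1.0 + np.exp(-z))            # sigmoid prediction
        err = p - y                              # dLoss/dz for cross-entropy
        grad_W_acc += err * (x + delta) / K      # accumulate averaged param grad
        grad_delta = err * W                     # dLoss/d(delta)
        # Normalised ascent step on delta (increase the adversarial risk).
        delta += alpha * grad_delta / (np.linalg.norm(grad_delta) + 1e-12)
        # Project delta back into the eps-ball.
        n = np.linalg.norm(delta)
        if n > eps:
            delta *= eps / n

    # Single descent step on the gradient averaged over the K adversarial points.
    return W - lr * grad_W_acc
```

Averaging the parameter gradient over the K perturbed points is what distinguishes this scheme from plain K-step PGD training, which would descend only on the final, worst-case perturbation.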

