Published 2019

FreeLB: Enhanced Adversarial Training for Natural Language Understanding

@inproceedings{zhu2019freelb,
  title={FreeLB: Enhanced Adversarial Training for Natural Language Understanding},
  author={Zhu, Chen and Cheng, Yu and Gan, Zhe and Sun, Siqi and Goldstein, Tom and Liu, Jingjing},
  year={2019}
}
Adversarial training, which minimizes the maximal risk for label-preserving input perturbations, has proved to be effective for improving the generalization of language models. In this work, we propose a novel adversarial training algorithm, FreeLB, that promotes higher robustness and invariance in the embedding space, by adding adversarial perturbations to word embeddings and minimizing the resultant adversarial risk inside different regions around input samples. To validate the effectiveness…
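The procedure the abstract describes (perturb the input embeddings, take gradient-ascent steps on the perturbation inside a norm ball, and accumulate parameter gradients across those steps) can be sketched as follows. This is a minimal numpy illustration on a toy linear model with squared loss, not the authors' implementation; the model, the closed-form gradients, and the hyperparameters (`eps`, `alpha`, `lr`, `K`) are all illustrative assumptions, and a real implementation would use autograd on a neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy differentiable "model": squared loss of a linear predictor.
# loss(w, x) = (w.x - y)^2, with closed-form gradients so the sketch
# stays self-contained (real adversarial training uses autograd).
def loss(w, x, y):
    return (w @ x - y) ** 2

def grad_w(w, x, y):
    return 2.0 * (w @ x - y) * x

def grad_x(w, x, y):  # gradient w.r.t. the (perturbed) embedding
    return 2.0 * (w @ x - y) * w

def freelb_step(w, x, y, eps=0.1, alpha=0.03, lr=0.1, K=3):
    """One FreeLB-style update: K ascent steps on the embedding
    perturbation delta inside an eps-ball, accumulating the parameter
    gradient at each step, then one descent step on the averaged
    gradient."""
    delta = rng.uniform(-eps, eps, size=x.shape)   # random init
    norm = np.linalg.norm(delta)
    if norm > eps:                                  # keep delta in the L2 ball
        delta *= eps / norm
    g_acc = np.zeros_like(w)
    for _ in range(K):
        x_adv = x + delta
        g_acc += grad_w(w, x_adv, y)                # accumulate "free" gradients
        g_delta = grad_x(w, x_adv, y)
        # normalized ascent step on the perturbation
        delta = delta + alpha * g_delta / (np.linalg.norm(g_delta) + 1e-12)
        norm = np.linalg.norm(delta)
        if norm > eps:                              # project back onto the ball
            delta *= eps / norm
    return w - lr * g_acc / K                       # descend on averaged gradient

w = np.array([0.5, -0.2])
x = np.array([1.0, 2.0])
y = 1.0
for _ in range(50):
    w = freelb_step(w, x, y)
```

Because the parameter gradient is accumulated at every ascent step, the K adversarial examples come at roughly the cost of one forward/backward pass over each, which is the "free" idea the method builds on.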
