Corpus ID: 218763259

Text-to-Text Pre-Training for Data-to-Text Tasks

@article{Kale2020TexttoTextPF,
  title={Text-to-Text Pre-Training for Data-to-Text Tasks},
  author={Mihir Kale},
  journal={ArXiv},
  year={2020},
  volume={abs/2005.10433}
}
  • Mihir Kale
  • Published 2020
  • Computer Science
  • ArXiv
  • We study the pre-train + fine-tune strategy for data-to-text tasks. Fine-tuning T5 achieves state-of-the-art results on the WebNLG, MultiWOZ and ToTTo benchmarks. Moreover, the models are fully end-to-end and do not rely on any intermediate planning steps, delexicalization or copy mechanisms. T5 pre-training also enables stronger generalization, as evidenced by large improvements on out-of-domain test sets. We hope our work serves as a useful baseline for future research, as pre-training…
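
    A minimal sketch of the recipe described in the abstract, using the HuggingFace transformers library rather than the author's original code; the "subject | predicate | object" linearization, the task prefix and the checkpoint name are illustrative assumptions, not the exact setup reported in the paper:

    # Sketch: fine-tuning T5 for data-to-text as plain text-to-text generation.
    # The structured input is flattened into a string, so no planning,
    # delexicalization or copy mechanism is needed. Prefix, linearization and
    # example data below are hypothetical.
    from transformers import T5ForConditionalGeneration, T5Tokenizer

    tokenizer = T5Tokenizer.from_pretrained("t5-base")
    model = T5ForConditionalGeneration.from_pretrained("t5-base")

    # Hypothetical WebNLG-style example: RDF triples joined into one source string.
    source = "translate Graph to English: John_Doe | birthPlace | London && John_Doe | occupation | Engineer"
    target = "John Doe, an engineer, was born in London."

    inputs = tokenizer(source, return_tensors="pt")
    labels = tokenizer(target, return_tensors="pt").input_ids

    # One fine-tuning step: standard maximum-likelihood loss on the target tokens.
    loss = model(**inputs, labels=labels).loss
    loss.backward()

    # After fine-tuning, generation is fully end-to-end.
    output_ids = model.generate(**inputs, max_length=64)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))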
