
Language Models as Few-Shot Learner for Task-Oriented Dialogue Systems

@article{Madotto2020LanguageMA,
  title={Language Models as Few-Shot Learner for Task-Oriented Dialogue Systems},
  author={Andrea Madotto and Zihan Liu},
  journal={ArXiv},
  year={2020},
  volume={abs/2008.06239}
}
Task-oriented dialogue systems use four connected modules: Natural Language Understanding (NLU), Dialogue State Tracking (DST), Dialogue Policy (DP), and Natural Language Generation (NLG). A research challenge is to learn each module with as few samples as possible (i.e., few-shot), given the high cost of data collection. The most common and effective technique for this problem is transfer learning, where large language models, either pre-trained on text or task…
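
The few-shot idea in the abstract, adapting a pre-trained language model to a dialogue module from a handful of examples, can be sketched with prompt priming. The snippet below primes a GPT-2-style model for the NLG module; it assumes the Hugging Face transformers library, and the act => response format plus the in-prompt example pairs are hypothetical illustrations, not taken from the paper.

# Minimal sketch: few-shot priming of a pre-trained LM for the NLG module.
# Assumes the Hugging Face `transformers` library; the act => response format
# and the example pairs below are hypothetical, for illustration only.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# k-shot prompt: a few (dialogue act => system response) pairs, then the query act.
prompt = (
    "inform(food=italian, area=centre) => There is an Italian restaurant in the centre.\n"
    "request(price_range) => What price range are you looking for?\n"
    "inform(name=Curry Prince, food=indian) =>"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=30,
    do_sample=False,                      # greedy decoding keeps the demo deterministic
    pad_token_id=tokenizer.eos_token_id,  # avoids the missing-pad-token warning
)
# Decode only the newly generated tokens: the model's response for the query act.
new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))

The same priming pattern would extend to the other modules (NLU, DST, DP) by changing the input/output format of the in-prompt examples, e.g., utterance => dialogue state pairs for DST.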