Corpus ID: 222291663

How Important is the Train-Validation Split in Meta-Learning?

@article{Bai2020HowII,
  title={How Important is the Train-Validation Split in Meta-Learning?},
  author={Yu Bai and M. Chen and Pan Zhou and Tuo Zhao and J. Lee and Sham M. Kakade and Haiquan Wang and Caiming Xiong},
  journal={ArXiv},
  year={2020},
  volume={abs/2010.05843}
}
  • Fields of study: Computer Science, Mathematics
  • Abstract: Meta-learning aims to perform fast adaptation on a new task by learning a "prior" from multiple existing tasks. A common practice in meta-learning is to perform a train-validation split, where the prior adapts to the task on one split of the data and the resulting predictor is evaluated on the other split. Despite its prevalence, the importance of the train-validation split is not well understood, either in theory or in practice, particularly in comparison to the more direct non-splitting…
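The abstract contrasts two per-task objectives: adapting the prior on one split and evaluating on a held-out split, versus the non-splitting variant that adapts and evaluates on the same data. A minimal sketch of that contrast, using toy linear-regression tasks and a hypothetical gradient-based `adapt` step (all names and the setup here are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def adapt(prior, X, y, lr=0.1, steps=5):
    """One-task adaptation: a few gradient steps on squared loss from the shared prior."""
    w = prior.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def task_loss_with_split(prior, X, y, n_train):
    """Train-validation split: adapt on the first n_train samples, evaluate on the rest."""
    X_tr, y_tr = X[:n_train], y[:n_train]
    X_val, y_val = X[n_train:], y[n_train:]
    w = adapt(prior, X_tr, y_tr)
    return np.mean((X_val @ w - y_val) ** 2)

def task_loss_no_split(prior, X, y):
    """Non-splitting variant: adapt and evaluate on all of the task's data."""
    w = adapt(prior, X, y)
    return np.mean((X @ w - y) ** 2)

# A toy task: 10 samples, 3 features, noisy linear labels.
X = rng.normal(size=(10, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=10)
prior = np.zeros(3)

print(task_loss_with_split(prior, X, y, n_train=5))
print(task_loss_no_split(prior, X, y))
```

In a full meta-learning loop, the outer objective would average one of these per-task losses over many tasks and optimize the prior; the paper's question is how much the choice between the two objectives matters.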

