Corpus ID: 226226488

On Optimality of Meta-Learning in Fixed-Design Regression with Weighted Biased Regularization

@article{Konobeev2020OnOO,
  title={On Optimality of Meta-Learning in Fixed-Design Regression with Weighted Biased Regularization},
  author={Mikhail Konobeev and Ilja Kuzborskij and Csaba Szepesvári},
  journal={ArXiv},
  year={2020},
  volume={abs/2011.00344}
}
We consider fixed-design linear regression in the meta-learning model of Baxter (2000) and establish a problem-dependent finite-sample lower bound on the transfer risk (the risk on a newly observed task) that is valid for all estimators. Moreover, we prove that a weighted form of biased regularization, a popular technique in transfer and meta-learning, is optimal: it enjoys a problem-dependent upper bound on the risk that matches our lower bound up to a constant. Thus, our bounds characterize meta…
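As a concrete illustration of the technique the abstract names, the sketch below implements weighted biased regularization for fixed-design linear regression in plain NumPy: a ridge-style estimator shrunk toward a bias vector under a matrix-valued regularization weight, evaluated by the fixed-design risk. This is a minimal sketch under assumed notation (the names X, y, w0, Lam, weighted_biased_ridge, and fixed_design_risk are illustrative, not the paper's), not the paper's exact estimator or analysis.

```python
import numpy as np

def weighted_biased_ridge(X, y, w0, Lam):
    """Closed-form minimizer of ||y - Xw||^2 + (w - w0)^T Lam (w - w0).

    Setting the gradient to zero gives (X^T X + Lam) w = X^T y + Lam w0,
    i.e. ridge regression biased toward w0 with weight matrix Lam.
    """
    A = X.T @ X + Lam
    b = X.T @ y + Lam @ w0
    return np.linalg.solve(A, b)

def fixed_design_risk(X, w_hat, w_star):
    """Fixed-design risk: mean squared prediction error on the fixed design X."""
    r = X @ (w_hat - w_star)
    return float(r @ r) / X.shape[0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 50, 5
    X = rng.standard_normal((n, d))               # fixed design matrix
    w_star = rng.standard_normal(d)               # true task parameter
    y = X @ w_star + 0.1 * rng.standard_normal(n)  # noisy observations

    # Illustrative bias and weights; in meta-learning these would be learned.
    w0 = w_star + 0.05 * rng.standard_normal(d)    # bias close to the task
    Lam = 10.0 * np.eye(d)                         # regularization weights

    w_hat = weighted_biased_ridge(X, y, w0, Lam)
    print("transfer risk:", fixed_design_risk(X, w_hat, w_star))
```

With Lam = lam * np.eye(d) and w0 = 0 this reduces to ordinary ridge regression; in the meta-learning setting, the bias w0 and the weights Lam would be estimated from previously observed tasks rather than fixed by hand.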
3 Citations

Meta-strategy for Learning Tuning Parameters with Guarantees
How Fine-Tuning Allows for Effective Meta-Learning
A Theorem of the Alternative for Personalized Federated Learning
