Corpus ID: 203593625

Chameleon: Learning Model Initializations Across Tasks With Different Schemas

@article{Brinkmeyer2019ChameleonLM,
  title={Chameleon: Learning Model Initializations Across Tasks With Different Schemas},
  author={Lukas Brinkmeyer and Rafael R{\^e}go Drumond and Randolf Scholz and Josif Grabocka and Lars Schmidt-Thieme},
  journal={ArXiv},
  year={2019},
  volume={abs/1909.13576}
}
  • Lukas Brinkmeyer, Rafael Rêgo Drumond, Randolf Scholz, Josif Grabocka, Lars Schmidt-Thieme
  • Published in ArXiv 2019
  • Mathematics, Computer Science
  • Parametric models, and particularly neural networks, require weight initialization as a starting point for gradient-based optimization. In most current practice, this is accomplished by using some form of random initialization. Instead, recent work shows that a specific initial parameter set can be learned from a population of tasks, i.e., dataset and target variable for supervised learning tasks. Using this initial parameter set leads to faster convergence for new tasks (model-agnostic meta-learning).
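
The learned-initialization idea the abstract describes can be made concrete with a small sketch. Below is a minimal first-order, Reptile-style meta-training loop (in the spirit of the referenced "On First-Order Meta-Learning Algorithms") on a toy population of 1-D regression tasks. The task setup and all names are illustrative assumptions; this is not the paper's Chameleon component, which additionally handles tasks with different schemas.

```python
# Minimal sketch: learning a shared initialization across a task population
# via a first-order Reptile-style update. Toy setup, purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    # Hypothetical task population: 1-D linear regression y = w*x + b,
    # with task-specific parameters (w, b) drawn per task.
    w, b = rng.normal(size=2)
    def batch(n=16):
        x = rng.uniform(-1.0, 1.0, size=n)
        return x, w * x + b
    return batch

def sgd_adapt(theta, batch, inner_steps=5, lr=0.05):
    # Adapt a *copy* of the shared initialization to one task with plain SGD
    # on the squared error; gradients are written out by hand for clarity.
    theta = theta.copy()
    for _ in range(inner_steps):
        x, y = batch()
        pred = theta[0] * x + theta[1]
        grad_w = np.mean(2.0 * (pred - y) * x)
        grad_b = np.mean(2.0 * (pred - y))
        theta -= lr * np.array([grad_w, grad_b])
    return theta

theta = np.zeros(2)   # the shared initialization being meta-learned
meta_lr = 0.1
for step in range(1000):
    adapted = sgd_adapt(theta, sample_task())
    # Reptile update: move the initialization toward the adapted weights.
    theta += meta_lr * (adapted - theta)

print("learned initialization (w, b):", theta)
```

Starting gradient descent on a newly sampled task from the meta-learned theta, rather than from a random vector, is what yields the faster convergence the abstract refers to.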


    References

    Publications referenced by this paper (partial list; the paper cites 33 references).

    • On First-Order Meta-Learning Algorithms (highly influential)

    • Supervised reptile — A. Nichol, J. Achiam, J. Schulman, https://github.com/openai/supervisedreptile, 2018 (highly influential)

    • Adam: A Method for Stochastic Optimization (highly influential)

    • Learning Transferable Architectures for Scalable Image Recognition

    • Learning to Compare: Relation Network for Few-Shot Learning

    • Local Deep-Feature Alignment for Unsupervised Dimension Reduction

    • Multi-task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics