# Adaptive Deep Kernel Learning

    @article{Tossou2019AdaptiveDK,
      title   = {Adaptive Deep Kernel Learning},
      author  = {Prudencio Tossou and Basile Dura and Fran\c{c}ois Laviolette and Mario Marchand and Alexandre Lacoste},
      journal = {ArXiv},
      year    = {2019},
      volume  = {abs/1905.12131}
    }

Deep kernel learning provides an elegant and principled framework for combining the structural properties of deep learning algorithms with the flexibility of kernel methods. It consists of learning, by means of a deep neural network, a kernel operator that is then combined with a differentiable kernel algorithm for inference. While previous work within this framework has mostly explored learning a single kernel for large datasets, we focus herein on learning a kernel family for a variety of tasks in…
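As a rough illustration of the framework described above (not the paper's implementation), a deep kernel can be sketched as a base RBF kernel applied to the output of a neural feature extractor, with exact Gaussian process regression serving as the differentiable kernel algorithm. All function names, network sizes, and weights below are illustrative; in deep kernel learning the network parameters would be trained end to end, e.g. by maximizing the GP marginal likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_net(X, W1, b1, W2, b2):
    """A small MLP g(x): the learned component of the deep kernel."""
    H = np.tanh(X @ W1 + b1)
    return np.tanh(H @ W2 + b2)

def deep_rbf_kernel(Xa, Xb, params, lengthscale=1.0):
    """Deep kernel k(x, x') = RBF(g(x), g(x')): an RBF base kernel on learned features."""
    Fa, Fb = feature_net(Xa, *params), feature_net(Xb, *params)
    sq = ((Fa[:, None, :] - Fb[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq / lengthscale**2)

# Toy 1-D regression task.
X_train = rng.uniform(-3, 3, size=(20, 1))
y_train = np.sin(X_train[:, 0]) + 0.1 * rng.standard_normal(20)
X_test = np.linspace(-3, 3, 50)[:, None]

# Random (untrained) network weights, standing in for learned parameters.
params = (rng.standard_normal((1, 16)), np.zeros(16),
          rng.standard_normal((16, 8)), np.zeros(8))

noise = 1e-2
K = deep_rbf_kernel(X_train, X_train, params) + noise * np.eye(20)
K_s = deep_rbf_kernel(X_test, X_train, params)

# Differentiable kernel algorithm: the exact GP posterior mean on the deep kernel.
mean = K_s @ np.linalg.solve(K, y_train)
```

Because every step (feature network, kernel evaluation, linear solve) is differentiable, gradients of a GP objective can flow back into the network weights, which is what lets a deep kernel be fit jointly with the kernel machine.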


#### 16 Citations

Kernel Continual Learning

- Computer Science, ICML
- 2021

Kernel continual learning, a simple but effective variant of continual learning, is introduced; it leverages the non-parametric nature of kernel methods to tackle catastrophic forgetting, and its episodic memory can be reduced for more compact storage, resulting in more efficient continual learning.

Deep Kernel Transfer in Gaussian Processes for Few-shot Learning

- Computer Science, Mathematics, ArXiv
- 2019

This work proposes a simple yet effective variant of deep kernel learning in which the kernel is transferred across tasks, called deep kernel transfer, and demonstrates that the proposed method outperforms several state-of-the-art algorithms in few-shot regression, classification, and cross-domain adaptation.

End-to-End Learning of Deep Kernel Acquisition Functions for Bayesian Optimization

- Computer Science, Mathematics, ArXiv
- 2021

This paper proposes a meta-learning method for Bayesian optimization with neural network-based kernels that minimizes the expected gap between the true optimum value and the best value found by BO.

Local Nonparametric Meta-Learning

- Computer Science, Mathematics, ArXiv
- 2020

It is shown that global, fixed-size representations often fail when confronted with certain types of out-of-distribution tasks, even when the same inductive bias is appropriate, and a novel nonparametric meta-learning algorithm is proposed that utilizes a meta-trained local learning rule.

MATE: Plugging in Model Awareness to Task Embedding for Meta Learning

- Computer Science, NeurIPS
- 2020

A novel task representation called model-aware task embedding (MATE) is proposed that incorporates not only the data distributions of different tasks, but also the complexity of the tasks through the models used.

Few-shot Learning for Spatial Regression

- Computer Science, Mathematics, ArXiv
- 2020

This work proposes a few-shot learning method for spatial regression that uses Gaussian processes to train the model efficiently and effectively, and achieves better predictive performance than existing meta-learning methods on spatial datasets.

Deep Gaussian Processes for Few-Shot Segmentation

- Computer Science, ArXiv
- 2021

This work proposes a few-shot learner formulation based on Gaussian process (GP) regression that sets a new state-of-the-art for 5-shot segmentation, with mIoU scores of 68.1 and 49.8 on PASCAL-5 and COCO-20, respectively.

Non-Gaussian Gaussian Processes for Few-Shot Regression

- Computer Science, ArXiv
- 2021

This work addresses the limitation of GPs by leveraging the flexibility of Normalizing Flows to modulate the posterior predictive distribution of the GP, and proposes an invertible ODE-based mapping that operates on each component of the random variable vectors and shares the parameters across all of them.

GP-ConvCNP: Better Generalization for Convolutional Conditional Neural Processes on Time Series Data

- Computer Science, Mathematics, ArXiv
- 2021

This work incorporates a Gaussian Process into the model of Convolutional Conditional Neural Processes, a recent addition to this family of conditional generative models, and reintroduces the possibility to sample from the model, a key feature of other members in the NP family.

The Internet of Federated Things (IoFT)

- Computer Science, IEEE Access
- 2021

The Internet of Things (IoT) is on the verge of a major paradigm shift. In the IoT system of the future, IoFT, the “cloud” will be substituted by the “crowd” where model training is brought to the…

#### References

Showing 1–10 of 32 references

Deep Kernel Learning

- Computer Science, Mathematics, AISTATS
- 2016

We introduce scalable deep kernels, which combine the structural properties of deep learning architectures with the non-parametric flexibility of kernel methods. Specifically, we transform the inputs…

Finite Rank Deep Kernel Learning

- 2018

Deep kernel learning has emerged as a principled framework to simultaneously forecast and characterize the extent of uncertainty in the forecast in regression problems. Deep kernel learning uses a…

Meta-learning with differentiable closed-form solvers

- Computer Science, Mathematics, ICLR
- 2019

The main idea is to teach a deep network to use standard machine learning tools, such as ridge regression, as part of its own internal model, enabling it to quickly adapt to novel data.

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks

- Computer Science, ICML
- 2017

We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning…

Siamese Neural Networks for One-Shot Image Recognition

- Computer Science
- 2015

A method for learning siamese neural networks, which employ a unique structure to naturally rank similarity between inputs, that achieves strong results exceeding those of other deep learning models, with near state-of-the-art performance on one-shot classification tasks.

Matching Networks for One Shot Learning

- Computer Science, Mathematics, NIPS
- 2016

This work employs ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories to learn a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types.

Conditional Neural Processes

- Computer Science, Mathematics, ICML
- 2018

Conditional Neural Processes are inspired by the flexibility of stochastic processes such as GPs, but are structured as neural networks and trained via gradient descent, yet scale to complex functions and large datasets.

Deep Sets

- Computer Science, Mathematics, NIPS
- 2017

The main theorem characterizes the permutation invariant objective functions and provides a family of functions to which any permutation covariant objective function must belong, which enables the design of a deep network architecture that can operate on sets and which can be deployed on a variety of scenarios including both unsupervised and supervised learning tasks.

Learning to Compare: Relation Network for Few-Shot Learning

- Computer Science, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
- 2018

A conceptually simple, flexible, and general framework for few-shot learning, in which a classifier must learn to recognise new classes given only a few examples of each; the framework is easily extended to zero-shot learning.