# Kernels for Vector-Valued Functions: a Review

```bibtex
@article{lvarez2012KernelsFV,
  title={Kernels for Vector-Valued Functions: a Review},
  author={Mauricio A {\'A}lvarez and Lorenzo Rosasco and Neil D. Lawrence},
  journal={ArXiv},
  year={2012},
  volume={abs/1106.6251}
}
```
• Published 30 June 2011
• Computer Science
• ArXiv
Kernel methods are among the most popular techniques in machine learning. From a regularization perspective they play a central role in regularization theory as they provide a natural choice for the hypotheses space and the regularization functional through the notion of reproducing kernel Hilbert spaces. From a probabilistic perspective they are the key in the context of Gaussian processes, where the kernel function is known as the covariance function. Traditionally, kernel methods have been…
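The abstract's two perspectives meet in vector-valued Gaussian process regression, where an operator-valued (matrix-valued) kernel plays the role of the covariance function. Below is a minimal sketch, assuming a separable intrinsic coregionalization kernel K((x, d), (x', d')) = k(x, x') B[d, d'] with a Gaussian scalar kernel; the function names and hyperparameters are illustrative, not taken from the review.

```python
import numpy as np

def rbf(X1, X2, lengthscale=1.0):
    """Scalar RBF (Gaussian) kernel matrix between the rows of X1 and X2."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def vv_gp_posterior_mean(X, Y, Xstar, B, noise=1e-2):
    """Posterior mean of a vector-valued GP whose covariance is the separable
    kernel K((x, d), (x', d')) = rbf(x, x') * B[d, d'] (intrinsic coregionalization)."""
    n, D = Y.shape
    K = np.kron(rbf(X, X), B) + noise * np.eye(n * D)   # joint covariance over all outputs
    Ks = np.kron(rbf(Xstar, X), B)                      # test/train cross-covariance
    alpha = np.linalg.solve(K, Y.reshape(-1))           # Y.reshape(-1) stacks outputs per input
    return (Ks @ alpha).reshape(len(Xstar), D)

# Toy usage: two correlated outputs of a one-dimensional input.
X = np.linspace(0.0, 1.0, 10)[:, None]
Y = np.column_stack([np.sin(6 * X[:, 0]), np.cos(6 * X[:, 0])])
B = np.array([[1.0, 0.8], [0.8, 1.0]])                  # output (coregionalization) matrix
print(vv_gp_posterior_mean(X, Y, np.array([[0.5]]), B))
```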
618 Citations

## Citations

### Online Learning with Multiple Operator-valued Kernels

• Computer Science
ArXiv
• 2013
Two online algorithms for learning a vector-valued function f while taking into account the output structure are described: one extends the standard kernel-based online learning algorithm NORMA from the scalar-valued to the operator-valued setting, and the other addresses the limitation of pre-defining the output structure in ONORMA by sequentially learning a linear combination of operator-valued kernels.
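For context, here is a minimal sketch of a NORMA-style online update in a vector-valued RKHS, assuming a squared loss and a separable kernel k(x, x') B; it is an illustrative simplification, not the ONORMA algorithm as published, and the class and parameter names are invented.

```python
import numpy as np

class OnlineVectorValuedKernel:
    """NORMA-style stochastic gradient update in a vv-RKHS with a
    separable kernel K(x, x') = k(x, x') * B, squared loss (sketch only)."""

    def __init__(self, kernel, B, lr=0.5, reg=0.01):
        self.kernel, self.B = kernel, B
        self.lr, self.reg = lr, reg
        self.centers, self.coefs = [], []        # expansion f(x) = sum_i k(x, x_i) B c_i

    def predict(self, x):
        out = np.zeros(self.B.shape[0])
        for xi, ci in zip(self.centers, self.coefs):
            out += self.kernel(x, xi) * (self.B @ ci)
        return out

    def update(self, x, y):
        err = self.predict(x) - y                # gradient of 0.5*||f(x) - y||^2 w.r.t. f(x)
        self.coefs = [(1 - self.lr * self.reg) * c for c in self.coefs]  # regularization shrinkage
        self.centers.append(x)
        self.coefs.append(-self.lr * err)        # new expansion coefficient

# Toy usage with a Gaussian scalar kernel and an identity output matrix.
k = lambda a, b: np.exp(-np.sum((a - b) ** 2))
model = OnlineVectorValuedKernel(k, B=np.eye(2))
for t in range(100):
    x = np.random.rand(3)
    y = np.array([x.sum(), x[0] - x[1]])
    model.update(x, y)
```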

### Random Fourier Features For Operator-Valued Kernels

• Computer Science, Mathematics
ACML
• 2016
A general principle for operator-valued random Fourier feature construction is presented, relying on a generalization of Bochner's theorem for translation-invariant operator-valued Mercer kernels, and uniform convergence of the kernel approximation is proved for both bounded and unbounded operator random Fourier features.
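A hedged illustration of the idea in its simplest separable case, where K(x, x') = k(x, x') B with a Gaussian scalar kernel: standard scalar random Fourier features (via Bochner's theorem) are paired with a square root of the output matrix B. The paper's construction is more general; the helper names below are invented for this sketch.

```python
import numpy as np

def scalar_rff(X, n_features, lengthscale=1.0, seed=0):
    """Random Fourier features for the Gaussian kernel k(x, x') = exp(-||x - x'||^2 / (2 l^2)).
    By Bochner's theorem, frequencies are drawn from the kernel's spectral density."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=1.0 / lengthscale, size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

def operator_valued_rff(X, B, n_features=500):
    """Features Psi(x) such that Psi(x) @ Psi(x').T ~= k(x, x') * B
    (separable operator-valued kernel only; illustrative, not the paper's construction)."""
    Z = scalar_rff(X, n_features)                          # (n, m) scalar features
    L = np.linalg.cholesky(B + 1e-12 * np.eye(len(B)))     # B = L L^T
    return np.stack([np.kron(L, z[None, :]) for z in Z])   # (n, D, D*m)

X = np.random.rand(5, 3)
B = np.array([[1.0, 0.5], [0.5, 1.0]])
Psi = operator_valued_rff(X, B)
approx = Psi[0] @ Psi[1].T        # approximately k(x0, x1) * B
```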

### Multi-task Learning in Vector-valued Reproducing Kernel Banach Spaces with the $\ell^1$ Norm.

• Computer Science, Mathematics
• 2019
A class of vector-valued reproducing kernel Banach spaces with the $\ell^1$ norm is constructed so that the resulting spaces have desirable properties, including the crucial linear representer theorem.

### Kernel Mean Embedding of Distributions: A Review and Beyond

• Computer Science
Found. Trends Mach. Learn.
• 2017
A comprehensive review of existing work and recent advances in Hilbert space embeddings of distributions, with a discussion of the most challenging issues and open problems that could lead to new research directions.
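For reference, a minimal sketch of the empirical kernel mean embedding, mu_P = (1/n) sum_i k(x_i, .), and the resulting (biased) maximum mean discrepancy estimate between two samples, using a Gaussian kernel; the bandwidth and sample sizes below are arbitrary.

```python
import numpy as np

def gaussian_gram(X, Y, bandwidth=1.0):
    """Gaussian kernel Gram matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def mmd2(X, Y, bandwidth=1.0):
    """Biased empirical estimate of ||mu_P - mu_Q||^2 in the RKHS,
    where mu_P is the empirical kernel mean embedding of the sample X."""
    Kxx = gaussian_gram(X, X, bandwidth)
    Kyy = gaussian_gram(Y, Y, bandwidth)
    Kxy = gaussian_gram(X, Y, bandwidth)
    return Kxx.mean() + Kyy.mean() - 2 * Kxy.mean()

X = np.random.normal(0.0, 1.0, size=(200, 2))
Y = np.random.normal(0.5, 1.0, size=(200, 2))
print(mmd2(X, Y))   # larger values indicate distributions the kernel can distinguish
```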

### Operator-valued Kernels for Learning from Functional Response Data

• Computer Science
J. Mach. Learn. Res.
• 2016
In this paper we consider the problems of supervised classification and regression in the case where attributes and labels are functions: each data point is represented by a set of functions, and the label is itself a function.

### Learning with Operator-valued Kernels in Reproducing Kernel Krein Spaces

• Mathematics, Computer Science
NeurIPS
• 2020
This work considers operator-valued kernels that are not necessarily positive definite, and proposes an iterative Operator-based Minimum Residual (OpMINRES) algorithm for solving the loss stabilization problem.

### Online Learning with Operator-valued Kernels

• Computer Science
ESANN
• 2015
An online algorithm, OLOK, is described that extends the standard kernel-based online learning algorithm NORMA from the scalar-valued to the operator-valued setting, and a cumulative error bound that holds for both classification and regression is reported.

### On the Dualization of Operator-Valued Kernel Machines

• Computer Science
ArXiv
• 2019
This work investigates how to use the duality principle to handle different families of loss functions that are as yet unexplored within vector-valued RKHSs (vv-RKHSs).

### Nonlinear Functional Output Regression: a Dictionary Approach

• Computer Science
AISTATS
• 2021
In many fields, each data instance consists of a high number of measurements of the same underlying phenomenon. Such high-dimensional data generally enjoys strong smoothness across features, which can…

## References

Showing 1–10 of 115 references

• Computer Science
NIPS
• 2004
This paper provides a foundation for multi-task learning using reproducing kernel Hilbert spaces of vector-valued functions, using classes of matrix-valued kernels that are linear and of the dot-product or translation-invariant type.
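The separable (sum-of-separable) form usually meant by such classes, sketched here under the assumption of a finite sum of scalar kernels coupled by positive semidefinite output matrices:

```latex
% Sketch of a matrix-valued (multi-task) kernel built from scalar kernels k_j
% and Q x Q positive semidefinite output matrices B_j coupling the tasks.
K(x, x') \;=\; \sum_{j=1}^{m} k_j(x, x')\, B_j,
\qquad
k_j(x, x') = \langle x, x' \rangle \ \text{(dot product)}
\quad\text{or}\quad
k_j(x, x') = \kappa_j(x - x') \ \text{(translation invariant)}.
```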

• Computer Science
J. Mach. Learn. Res.
• 2008
The primary goal here is to derive conditions which ensure that the kernel K is universal, which means that on every compact subset of the input space, every continuous function with values in Y can be uniformly approximated by sections of the kernel.

### On Learning Vector-Valued Functions

• Mathematics, Computer Science
Neural Computation
• 2005
This letter provides a study of learning in a Hilbert space of vector-valued functions, derives the form of the minimal norm interpolant to a finite set of data, and applies it to study regularization functionals that are important in learning theory.
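The form of that minimal norm interpolant, written for an operator-valued kernel K and outputs y_i in the output space Y:

```latex
% Representer-theorem form of the minimal norm interpolant in a vector-valued
% RKHS: a finite kernel expansion with coefficients c_j in the output space Y.
f^{\ast}(x) \;=\; \sum_{j=1}^{n} K(x, x_j)\, c_j,
\qquad
\text{where } \sum_{j=1}^{n} K(x_i, x_j)\, c_j = y_i, \quad i = 1, \dots, n .
```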

### Characterizing the Function Space for Bayesian Kernel Models

• Mathematics, Computer Science
J. Mach. Learn. Res.
• 2007
A coherent Bayesian kernel model based on an integral operator defined as the convolution of a kernel with a signed measure is studied, creating a function theoretic foundation for using non-parametric prior specifications in Bayesian modeling, such as Gaussian process and Dirichlet process prior distributions.

### Learning Multiple Tasks with Kernel Methods

• Computer Science
J. Mach. Learn. Res.
• 2005
The experiments show that learning multiple related tasks simultaneously using the proposed approach can significantly outperform standard single-task learning, particularly when there are many related tasks but few data per task.

### Multi-output learning via spectral filtering

• Mathematics, Computer Science
Machine Learning
• 2012
A finite-sample bound on the excess risk of the obtained estimator allows consistency to be proved for both regression and multi-category classification, and some promising results for the proposed algorithms are presented.
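A minimal sketch of the spectral-filtering template on the Gram matrix, shown with the Tikhonov filter (which recovers kernel ridge regression); other filters such as truncated eigendecomposition or Landweber iteration fit the same pattern. The function name and signature are illustrative, not the paper's notation.

```python
import numpy as np

def spectral_filter_coefficients(K, Y, lam, filter_fn=None):
    """Expansion coefficients C with f(x) = sum_i k(x, x_i) C[i], obtained by
    applying a spectral filter g_lam to the eigenvalues of the Gram matrix K.
    Tikhonov filter g_lam(s) = 1 / (s + n * lam) recovers kernel ridge regression."""
    n = K.shape[0]
    if filter_fn is None:
        filter_fn = lambda s: 1.0 / (s + n * lam)
    s, U = np.linalg.eigh(K)                                  # K = U diag(s) U^T
    G = U @ np.diag(filter_fn(np.clip(s, 0.0, None))) @ U.T   # filtered "inverse" of K
    return G @ Y                                              # (n, T) coefficients, multi-output Y

# Toy usage with a Gaussian Gram matrix and two outputs.
X = np.random.rand(50, 3)
Y = np.column_stack([X.sum(1), X[:, 0] - X[:, 1]])
K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
C = spectral_filter_coefficients(K, Y, lam=1e-2)              # shape (50, 2)
```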

### Some Properties of Regularized Kernel Methods

• Mathematics, Computer Science
J. Mach. Learn. Res.
• 2004
A quantitative version of the representer theorem is proved, holding for both regression and classification, for both differentiable and non-differentiable loss functions, and for arbitrary offset terms.

### Gaussian Processes for Machine Learning

• Computer Science
• 2009
The treatment is comprehensive and self-contained, targeted at researchers and students in machine learning and applied statistics, and deals with the supervised learning problem for both regression and classification.
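For reference, the standard (scalar-output) Gaussian process regression predictive equations that the vector-valued case generalizes, with covariance function k, noise variance sigma^2, k_* the vector of covariances between the test input and the training inputs, and K the training Gram matrix:

```latex
% GP regression predictive mean and variance at a test input x_*, where
% k_* = [k(x_*, x_1), ..., k(x_*, x_n)]^T and K_{ij} = k(x_i, x_j).
\bar{f}(x_{*}) \;=\; \mathbf{k}_{*}^{\top} \left( K + \sigma^{2} I \right)^{-1} \mathbf{y},
\qquad
\operatorname{Var}\!\left[ f(x_{*}) \right] \;=\; k(x_{*}, x_{*}) - \mathbf{k}_{*}^{\top} \left( K + \sigma^{2} I \right)^{-1} \mathbf{k}_{*}.
```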