Corpus ID: 234742142

Universality and Optimality of Structured Deep Kernel Networks

Tizian Wenzel, Gabriele Santin, Bernard Haasdonk
Kernel-based methods yield approximation models that are flexible, efficient, and powerful. In particular, they use fixed feature maps of the data and are often backed by strong analytical results that prove their accuracy. On the other hand, the recent success of machine learning methods has been driven by deep neural networks (NNs). They achieve significant accuracy on very high-dimensional data because they can also learn efficient data representations or data-based feature…
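The "fixed feature map" viewpoint can be made concrete with a minimal kernel interpolation sketch (illustrative only; the kernel, nodes, and regularization below are assumptions, not taken from the paper):

```python
import numpy as np

def gaussian_kernel(X, Y, eps=1.0):
    # Gaussian (RBF) kernel matrix: k(x, y) = exp(-eps^2 * ||x - y||^2)
    d2 = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    return np.exp(-eps**2 * d2)

# Interpolate f(x) = sin(x) on a few scattered nodes
X = np.linspace(0, np.pi, 8)[:, None]
y = np.sin(X[:, 0])
K = gaussian_kernel(X, X)
alpha = np.linalg.solve(K + 1e-10 * np.eye(len(X)), y)  # tiny ridge for stability

def interpolant(x_new):
    # Evaluate s(x) = sum_j alpha_j * k(x, x_j)
    return gaussian_kernel(x_new, X) @ alpha

print(interpolant(np.array([[1.0]])))  # close to sin(1.0) ≈ 0.841
```

The feature map here is fixed once the kernel is chosen; the deep-kernel approaches surveyed below instead make parts of this map learnable.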

Deep Kernel Learning

We introduce scalable deep kernels, which combine the structural properties of deep learning architectures with the non-parametric flexibility of kernel methods.
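The core idea of a deep kernel, sketched minimally: apply a base kernel to the outputs of a learned feature map g, i.e. k_deep(x, y) = k(g(x), g(y)). The two-layer map and all parameters below are hypothetical stand-ins; in actual deep kernel learning they would be trained jointly with the kernel hyperparameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer feature map g(x); weights are random here,
# but would be learned in a real deep kernel model.
W1, b1 = rng.normal(size=(1, 16)), rng.normal(size=16)
W2, b2 = rng.normal(size=(16, 4)), rng.normal(size=4)

def g(X):
    return np.tanh(X @ W1 + b1) @ W2 + b2

def deep_rbf_kernel(X, Y, gamma=0.5):
    # Base RBF kernel evaluated on the representations g(x), g(y)
    GX, GY = g(X), g(Y)
    d2 = np.sum((GX[:, None, :] - GY[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * d2)

X = rng.normal(size=(5, 1))
K = deep_rbf_kernel(X, X)
# K stays a valid kernel matrix: symmetric, PSD, unit diagonal
print(np.allclose(K, K.T), np.allclose(np.diag(K), 1.0))
```

Because the composition k(g(·), g(·)) is still a positive-definite kernel, the usual kernel machinery (GP regression, SVMs, interpolation) applies unchanged on top of it.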

Deep Spectral Kernel Learning

A novel deep spectral kernel network (DSKN) is proposed to naturally integrate nonstationary and non-monotonic spectral kernels into elegant deep architectures in an interpretable way; the framework can be further generalized to cover most kernels.

Deep Kernel: Learning Kernel Function from Data Using Deep Neural Network

The experimental results show that the proposed deep kernel method outperforms the traditional methods with Gaussian kernels on most of the data sets and is more powerful than the RBF kernel for dimension reduction and visualization.

Deep Neural Network Approximation Theory

Deep networks provide exponential approximation accuracy, i.e., the approximation error decays exponentially in the number of nonzero weights in the network, for the multiplication operation, polynomials, sinusoidal functions, and certain smooth functions.
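A standard ingredient behind the multiplication result (sketched here without proof, and hedged: exact rates and constants vary across papers in this line of work) is the polarization identity, which reduces multiplication to squaring:

\[
xy = \frac{(x+y)^2 - (x-y)^2}{4},
\]

so it suffices for the network to approximate \(t \mapsto t^2\) efficiently, which deep ReLU networks achieve with error \(O(2^{-2L})\) using \(O(L)\) layers via compositions of sawtooth functions.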

A representer theorem for deep kernel learning

This paper provides a representer theorem for the concatenation of (linear combinations of) kernel functions of reproducing kernel Hilbert spaces, shows how concatenated machine learning problems can be reformulated as neural networks, and explains how this result applies to a broad class of state-of-the-art deep learning methods.

Kernel Methods for Deep Learning

A new family of positive-definite kernel functions that mimic the computation in large, multilayer neural nets is introduced; these kernels can be used in shallow architectures, such as support vector machines (SVMs), or in deep kernel-based architectures that the authors call multilayer kernel machines (MKMs).
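One well-known kernel of this flavor is the order-1 arc-cosine kernel, which corresponds to an infinitely wide one-layer network of rectified units with Gaussian weights. A minimal sketch (the function name and test points are illustrative):

```python
import numpy as np

def arc_cosine_kernel_1(X, Y):
    # Order-1 arc-cosine kernel:
    # k(x, y) = (1/pi) * ||x|| * ||y|| * (sin t + (pi - t) * cos t),
    # where t is the angle between x and y.
    nx = np.linalg.norm(X, axis=1)
    ny = np.linalg.norm(Y, axis=1)
    cos_t = np.clip((X @ Y.T) / np.outer(nx, ny), -1.0, 1.0)
    t = np.arccos(cos_t)
    return (1.0 / np.pi) * np.outer(nx, ny) * (np.sin(t) + (np.pi - t) * np.cos(t))

X = np.array([[1.0, 0.0], [0.0, 2.0]])
K = arc_cosine_kernel_1(X, X)
# On the diagonal t = 0, so k(x, x) = ||x||^2
print(np.diag(K))  # → [1. 4.]
```

Unlike the RBF kernel, this kernel is not shift-invariant: it depends on norms and angles, mirroring the geometry induced by a layer of rectified units.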

Kernel Methods for Surrogate Modeling

This chapter deals with kernel methods as a special class of techniques for surrogate modeling. Because they are meshless and do not require or depend on a grid, they are less prone to the curse of dimensionality, even for high-dimensional problems.

Kernel Flows: from learning kernels from data into the abyss

Diving into the shallows: a computational perspective on large-scale shallow learning

EigenPro iteration is introduced, based on a preconditioning scheme that uses a small number of approximately computed eigenvectors; it turns out that injecting this small (computationally inexpensive and SGD-compatible) amount of approximate second-order information leads to major improvements in convergence.
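The preconditioning idea can be illustrated on a toy kernel system K a = y: damp the gradient along the top eigendirections of K so that a larger step size, limited by the (k+1)-th eigenvalue rather than the largest, remains stable. This is a rough sketch of the principle with exact eigenvectors and full-batch gradients, not the actual EigenPro implementation (which uses approximate eigenvectors and SGD):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy kernel system: solve K @ a = y by preconditioned gradient descent.
n, k = 200, 10
X = rng.normal(size=(n, 2))
d2 = np.sum((X[:, None] - X[None, :]) ** 2, axis=-1)
K = np.exp(-d2)
y = rng.normal(size=n)

vals, vecs = np.linalg.eigh(K)          # ascending eigenvalues
top_vals, top_vecs = vals[-k:], vecs[:, -k:]
tail = vals[-k - 1]                     # largest eigenvalue outside the top-k block

def precondition(g):
    # Shrink g along the top-k eigendirections: P = I - U (I - tail/Lambda) U^T,
    # so the stable step size is set by `tail` instead of vals[-1].
    coeffs = top_vecs.T @ g
    return g - top_vecs @ ((1.0 - tail / top_vals) * coeffs)

a = np.zeros(n)
lr = 1.0 / tail
for _ in range(100):
    grad = K @ a - y
    a -= lr * precondition(grad)

print(np.linalg.norm(K @ a - y))  # residual is reduced from its initial value ||y||
```

The residual components along the damped directions contract at rate set by `tail`, which is why removing a handful of dominant eigendirections from the step-size constraint speeds up convergence so much.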