Corpus ID: 234742142

Universality and Optimality of Structured Deep Kernel Networks

@article{Wenzel2021UniversalityAO,
  title={Universality and Optimality of Structured Deep Kernel Networks},
  author={Tizian Wenzel and Gabriele Santin and Bernard Haasdonk},
  journal={ArXiv},
  year={2021},
  volume={abs/2105.07228}
}
Kernel based methods yield approximation models that are flexible, efficient and powerful. In particular, they rely on fixed feature maps of the data and are often backed by strong analytical results that prove their accuracy. On the other hand, the recent success of machine learning methods has been driven by deep neural networks (NNs). They achieve significant accuracy on very high-dimensional data, since they are also able to learn efficient data representations or data-based feature… 
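
To make the contrast drawn in the abstract concrete, a kernel model relies on a kernel (equivalently, a feature map) that is fixed a priori, whereas a deep network learns its feature representation from the data; a schematic side-by-side, with notation chosen here only for illustration and not taken from the paper, is

f_kernel(x) = \sum_{i=1}^{n} \alpha_i \, k(x, x_i) \quad (k \text{ fixed a priori}),
\qquad
f_NN(x) = W_L \, \sigma\big( \cdots \sigma(W_1 x + b_1) \cdots \big) + b_L,

where the weights W_l, b_l (and hence the feature representation) are learned from the data.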

References

Showing 1-10 of 41 references

Deep Kernel Learning

We introduce scalable deep kernels, which combine the structural properties of deep learning architectures with the non-parametric flexibility of kernel methods. Specifically, we transform the inputs of a base kernel with a deep architecture… 
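
As a minimal sketch of this construction in NumPy (assuming a Gaussian base kernel and a small tanh network as the input transformation; all names, shapes and the random initialization below are illustrative, not the authors' implementation):

import numpy as np

rng = np.random.default_rng(0)

def mlp_features(X, W1, b1, W2, b2):
    # Small feed-forward "warping" of the inputs; in deep kernel learning these
    # parameters are learned jointly with the kernel hyperparameters.
    return np.tanh(np.tanh(X @ W1 + b1) @ W2 + b2)

def gaussian_kernel(A, B, lengthscale=1.0):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq / (2.0 * lengthscale ** 2))

def deep_kernel(X, Y, params, lengthscale=1.0):
    # k_deep(x, x') = k_base(g(x), g(x')): a base kernel on transformed inputs.
    return gaussian_kernel(mlp_features(X, *params), mlp_features(Y, *params), lengthscale)

# Illustrative, randomly initialized (untrained) transformation.
d, h, m = 3, 16, 8
params = (rng.normal(size=(d, h)), np.zeros(h), rng.normal(size=(h, m)), np.zeros(m))
X = rng.normal(size=(20, d))
K = deep_kernel(X, X, params)   # symmetric positive semi-definite 20 x 20 Gram matrix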

Deep Spectral Kernel Learning

A novel deep spectral kernel network (DSKN) is proposed to naturally integrate non-stationary and non-monotonic spectral kernels into elegant deep architectures in an interpretable way; the framework can be further generalized to cover most kernels.
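
The DSKN architecture itself is not reproduced here; as a rough illustration of what a single spectral-kernel layer amounts to, a stationary kernel can be represented through a sampled spectral density via random Fourier features (a standard construction, used here only as an assumed stand-in):

import numpy as np

rng = np.random.default_rng(1)

def random_fourier_features(X, n_features=256, lengthscale=1.0):
    # phi(x)^T phi(x') approximates a stationary kernel whose spectral density is
    # the sampling distribution of W (Bochner's theorem); here a Gaussian kernel.
    d = X.shape[1]
    W = rng.normal(scale=1.0 / lengthscale, size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = rng.normal(size=(10, 3))
Phi = random_fourier_features(X)
K_approx = Phi @ Phi.T   # approximate Gram matrix of the corresponding spectral kernel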

Deep Kernel: Learning Kernel Function from Data Using Deep Neural Network

The experimental results show that the proposed deep kernel method outperforms the traditional methods with Gaussian kernels on most of the data sets and is shown to be more powerful in dimension reduction and visualization than the RBF kernel.

Deep Neural Network Approximation Theory

Deep networks are shown to provide exponential approximation accuracy, i.e., the approximation error decays exponentially in the number of nonzero weights in the network, for the multiplication operation, polynomials, sinusoidal functions, and certain smooth functions.
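
A typical reasoning step behind such results (sketched here from the general approximation-theory literature, not quoted from this reference) reduces multiplication to squaring and approximates the square function by a deep ReLU network whose error decays exponentially in the depth:

xy = \tfrac{1}{4}\big[(x+y)^2 - (x-y)^2\big],
\qquad
\sup_{t \in [0,1]} \big| t^2 - f_L(t) \big| \le C \, 2^{-cL},

for a ReLU network f_L whose depth and number of nonzero weights grow linearly in L and constants C, c > 0; combining the two statements yields exponentially accurate approximation of multiplication, and polynomials follow by iterating products.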

A representer theorem for deep kernel learning

This paper provides a representer theorem for the concatenation of (linear combinations of) kernel functions of reproducing kernel Hilbert spaces, shows how such concatenated machine learning problems can be reformulated as neural networks, and discusses how the result applies to a broad class of state-of-the-art deep learning methods.
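
For orientation, the classical shallow representer theorem that this result generalizes states (in one common form, with the regularizer taken as a squared RKHS norm for simplicity) that

\min_{f \in \mathcal{H}_k} \; \sum_{i=1}^{n} L\big(y_i, f(x_i)\big) + \lambda \|f\|_{\mathcal{H}_k}^2
\quad\Longrightarrow\quad
f^{*}(\cdot) = \sum_{i=1}^{n} \alpha_i \, k(\cdot, x_i), \qquad \alpha_i \in \mathbb{R},

i.e., a minimizer admits a finite expansion in kernel functions centered at the training data; the cited paper establishes an analogous layer-wise expansion for concatenations of kernel maps, whose exact statement is not reproduced here.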

Learning and Generalization in Overparameterized Neural Networks, Going Beyond Two Layers

It is proved that overparameterized neural networks can learn some notable concept classes, including two- and three-layer networks with fewer parameters and smooth activations, via SGD (stochastic gradient descent) or its variants in polynomial time using polynomially many samples.

Kernel Methods for Deep Learning

A new family of positive-definite kernel functions that mimic the computation in large, multilayer neural nets is introduced; these kernels can be used in shallow architectures, such as support vector machines (SVMs), or in deep kernel-based architectures that the authors call multilayer kernel machines (MKMs).
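
As a hedged illustration, recalled from that line of work rather than quoted from it, the degree-one member of this family (corresponding to a single layer of rectified linear units) has the closed form

k_1(x, y) = \frac{1}{\pi} \, \|x\| \, \|y\| \, \big( \sin\theta + (\pi - \theta)\cos\theta \big),
\qquad
\theta = \arccos\!\left( \frac{\langle x, y \rangle}{\|x\|\,\|y\|} \right),

and the multilayer machines are obtained by iterating the construction, applying the kernel in the feature space induced by the previous layer.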

A representer theorem for deep neural networks

M. Unser, J. Mach. Learn. Res., 2019
A general representer theorem for deep neural networks is derived that makes a direct connection with splines and sparsity, and it is shown that the optimal network configuration can be achieved with activation functions that are nonuniform linear splines with adaptive knots.

Kernel Methods for Surrogate Modeling

This chapter deals with kernel methods as a special class of techniques for surrogate modeling; because they are meshless and do not require or depend on a grid, they are less prone to the curse of dimensionality, even for high-dimensional problems.
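
As a minimal sketch of this mesh-free setting (a regularized Gaussian-kernel interpolant of scattered data; the function names and the stand-in test function below are illustrative assumptions, not code from the chapter):

import numpy as np

def gaussian_kernel(X, Y, lengthscale=0.5):
    # Gaussian kernel matrix on scattered (grid-free) points.
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq / (2.0 * lengthscale ** 2))

def fit_surrogate(X, y, reg=1e-10):
    # Kernel surrogate s(x) = sum_i alpha_i k(x, x_i); reg stabilizes the solve.
    K = gaussian_kernel(X, X)
    return np.linalg.solve(K + reg * np.eye(len(X)), y)

def evaluate_surrogate(X_train, alpha, X_new):
    return gaussian_kernel(X_new, X_train) @ alpha

# Scattered samples of an expensive model in 5 dimensions -- no grid is needed.
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 5))
y = np.sin(X.sum(axis=1))            # stand-in for the expensive simulation
alpha = fit_surrogate(X, y)
X_test = rng.uniform(size=(10, 5))
print(evaluate_surrogate(X, alpha, X_test))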