Corpus ID: 15519618

VC dimension of neural networks

@inproceedings{Sontag1998VCDO,
  title={VC dimension of neural networks},
  author={Eduardo Sontag},
  year={1998}
}
This chapter presents a brief introduction to Vapnik-Chervonenkis (VC) dimension, a quantity which characterizes the difficulty of distribution-independent learning. The chapter establishes various elementary results, and discusses how to estimate the VC dimension in several examples of interest in neural network theory. 
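As an illustrative aside (not taken from the chapter itself): the VC dimension of a hypothesis class is the size of the largest point set the class can shatter, i.e. label in every possible way. The short Python sketch below makes this concrete for one-dimensional threshold classifiers h_t(x) = 1 if x >= t else 0, whose VC dimension is 1; the helper names (threshold_labels, is_shattered) are ad hoc choices for this example.

from itertools import product

# Brute-force shattering check for 1-D threshold classifiers h_t(x) = 1 if x >= t else 0.
# A point set is shattered when every 0/1 labeling of it is realized by some threshold.

def threshold_labels(points, t):
    return tuple(1 if x >= t else 0 for x in points)

def is_shattered(points):
    # Only thresholds below, between, and above the sample points can yield distinct labelings.
    xs = sorted(points)
    candidates = [xs[0] - 1.0] + [(a + b) / 2 for a, b in zip(xs, xs[1:])] + [xs[-1] + 1.0]
    achievable = {threshold_labels(points, t) for t in candidates}
    return all(lab in achievable for lab in product((0, 1), repeat=len(points)))

print(is_shattered([0.0]))       # True: any single point can be shattered
print(is_shattered([0.0, 1.0]))  # False: the labeling (1, 0) is unreachable, so the VC dimension is 1

Distribution-independent sample-complexity bounds of the kind surveyed in the chapter are stated in terms of this combinatorial quantity rather than any particular input distribution.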

On sample complexity of neural networks
TLDR
This work considers functions defined by deep neural networks as definable objects in an o-minimal expansion of the real field, and derives an almost linear bound on the sample complexity of such networks.
VC‐dimension on manifolds: a first approach
The Vapnik–Chervonenkis‐dimension is an index of the capacity of a learning machine. It has been computed in several cases, but always in a Euclidean context. This paper extends the notion to
Tangent Space Separability in Feedforward Neural Networks
TLDR
By approximating the tangent subspace, this work suggests a sparse representation that enables switching to a shallow network, GradNet, after a very early training stage, and shows that the proposed approximation of the metric improves performance and sometimes even significantly exceeds that of the original network.
Effect of Various Regularizers on Model Complexities of Neural Networks in Presence of Input Noise
TLDR
The effects of various regularization schemes on the complexity of a neural network are analyzed under varying degrees of Gaussian input noise, where complexity is characterized by the loss, the $L_2$ norm of the weights, Rademacher complexity (Directly Approximately Regularizing Complexity, DARC1), and a VC-dimension-based Low Complexity Neural Network (LCNN) measure.
Mathematical Aspects of Neural Networks
TLDR
This tutorial paper follows the dichotomy offered by the overall network structure and restricts itself to feedforward networks, recurrent networks, and self-organizing neural systems.
Learning distinct features helps, provably
TLDR
This work theoretically investigates how learning non-redundant, distinct features affects network performance, and derives novel Rademacher-complexity-based generalization bounds that depend on feature diversity for two-layer neural networks with least-squares loss.
A SURVEY OF DISCRETE MATHEMATICS IN MACHINE LEARNING
TLDR
This paper will review fundamental techniques such as Markov chains and graph searching, as well as introduce advanced concepts such as VC-Dimension, Epsilon-Nets, and Hidden Markov Models.
Expressive power of outer product manifolds on feed-forward neural networks
TLDR
The main idea is to mathematically understand and describe the hierarchical structure of feedforward neural networks via reparametrization-invariant Riemannian metrics; by computing or approximating the tangent subspace, the original network can be better utilized through sparse representations that enable switching to shallow networks after a very early training stage.
Statistical Learning Theory
This chapter presents an overview of statistical learning theory, and describes key results regarding uniform convergence of empirical means and related sample complexity. This theory provides a
...
...

References

SHOWING 1-10 OF 58 REFERENCES
Neural Networks with Quadratic VC Dimension
TLDR
It is shown that neural networks which use continuous activation functions have VC dimension at least as large as the square of the number of weights w, which settles a long-standing open question.
Perspectives of Current Research about the Complexity of Learning on Neural Nets
This chapter discusses within the framework of computational learning theory the current state of knowledge and some open problems in three areas of research about learning on feedforward neural
Polynomial bounds for VC dimension of sigmoidal neural networks
TLDR
A new method is introduced for proving explicit upper bounds on the VC dimension of general functional basis networks, and the VC dimension of analog neural networks with the sigmoid activation function σ(y) = 1/(1 + e^{-y}) is shown to be bounded by a quadratic polynomial in the number of programmable parameters.
Sample complexity for learning recurrent perceptron mappings
TLDR
This paper provides tight bounds on the sample complexity associated with fitting recurrent perceptron classifiers to experimental data.
A Theory of Learning and Generalization: With Applications to Neural Networks and Control Systems
TLDR
This paper presents Vapnik-Chervonenkis and Pollard (pseudo-) dimensions, a model of learning based on uniform convergence of empirical means, applications to neural networks and control systems, and some open problems.
Polynomial Bounds for VC Dimension of Sigmoidal and General Pfaffian Neural Networks
We introduce a new method for proving explicit upper bounds on the VC dimension of general functional basis networks and prove as an application, for the first time, that the VC dimension of analog
Feedforward Nets for Interpolation and Classification
Vapnik-Chervonenkis Dimension of Recurrent Neural Networks
Recurrent Neural Networks: Some Systems-Theoretic Aspects
This paper provides an exposition of some recent research regarding system-theoretic aspects of continuous-time recurrent (dynamic) neural networks with sigmoidal activation functions. The class of
...
...