The paper introduces some generalizations of Vapnik's method of structural risk minimisation (SRM). As well as making explicit some of the details of SRM, it provides a result that allows one to trade off errors on the training sample against improved generalization performance. It then considers the more general case in which the hierarchy of classes is chosen in response to the data …
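For orientation, results of this type usually bound the true error of a hypothesis by its training error plus a complexity penalty. The following is a representative VC-type bound in a standard form (an illustrative sketch only; the paper's precise statement and constants may differ): with probability at least 1 − δ over a sample of size m, every h in a class of VC dimension d satisfies

```latex
% Representative VC-type generalization bound (illustrative form only;
% constants and logarithmic factors vary between statements in the literature):
\[
  \mathrm{er}(h) \;\le\; \widehat{\mathrm{er}}(h)
  \;+\; 2\sqrt{\frac{2}{m}\left(d\ln\frac{2em}{d} + \ln\frac{4}{\delta}\right)}
\]
% where er(h) is the true error and \widehat{er}(h) the training error of h.
```

Accepting a nonzero training error in exchange for a class with smaller VC dimension d is precisely the trade-off the abstract refers to.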
Some recent work [7, 14, 15] in computational learning theory has discussed learning in situations where the teacher is helpful and can present carefully chosen sequences of labelled examples to the learner. We say a function t in a set H of functions (a hypothesis space) defined on a set X is …
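The abstract's definition is cut off; on a common reading of this line of work, a set of labelled examples specifies t in H when t is the only function in H consistent with those examples (up to functions agreeing with t everywhere). A minimal brute-force sketch under that assumption, with all names hypothetical:

```python
from itertools import combinations

def specifies(sample, target, hypotheses, domain):
    """A labelled sample (x, target(x)) specifies `target` in H if every
    hypothesis that disagrees with `target` somewhere on the domain
    already disagrees with it on the sample."""
    rivals = [h for h in hypotheses
              if any(h(x) != target(x) for x in domain)]
    return all(any(h(x) != target(x) for x in sample) for h in rivals)

def smallest_specifying_sample(target, hypotheses, domain):
    """Brute-force search for a smallest specifying sample
    (feasible only for tiny X and H)."""
    for size in range(len(domain) + 1):
        for sample in combinations(domain, size):
            if specifies(sample, target, hypotheses, domain):
                return sample
    return None
```

For a finite domain and hypothesis space this finds a smallest specifying sample directly, though the search is exponential in the size of X.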
In this paper we investigate a parameter defined for any graph, known as the Vapnik-Chervonenkis dimension (or VC dimension). For any vertex x of a graph G, the closed neighbourhood N(x) of x is the set of all vertices of G adjacent to x, together with x itself. We say that a set D of vertices of G is shattered if every subset R of D can be realised as R = D ∩ N(x) for some vertex x of G …
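To make the definition concrete, here is a minimal sketch (illustrative, not from the paper) that tests whether a vertex set D is shattered by closed neighbourhoods and computes the VC dimension of a small graph by brute force; the graph is assumed given as a dict mapping each vertex to its neighbours:

```python
from itertools import combinations

def closed_neighbourhood(graph, x):
    """N(x): x together with all vertices adjacent to x."""
    return set(graph[x]) | {x}

def is_shattered(graph, D):
    """D is shattered if every subset R of D arises as D ∩ N(x)."""
    D = set(D)
    traces = {frozenset(D & closed_neighbourhood(graph, x)) for x in graph}
    needed = {frozenset(R) for k in range(len(D) + 1)
              for R in combinations(D, k)}
    return needed <= traces

def vc_dimension(graph):
    """Largest |D| such that some D ⊆ V(G) is shattered (brute force)."""
    vertices = list(graph)
    return max((k for k in range(len(vertices) + 1)
                for D in combinations(vertices, k)
                if is_shattered(graph, D)), default=0)
```

For instance, on the three-vertex path {0: [1], 1: [0, 2], 2: [1]} the pair {0, 2} is not shattered (the empty trace never occurs), while each singleton is, so vc_dimension returns 1.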
This paper surveys certain developments in the use of probabilistic techniques for the modelling of generalization in machine learning. Building on 'uniform convergence' results in probability theory, a number of approaches to the problem of quantifying generalization have been developed in recent years. Initially these models addressed binary classification …
The paper introduces a framework for studying structural risk minimisation, viewing it in a PAC context. It then considers the more general case in which the hierarchy of classes is chosen in response to the data. This theoretically explains the impressive performance of the maximal margin hyperplane algorithm of Vapnik. It may …
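For context on the margin claim (a sketch under standard definitions, not the paper's algorithm or bound): the maximal margin hyperplane is the separating hyperplane maximising the geometric margin computed below, and data-dependent analyses of the kind described bound generalization in terms of this margin rather than the dimension of the space. The function name is illustrative:

```python
import numpy as np

def geometric_margin(w, b, X, y):
    """Geometric margin of the hyperplane w·x + b = 0 on labelled data.

    X: (n, d) array of points; y: length-n array of ±1 labels.
    The result is positive iff the hyperplane separates the sample.
    """
    w = np.asarray(w, dtype=float)
    return np.min(y * (X @ w + b)) / np.linalg.norm(w)
```

The maximal margin hyperplane is then the (w, b) maximising this quantity over all hyperplanes that separate the sample.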
In this paper, we analyze Boolean functions using a recently proposed measure of their complexity. This complexity measure, motivated by the aim of relating the complexity of a function to the generalization ability that can be obtained when it is implemented in a feed-forward neural network, is the sum of a number of components. We …
This report surveys some connections between Boolean functions and artificial neural networks. The focus is on cases in which the individual neurons are linear threshold neurons, sigmoid neurons, polynomial threshold neurons, or spiking neurons. We explore the relationships between types of artificial neural network and classes of Boolean function. In …
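As a small illustration of the neuron types mentioned (a sketch with standard textbook weight settings, not examples taken from the report): a linear threshold neuron computes a Boolean function of the form x ↦ [w·x ≥ θ], and familiar functions such as AND and majority arise from simple choices of weights and threshold:

```python
def threshold_neuron(weights, theta):
    """Linear threshold neuron: outputs 1 iff w·x >= theta, else 0."""
    def f(x):  # x is a tuple of 0/1 inputs
        return int(sum(w * xi for w, xi in zip(weights, x)) >= theta)
    return f

# AND of n inputs: all weights 1, threshold n.
AND3 = threshold_neuron([1, 1, 1], 3)

# Majority of n inputs: all weights 1, threshold ceil(n/2).
MAJ3 = threshold_neuron([1, 1, 1], 2)

assert AND3((1, 1, 1)) == 1 and AND3((1, 0, 1)) == 0
assert MAJ3((1, 1, 0)) == 1 and MAJ3((1, 0, 0)) == 0
```

By contrast, parity (XOR) cannot be computed by any single linear threshold neuron, which is one of the classical separations between neuron types and Boolean function classes that surveys of this kind discuss.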