
The paper introduces some generalizations of Vapnik's method of structural risk minimisation (SRM). As well as making explicit some of the details of SRM, it provides a result that allows one to trade off errors on the training sample against improved generalization performance. It then considers the more general case when the hierarchy of classes is… (More)

A new proof of a result due to Vapnik is given. Its implications for the theory of PAC learnability are discussed, with particular reference to the learnability of functions taking values in a countable set. An application to the theory of artificial neural networks is then given.

In this paper we investigate a parameter defined for any graph, known as the Vapnik-Chervonenkis dimension (or VC dimension). For any vertex x of a graph G, the closed neighbourhood N(x) of x is the set of all vertices of G adjacent to x, together with x. We say that a set D of vertices of G is shattered if every subset R of D can be realised as R = D ∩ N… (More)
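The shattering condition in this abstract is concrete enough to check mechanically. The sketch below (my own illustration, not code from the paper) represents a graph as a dict mapping each vertex to its set of neighbours, and tests whether a vertex set D is shattered by closed neighbourhoods; the function names `closed_neighbourhood` and `is_shattered` are mine:

```python
from itertools import chain, combinations

def closed_neighbourhood(adj, x):
    """N(x): all vertices adjacent to x, together with x itself."""
    return adj[x] | {x}

def is_shattered(adj, D):
    """D is shattered if every subset R of D can be realised as
    R = D ∩ N(x) for some vertex x of the graph."""
    D = frozenset(D)
    # All traces D ∩ N(x) that closed neighbourhoods can realise on D.
    realised = {D & closed_neighbourhood(adj, x) for x in adj}
    # Check every subset of D (including the empty set) is realised.
    subsets = chain.from_iterable(combinations(D, r) for r in range(len(D) + 1))
    return all(frozenset(R) in realised for R in subsets)

# Example: an edge 0–1 plus an isolated vertex 2.
adj = {0: {1}, 1: {0}, 2: set()}
```

On this graph, {0} is shattered (N(2) realises the empty trace, N(0) realises {0}), but {0, 1} is not, since no closed neighbourhood cuts out {0} alone.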

The paper introduces a framework for studying structural risk minimisation. The model views structural risk minimisation in a PAC context. It then considers the more general case when the hierarchy of classes is chosen in response to the data. This theoretically explains the impressive performance of the maximal margin hyperplane algorithm of Vapnik. It may… (More)

Some recent work [7, 14, 15] in computational learning theory has discussed learning in situations where the teacher is helpful, and can choose to present carefully chosen sequences of labelled examples to the learner. We say a function *t* in a set *H* of functions (a hypothesis space) defined on a set *X* is… (More)

In this paper we consider the generalization accuracy of classification methods based on the iterative use of linear classifiers. The resulting classifiers, which we call threshold decision lists, act as follows. Some points of the data set to be classified are given a particular classification according to a linear threshold function (or hyperplane). These… (More)
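The evaluation rule this abstract describes can be sketched as follows. This is a minimal illustration under my own assumptions (the function name `predict`, the tuple layout `(w, b, label)`, and the convention that the test `w·x > b` fires are all mine, not taken from the paper): each hyperplane in the list is tried in order, the first one whose half-space contains the point determines the label, and a default label covers points no test captures.

```python
def predict(decision_list, x, default=-1):
    """Evaluate a threshold decision list on a point x.

    decision_list is a sequence of (w, b, label) triples; the first
    linear threshold test satisfying w . x > b decides the label.
    """
    for w, b, label in decision_list:
        if sum(wi * xi for wi, xi in zip(w, x)) > b:
            return label
    return default

# A toy two-level list in the plane.
dl = [((1, 0), 0.5, 1),   # points with x-coordinate > 0.5 get label 1
      ((0, 1), 0.5, 0)]   # of the rest, points with y-coordinate > 0.5 get 0
```

A point such as (1, 0) is caught by the first test, (0, 1) falls through to the second, and (0, 0) receives the default label.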

This paper aims to place neural networks in the context of boolean circuit complexity. We define appropriate classes of feedforward neural networks with specified fan-in, accuracy of computation and depth, and using techniques of communication complexity proceed to show that the classes fit into a well-studied hierarchy of boolean circuits. Results… (More)