We investigate the problem of learning concepts by presenting labeled and randomly chosen training examples to single neurons. It is well known that linear halfspaces are learnable by the method of linear programming. The corresponding (McCulloch–Pitts) neurons are therefore efficiently trainable to learn an unknown halfspace from examples. We want …
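The training method this abstract refers to can be sketched concretely: finding a consistent halfspace is a linear feasibility problem. The toy data and the use of `scipy.optimize.linprog` below are illustrative assumptions, not the paper's own experiment.

```python
# Sketch: fitting a single (McCulloch-Pitts) neuron to labeled examples
# of an unknown halfspace via linear programming (a feasibility LP).
import numpy as np
from scipy.optimize import linprog

# Labeled examples from an unknown halfspace (here: sign(x1 + x2)).
X = np.array([[2.0, 1.0], [1.0, 2.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

# Feasibility LP: find w with y_i * <w, x_i> >= 1 for all i,
# rewritten in standard form as  -y_i * (x_i . w) <= -1.
# Any feasible w defines a halfspace consistent with every example.
d = X.shape[1]
A_ub = -(y[:, None] * X)
b_ub = -np.ones(len(y))
res = linprog(c=np.zeros(d), A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * d, method="highs")
w = res.x
print(res.success, np.sign(X @ w))  # all four examples classified correctly
```

Because the objective is zero, the LP solver only certifies feasibility; any returned `w` satisfies the margin constraints and therefore labels the sample correctly.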
This paper considers the embeddability of general concept classes in Euclidean half spaces. By embedding in half spaces we refer to a mapping from some concept class to half spaces so that the labeling given to points in the instance space is retained. The existence of an embedding for some class may be used to learn it using an algorithm for the class it …
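A small worked instance of such a label-preserving embedding may help; the choice of class here (monotone conjunctions over the Boolean cube) is an illustrative assumption, not necessarily an example from the paper. The conjunction of the variables in a set S maps to the halfspace "sum of those coordinates >= |S| - 1/2", which induces exactly the same labels on every instance.

```python
# Sketch: embedding one concept class (monotone conjunctions) into
# halfspaces so that every instance keeps its label.
import itertools

n = 3
S = {0, 2}  # concept: x0 AND x2

def concept(x):
    # Original labeling: positive iff every variable in S is set.
    return all(x[i] == 1 for i in S)

def halfspace(x):
    # Embedded halfspace labeling: indicator vector of S as the weight
    # vector, threshold |S| - 1/2.
    return sum(x[i] for i in S) >= len(S) - 0.5

# The embedding retains the labeling on the whole instance space {0,1}^n.
for x in itertools.product([0, 1], repeat=n):
    assert concept(x) == halfspace(x)
print("labels preserved on all", 2 ** n, "points")
```

With such an embedding in hand, any halfspace learner (e.g. the LP method) learns the embedded class as well, which is the reduction the abstract alludes to.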
The best currently known general lower and upper bounds on the number of labeled examples needed for learning a concept class in the PAC framework (the realizable case) do not perfectly match: they leave a gap of order log(1/ε) (resp. a gap which is logarithmic in another one of the relevant parameters). It is an unresolved question whether there exists an …
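The gap mentioned here can be made explicit. The following are the standard general PAC bounds for a class of VC dimension $d$ (stated from well-known results, not quoted from the paper): the sample complexity $m(\varepsilon,\delta)$ in the realizable case satisfies

```latex
\Omega\!\left(\frac{d + \log(1/\delta)}{\varepsilon}\right)
\;\le\; m(\varepsilon,\delta) \;\le\;
O\!\left(\frac{d\log(1/\varepsilon) + \log(1/\delta)}{\varepsilon}\right),
```

so the two bounds differ by a multiplicative factor of order $\log(1/\varepsilon)$, which is the gap the abstract refers to.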
Concept classes can canonically be represented by matrices with entries 1 and −1. We use the singular value decomposition of this matrix to determine the optimal margins of embeddings of the concept classes of singletons and of half spaces in homogeneous Euclidean half spaces. For these concept classes the singular value decomposition can be used to …
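The ±1 matrix of the singletons class is easy to write down, and its SVD has a clean closed form; the sketch below only computes the singular values with NumPy, as a small illustration of the object the abstract analyzes (the margin formula itself is in the paper).

```python
# Sketch: the +/-1 matrix of the "singletons" concept class over n points
# and its singular values. Row i is the labeling induced by concept {i}:
# +1 on point i, -1 on every other point, i.e. M = 2*I - J.
import numpy as np

n = 6
M = 2 * np.eye(n) - np.ones((n, n))
s = np.linalg.svd(M, compute_uv=False)
print(np.round(s, 6))  # singular values: n-2 (once) and 2 (n-1 times)
```

Since M = 2I − J is symmetric and J has eigenvalues n (once) and 0 (n−1 times), the singular values are |2 − n| and 2, matching the computation.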