We investigate the problem of learning concepts by presenting labeled and randomly chosen training examples to single neurons. It is well known that linear halfspaces are learnable by the method of linear programming. The corresponding (McCulloch-Pitts) neurons are therefore efficiently trainable to learn an unknown halfspace from examples. We want …
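For concreteness, here is a minimal sketch of that linear-programming approach: a feasibility LP that returns a halfspace consistent with the sample. The function name, the margin-1 formulation, and the scipy dependency are illustrative choices of mine, not taken from the paper.

import numpy as np
from scipy.optimize import linprog

def learn_halfspace(X, y):
    # Find (w, b) with y_i * (w . x_i + b) >= 1 for every example,
    # or return None if the sample is not linearly separable.
    m, n = X.shape
    c = np.zeros(n + 1)                     # feasibility only: objective 0
    # Encode -y_i * (w . x_i + b) <= -1 as one row per example.
    A_ub = -y[:, None] * np.hstack([X, np.ones((m, 1))])
    b_ub = -np.ones(m)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (n + 1))   # w and b are free
    return (res.x[:n], res.x[n]) if res.success else None

# Toy usage: points labeled by the halfspace x1 + x2 > 1.
X = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [0.2, 0.3]])
y = np.array([-1, 1, 1, -1])
print(learn_halfspace(X, y))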
This paper considers the embeddability of general concept classes in Euclidean half spaces. By embedding in half spaces we refer to a mapping from some concept class to half spaces such that the labeling given to points in the instance space is retained. The existence of an embedding for some class may be used to learn it using an algorithm for the class it …
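A toy illustration of such a label-retaining embedding (the feature map phi and the XOR concept are hypothetical examples of mine, not from the paper): the XOR labeling on {0,1}^2 is realized by no halfspace in the plane, yet after embedding into three dimensions it is realized by one.

import numpy as np

def phi(x):                      # instance-space point -> embedded point
    return np.array([x[0], x[1], x[0] * x[1]])

points = [np.array(p, dtype=float) for p in [(0, 0), (0, 1), (1, 0), (1, 1)]]
xor_labels = [-1, 1, 1, -1]      # the concept we want to embed

w, b = np.array([1.0, 1.0, -2.0]), -0.5   # halfspace sign(w . phi(x) + b)
for x, label in zip(points, xor_labels):
    assert np.sign(w @ phi(x) + b) == label   # labeling is retained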
The n-cube network is called faulty if it contains any faulty processor or any faulty link. For any number k we are interested in the minimum number f(n, k) of faults necessary for an adversary to make every (n-k)-dimensional subcube faulty. Stated conversely: the existence of an (n-k)-dimensional non-faulty subcube can be guaranteed, unless there are at …
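A brute-force sketch of f(n, k) for very small n, restricted to processor faults for simplicity (the paper's f(n, k) also counts link faults, and all names here are illustrative): an (n-k)-dimensional subcube fixes k coordinates, and f(n, k) is the smallest set of faulty nodes hitting every such subcube.

from itertools import combinations, product

def subcubes(n, k):
    # All (n-k)-dim subcubes, given as (fixed positions, fixed bit values).
    for pos in combinations(range(n), k):
        for bits in product((0, 1), repeat=k):
            yield pos, bits

def hits_all(faults, n, k):
    # Does the fault set contain a node of every (n-k)-dim subcube?
    return all(
        any(all(node[p] == b for p, b in zip(pos, bits)) for node in faults)
        for pos, bits in subcubes(n, k))

def f(n, k):
    nodes = list(product((0, 1), repeat=n))
    for size in range(1, len(nodes) + 1):
        if any(hits_all(set(fs), n, k) for fs in combinations(nodes, size)):
            return size

print(f(3, 1))   # 2: two antipodal faulty nodes already hit all six 2-dim faces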
This paper is concerned with the combinatorial structure of concept classes that can be learned from a small number of examples. We show that the recently introduced notion of recursive teaching dimension (RTD, reflecting the complexity of teaching a concept class) is a relevant parameter in this context. Comparing the RTD to self-directed learning, we …
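A brute-force sketch of the RTD for a tiny finite class, assuming the standard recursive teaching plan that repeatedly discards the concepts that are currently easiest to teach (the function names are mine). Concepts are 0/1 label vectors over the instances.

from itertools import combinations

def teaching_dim(c, cls, instances):
    # Size of a smallest sample distinguishing c from the rest of cls.
    others = [h for h in cls if h != c]
    for size in range(len(instances) + 1):
        for S in combinations(instances, size):
            if all(any(h[x] != c[x] for x in S) for h in others):
                return size
    return len(instances)

def rtd(cls):
    instances, cls, worst = range(len(cls[0])), list(cls), 0
    while cls:
        dims = {c: teaching_dim(c, cls, instances) for c in cls}
        d = min(dims.values())
        worst = max(worst, d)
        cls = [c for c in cls if dims[c] > d]   # drop the easiest concepts
    return worst

# Singletons over 3 instances plus the empty concept: the empty concept
# needs 3 examples at first, but RTD is 1 once the singletons are removed.
print(rtd([(1, 0, 0), (0, 1, 0), (0, 0, 1), (0, 0, 0)]))   # prints 1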
Concept classes can canonically be represented by matrices with entries 1 and -1. We use the singular value decomposition of such a matrix to determine the optimal margins of embeddings of the concept classes of singletons and of half spaces in homogeneous Euclidean half spaces. For these concept classes the singular value decomposition can be used to …
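A numpy sketch in the spirit of that SVD technique, assuming a spectral-norm margin bound of the form margin <= ||M|| / sqrt(m*n); the exact form of the bound is my assumption here, not quoted from the paper.

import numpy as np

def margin_upper_bound(M):
    # Assumed bound: top singular value of M divided by sqrt(m*n).
    m, n = M.shape
    top_singular_value = np.linalg.svd(M, compute_uv=False)[0]
    return top_singular_value / np.sqrt(m * n)

# Singletons over m instances: +1 on the diagonal, -1 elsewhere (2I - J).
m = 8
M = 2 * np.eye(m) - np.ones((m, m))
print(margin_upper_bound(M))   # spectral norm of 2I - J is m - 2, bound (m-2)/m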