Joel Ratsaby

We investigate the tradeoff between labeled and unlabeled sample complexities in learning a classification rule for a parametric two-class problem. The classical problem of learning a classification rule can be stated as follows: patterns from classes "1" and "2" (or "states of nature") appear with probabilities…
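A hedged aside, not drawn from the paper itself: the classical two-class setup alluded to above is usually written with class priors \pi_1, \pi_2 and class-conditional densities f_1, f_2 (notation assumed here), under which the Bayes-optimal rule assigns a pattern x to class "1" exactly when

    \pi_1 f_1(x) \;\ge\; \pi_2 f_2(x),
    \qquad\text{equivalently}\qquad
    P(1 \mid x) \;=\; \frac{\pi_1 f_1(x)}{\pi_1 f_1(x) + \pi_2 f_2(x)} \;\ge\; \frac{1}{2}.

Labeled examples inform the conditional densities directly, while unlabeled examples inform only the mixture \pi_1 f_1 + \pi_2 f_2; the tradeoff studied concerns how many of each are needed.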
This paper concerns learning binary-valued functions defined on ℝ, and investigates how a particular type of 'regularity' of hypotheses can be used to obtain better generalization error bounds. We derive error bounds that depend on the sample width (a notion similar to that of sample margin for real-valued functions). This motivates learning algorithms…
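As a sketch of the sample-width idea (our notation, not necessarily the paper's): for a binary-valued f : ℝ → {-1, +1}, the width of f at a point x can be taken as the radius of the largest interval around x on which f is constant,

    w_f(x) \;=\; \sup\{\, r \ge 0 \;:\; f(x') = f(x) \ \text{for all } x' \text{ with } |x' - x| \le r \,\},

and the sample width on x_1, \dots, x_m as \min_i w_f(x_i). It plays the role for binary-valued hypotheses that the minimum sample margin plays for real-valued ones: a larger width certifies a more 'regular' hypothesis and supports a tighter error bound.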
One of the main problems in machine learning and statistical inference is selecting an appropriate model by which a set of data can be explained. In the absence of any structured prior information as to the data-generating mechanism, one is often forced to consider a range of models, attempting to select the model which best explains the data, based on some…
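A minimal sketch of complexity-penalized model selection in the spirit described above; this is a generic SRM/MDL-flavoured criterion, not the paper's, and the (fit, complexity) interface is a hypothetical one introduced here for illustration:

    import math

    def select_model(models, data):
        """Pick the model minimizing a complexity-penalized empirical error.

        `models` is a list of (fit, complexity) pairs: fit(data) returns an
        empirical error in [0, 1] (assumed interface), and `complexity`
        measures the model's richness (e.g., number of parameters).
        """
        m = len(data)

        def score(entry):
            fit, complexity = entry
            # Richer models pay a larger penalty; the penalty shrinks
            # as the sample size m grows, so large samples can justify
            # more complex models.
            return fit(data) + math.sqrt(complexity / m)

        return min(models, key=score)

The design point is the balance: with no penalty the richest model always wins by overfitting, while too heavy a penalty ignores the data; the sqrt(complexity / m) form is one standard compromise.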
The classical theory of pattern recognition assumes labeled examples appear according to unknown underlying class-conditional probability distributions, with the pattern classes picked randomly, in a passive manner, according to their a priori probabilities. This paper presents experimental results for an incremental nearest-neighbor learning algorithm…
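For concreteness, a minimal incremental 1-nearest-neighbor sketch in the spirit of condensed nearest-neighbor rules; this is an illustrative assumption, not the paper's algorithm, and `stream` and `dist` are names introduced here:

    def incremental_nn(stream, dist):
        """Grow a prototype set one labeled example at a time.

        `stream` yields (x, label) pairs; `dist` is a metric on the
        pattern space. A new example is absorbed only when the current
        prototypes misclassify it, so memory grows only on mistakes.
        """
        prototypes = []  # stored (point, label) pairs
        for x, label in stream:
            if prototypes:
                nearest = min(prototypes, key=lambda p: dist(p[0], x))
                pred = nearest[1]
            else:
                pred = None  # no prediction possible yet
            if pred != label:
                prototypes.append((x, label))
        return prototypes

    # Example usage on a one-dimensional stream:
    # incremental_nn([(0.1, 0), (0.2, 0), (0.9, 1)], lambda a, b: abs(a - b))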
In this paper we present a new type of binary classifier defined on the unit cube. This classifier combines aspects of the standard methods used in the logical analysis of data (LAD) and of geometric classifiers with a nearest-neighbor paradigm. We assess the predictive performance of the new classifier in learning from a sample…
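One way such a hybrid could be organized, sketched under stated assumptions: axis-aligned boxes on [0,1]^n stand in loosely for LAD-style patterns, with a 1-nearest-neighbor fallback when no box covers the query. All names below (box_nn_classify, boxes, labeled_points) are illustrative, not the paper's classifier:

    import math

    def euclidean(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

    def box_nn_classify(x, boxes, labeled_points, dist=euclidean):
        """Hybrid rule-based / nearest-neighbor prediction on the unit cube.

        `boxes` is a list of (lower, upper, label) axis-aligned boxes;
        a box votes for its label whenever it covers x. If no box
        covers x, fall back to the nearest labeled point.
        """
        for lower, upper, label in boxes:
            if all(lo <= xi <= hi for xi, lo, hi in zip(x, lower, upper)):
                return label
        nearest_point, nearest_label = min(
            labeled_points, key=lambda p: dist(p[0], x)
        )
        return nearest_label

The appeal of such a combination is that the boxes give interpretable, high-confidence regions while the nearest-neighbor fallback guarantees a prediction everywhere on the cube.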