
Feature selection is the task of choosing a small set out of a given set of features that captures the relevant properties of the data. In the context of supervised classification problems, relevance is determined by the given labels on the training data. A good choice of features is key to building compact and accurate classifiers. In this paper we…
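As a minimal illustration of label-driven feature selection, a filter-style baseline scores each feature by its absolute correlation with the labels and keeps the top-m. This generic criterion is an assumption for the sketch, not the selection method of the paper.

```python
import numpy as np

# Illustrative filter feature selection: rank features by |correlation|
# with the labels and keep the m highest-scoring ones.
def top_m_features(X, y, m):
    scores = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:m]

# Feature 0 tracks the labels exactly; features 1 and 2 are weaker.
X = np.array([[0., 1., 5.], [1., 1., 4.], [2., 0., 5.], [3., 0., 4.]])
y = np.array([0., 1., 2., 3.])
print(top_m_features(X, y, 1))  # selects feature 0
```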

Prototype-based algorithms are commonly used to reduce the computational complexity of Nearest-Neighbour (NN) classifiers. In this paper we discuss theoretical and algorithmic aspects of such algorithms. On the theory side, we present margin-based generalization bounds that suggest that these kinds of classifiers can be more accurate than the 1-NN rule.…
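The computational saving can be seen in a toy sketch: replace the full training set with one prototype per class (here simply the class centroid, an illustrative choice rather than the paper's prototype-selection algorithm) and run 1-NN over the prototypes alone.

```python
import numpy as np

def build_prototypes(X, y):
    """One prototype per class: the class centroid (toy choice)."""
    classes = np.unique(y)
    protos = np.array([X[y == c].mean(axis=0) for c in classes])
    return protos, classes

def predict_1nn(protos, proto_labels, X_test):
    """Label each test point by its nearest prototype."""
    dists = np.linalg.norm(X_test[:, None, :] - protos[None, :, :], axis=2)
    return proto_labels[dists.argmin(axis=1)]

X = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])
y = np.array([0, 0, 1, 1])
protos, proto_labels = build_prototypes(X, y)
print(predict_1nn(protos, proto_labels, np.array([[0.1, 0.0], [4.8, 5.2]])))
```

Classification now costs one distance computation per class instead of one per training point.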

The Query By Committee (QBC) algorithm is the only algorithm in the active learning framework that has a full theoretical justification. It has been proved that it can exponentially reduce the number of labels needed for learning. Unfortunately, a naive implementation of this algorithm is impossible due to its impractical time complexity. In this paper we make…
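The selection rule at the heart of QBC can be illustrated with a toy committee of random linear separators: query the unlabeled point on which the committee's votes split most evenly. The committee construction and the vote-margin disagreement score below are illustrative assumptions for the sketch, not the paper's sampling procedure.

```python
import numpy as np

def vote_disagreement(committee, X_pool):
    """Disagreement score per unlabeled point: 0 when the committee is
    unanimous, 1 when its votes split evenly."""
    votes = np.array([np.sign(X_pool @ w) for w in committee])  # (k, n)
    pos_frac = (votes > 0).mean(axis=0)
    return 1.0 - np.abs(2 * pos_frac - 1.0)

rng = np.random.default_rng(0)
committee = [rng.standard_normal(2) for _ in range(5)]  # toy hypotheses
X_pool = rng.standard_normal((10, 2))                   # unlabeled pool
query_idx = int(np.argmax(vote_disagreement(committee, X_pool)))
```

The labeled query shrinks the version space most where disagreement is highest, which is the intuition behind QBC's exponential label savings.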

Training a learning algorithm is a costly task. A major goal of active learning is to reduce this cost. In this paper we introduce a new algorithm, KQBC, which is capable of actively learning large-scale problems by using selective sampling. The algorithm overcomes the costly sampling step of the well-known Query By Committee (QBC) algorithm by projecting…

We present a non-linear, simple, yet effective feature subset selection method for regression and use it in analyzing cortical neural activity. Our algorithm involves a feature-weighted version of the k-nearest-neighbor algorithm. It is able to capture complex dependency of the target function on its input and makes use of the leave-one-out error as a…
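The two ingredients named in the abstract, feature-weighted k-NN and a leave-one-out score, can be sketched together as follows. The weight vectors and synthetic data below are illustrative, not the paper's algorithm or experiments.

```python
import numpy as np

def loo_knn_error(X, y, w, k=3):
    """Mean leave-one-out squared error of k-NN regression when
    distances are computed on the feature-weighted inputs w * x."""
    Xw = X * w
    n = len(X)
    err = 0.0
    for i in range(n):
        d = np.linalg.norm(Xw - Xw[i], axis=1)
        d[i] = np.inf                 # leave point i out
        nbrs = np.argsort(d)[:k]      # k nearest neighbours under w
        err += (y[i] - y[nbrs].mean()) ** 2
    return err / n

rng = np.random.default_rng(1)
X = rng.standard_normal((40, 3))
y = X[:, 0]  # target depends only on feature 0
err_relevant = loo_knn_error(X, y, np.array([1.0, 0.0, 0.0]))
err_uniform = loo_knn_error(X, y, np.array([1.0, 1.0, 1.0]))
# Up-weighting the relevant feature yields a lower LOO error,
# which is the signal a weight-learning procedure can follow.
```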

A fundamental question in learning theory is the quantification of the basic tradeoff between the complexity of a model and its predictive accuracy. One valid way of quantifying this tradeoff, known as the "Information Bottleneck", is to measure both the complexity of the model and its prediction accuracy by using Shannon's mutual information. In this…
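The tradeoff referred to here is the standard Information Bottleneck functional: find a compressed representation $T$ of the input $X$ that preserves information about the target $Y$,

```latex
\min_{p(t \mid x)} \; I(X;T) - \beta \, I(T;Y)
```

where $I(\cdot\,;\cdot)$ is Shannon's mutual information and $\beta \ge 0$ sets the balance between compression (small $I(X;T)$) and prediction (large $I(T;Y)$).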

Feature selection is the task of choosing a small subset of features that is sufficient to predict the target labels well. Here, instead of trying to directly determine which features are better, we attempt to learn the properties of good features. For this purpose we assume that each feature is represented by a set of properties, referred to as…

This work was carried out under the supervision of Prof. Naftali Tishby. Acknowledgments: Many people helped me in many ways over the course of my Ph.D. studies, and I would like to take this opportunity to thank them all. A certain number of people deserve special thanks, and I would like to express my gratitude to them with a few words here. The…
