In this paper we propose a novel method for learning a Mahalanobis distance measure to be used in the KNN classification algorithm. The algorithm directly maximizes a stochastic variant of the leave-one-out KNN score on the training set. It can also learn a low-dimensional linear embedding of labeled data that can be used for data visualization and fast classification.
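The stochastic leave-one-out score that the method maximizes can be sketched directly. Below is a minimal NumPy version of such an objective, assuming a linear map A applied to the rows of X with labels y; the function name and the use of plain squared Euclidean distances in the projected space are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def soft_loo_knn_score(A, X, y):
    """Soft (stochastic) leave-one-out KNN score of a linear map A.

    A: (d_out, d_in) projection matrix, X: (n, d_in) data, y: (n,) labels.
    Hypothetical sketch of an NCA-style objective, not the reference implementation.
    """
    Z = X @ A.T                                            # project the data
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)    # pairwise squared distances
    np.fill_diagonal(d2, np.inf)                           # a point never picks itself
    P = np.exp(-d2)
    P /= P.sum(axis=1, keepdims=True)                      # p_ij: soft neighbour choice
    same_class = (y[:, None] == y[None, :])
    return (P * same_class).sum()                          # expected correctly classified points
```

Maximizing this score over A (e.g. by gradient ascent) yields the learned metric; choosing a rectangular A with few rows gives the low-dimensional embedding mentioned above.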
In this paper, we describe a statistical video representation and modeling scheme. Video representation schemes are needed to segment a video stream into meaningful video-objects, useful for later indexing and retrieval applications. In the proposed methodology, unsupervised clustering via Gaussian mixture modeling extracts coherent space-time regions in …
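As a rough illustration of this kind of space-time clustering, the sketch below fits a Gaussian mixture to per-pixel (x, y, t, colour) features of a short clip and reads off a per-pixel region label. The feature choice, the component count, and the use of scikit-learn's GaussianMixture are assumptions made for the sketch, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def spacetime_regions(frames, n_components=8):
    """Cluster a clip into coherent space-time regions with a GMM (illustrative).

    frames: (T, H, W, 3) array of colour frames.
    """
    T, H, W, _ = frames.shape
    t, yy, xx = np.meshgrid(np.arange(T), np.arange(H), np.arange(W), indexing="ij")
    feats = np.column_stack([xx.ravel(), yy.ravel(), t.ravel(),
                             frames.reshape(-1, 3)])         # (x, y, t, r, g, b) per pixel
    gmm = GaussianMixture(n_components=n_components, covariance_type="full").fit(feats)
    labels = gmm.predict(feats).reshape(T, H, W)              # per-pixel region assignment
    return gmm, labels
```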
In this paper we describe a probabilistic image matching scheme in which the image representation is continuous, and the similarity measure and distance computation are also defined in the continuous domain. Each image is first represented as a Gaussian mixture distribution and images are compared and matched via a probabilistic measure of similarity …
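One simple way to realize a probabilistic similarity between two such continuous (Gaussian-mixture) image representations is a Monte Carlo estimate of the expected log-likelihood of samples from one model under the other. The sketch below is only an illustration of a continuous-domain match score; the helper names, the (weights, means, covs) representation, and the sample count are assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmm_loglik(x, weights, means, covs):
    """Log-density of points x under a Gaussian mixture (illustrative helper)."""
    dens = sum(w * multivariate_normal(m, c).pdf(x)
               for w, m, c in zip(weights, means, covs))
    return np.log(dens + 1e-300)

def mc_match_score(gmm_p, gmm_q, n_samples=2000, seed=0):
    """Monte Carlo estimate of E_p[log q] between two image GMMs.

    gmm_p, gmm_q: (weights, means, covs) triples describing the two images.
    """
    rng = np.random.default_rng(seed)
    w, mu, cov = gmm_p
    comps = rng.choice(len(w), size=n_samples, p=w)           # pick mixture components
    samples = np.array([rng.multivariate_normal(mu[k], cov[k]) for k in comps])
    return gmm_loglik(samples, *gmm_q).mean()
```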
In this paper we propose a novel clustering algorithm based on maximizing the mutual information between data points and clusters. Unlike previous methods, we neither assume the data are given in terms of distributions nor impose any parametric model on the within-cluster distribution. Instead, we utilize a non-parametric estimation of the average cluster …
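For reference, the quantity being maximized is the usual mutual information between the data X and the cluster assignment C; in the notation below, H is differential entropy and p(c_j) is the prior of cluster j. Since H(X) does not depend on the assignment, maximizing I(X;C) amounts to minimizing the weighted within-cluster entropies, which is where the non-parametric estimate enters (the specific estimator is not reproduced here).

```latex
I(X;C) \;=\; H(X) - H(X \mid C)
       \;=\; H(X) - \sum_{j} p(c_j)\, H(X \mid C = c_j)
```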
In this work we present two new methods for approximating the Kullback-Leibler (KL) divergence between two mixtures of Gaussians. The first method is based on matching between the Gaussian elements of the two Gaussian mixture densities. The second method is based on the unscented transform. The proposed methods are utilized for image retrieval tasks.
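A hedged sketch of the first, matching-based idea is given below: every component of the first mixture is paired with the component of the second mixture that is cheapest in a KL-plus-log-weight sense, using the closed-form KL divergence between Gaussians. The exact matching cost and any refinements used in the paper may differ; this is an illustration of the approach, with mixtures passed as (weights, means, covs) triples.

```python
import numpy as np

def kl_gauss(m0, S0, m1, S1):
    """Closed-form KL divergence KL(N(m0, S0) || N(m1, S1))."""
    d = len(m0)
    S1_inv = np.linalg.inv(S1)
    diff = m1 - m0
    return 0.5 * (np.trace(S1_inv @ S0) + diff @ S1_inv @ diff - d
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

def kl_gmm_matching(f, g):
    """Matching-style approximation of KL(f || g) for two Gaussian mixtures (illustrative)."""
    wf, mf, Sf = f
    wg, mg, Sg = g
    total = 0.0
    for a in range(len(wf)):
        costs = [kl_gauss(mf[a], Sf[a], mg[b], Sg[b]) - np.log(wg[b])
                 for b in range(len(wg))]
        total += wf[a] * (min(costs) + np.log(wf[a]))          # best match for component a
    return total
```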
We describe an algorithm for context-based segmentation of visual data. New frames in an image sequence (video) are segmented based on the prior segmentation of earlier frames in the sequence. The segmentation is performed by adapting a probabilistic model learned on previous frames, according to the content of the new frame. We utilize the maximum a posteriori …
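One standard way to adapt a previously learned Gaussian mixture to a new frame is MAP adaptation of the component means with a relevance factor, as sketched below: responsibilities are computed under the old model, and each mean is pulled toward the new frame's statistics in proportion to how much data it explains. The relevance-factor scheme and the choice to adapt only the means are assumptions of this sketch, not necessarily the exact update used in the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal

def map_adapt_means(weights, means, covs, X, relevance=16.0):
    """MAP-adapt GMM means to new-frame data X (illustrative).

    weights: (K,), means: (K, d), covs: (K, d, d); X: (N, d) features of the new frame.
    """
    means = np.asarray(means)
    # E-step under the model carried over from previous frames
    resp = np.column_stack([w * multivariate_normal(m, c).pdf(X)
                            for w, m, c in zip(weights, means, covs)])
    resp /= resp.sum(axis=1, keepdims=True)
    n_k = resp.sum(axis=0)                                     # soft counts per component
    x_bar = (resp.T @ X) / np.maximum(n_k[:, None], 1e-12)     # new-frame component means
    alpha = n_k / (n_k + relevance)                            # data-dependent adaptation weight
    return alpha[:, None] * x_bar + (1 - alpha)[:, None] * means
```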
Context representations are central to various NLP tasks, such as word sense disambiguation, named entity recognition, co-reference resolution, and many more. In this work we present a neural model for efficiently learning a generic context embedding function from large corpora, using a bidirectional LSTM. With a very simple application of our context representations …
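A minimal PyTorch sketch of a bidirectional-LSTM context encoder is shown below: the forward state just left of the target slot and the backward state just right of it are concatenated and projected into the target-word embedding space. The layer sizes, the single shared embedding table, and the linear combination layer are assumptions of the sketch; the actual model combines the two directions with its own architecture.

```python
import torch
import torch.nn as nn

class ContextEncoder(nn.Module):
    """Bidirectional-LSTM sentential-context encoder (illustrative sketch)."""

    def __init__(self, vocab_size, emb_dim=300, hidden_dim=300):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden_dim, emb_dim)         # map into word-embedding space

    def forward(self, token_ids, target_pos):
        """token_ids: (batch, seq_len); target_pos: index of the slot to represent
        (assumed not to be the first or last position)."""
        h, _ = self.lstm(self.embed(token_ids))                # (batch, seq_len, 2 * hidden)
        hidden = h.size(-1) // 2
        left = h[:, target_pos - 1, :hidden]                   # forward state before the slot
        right = h[:, target_pos + 1, hidden:]                  # backward state after the slot
        return self.proj(torch.cat([left, right], dim=-1))     # generic context embedding
```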