Publications
Adapting Visual Category Models to New Domains
TLDR
This paper introduces a method that adapts object models acquired in a particular visual domain to new imaging conditions by learning a transformation that minimizes the effect of domain-induced changes in the feature distribution.
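To make the idea concrete, here is a minimal sketch, assuming paired examples of the same objects captured in both domains; the least-squares fit below is a hypothetical stand-in for the paper's constraint-based metric learning, not its actual formulation.

```python
import numpy as np

def learn_domain_transform(X_src, X_tgt):
    """Fit a linear map W so that W @ x_src approximates x_tgt.

    X_src, X_tgt: (n, d) arrays of corresponding source/target features.
    A least-squares stand-in for the paper's learned transformation.
    """
    # Solve min_W ||X_src @ W.T - X_tgt||_F^2 via least squares.
    W_T, *_ = np.linalg.lstsq(X_src, X_tgt, rcond=None)
    return W_T.T

# Toy usage: target features are a noisy linear distortion of the source.
rng = np.random.default_rng(0)
X_src = rng.normal(size=(100, 16))
X_tgt = X_src @ rng.normal(size=(16, 16)) + 0.01 * rng.normal(size=(100, 16))
W = learn_domain_transform(X_src, X_tgt)
print(np.linalg.norm(X_src @ W.T - X_tgt))   # small residual after adaptation
```

Once W is learned, source-domain models can be applied to transformed features, which is roughly the adaptation step the summary describes.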
Information-theoretic metric learning
TLDR
An information-theoretic approach to learning a Mahalanobis distance function that can handle a wide variety of constraints and can optionally incorporate a prior on the distance function; regret bounds are derived for the resulting algorithm.
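For reference, the formulation this summary describes is (to the best of my recollection) a LogDet-regularized program, where $A_0$ encodes the optional prior and $\mathcal{S}, \mathcal{D}$ are the similar/dissimilar pairs:

```latex
\min_{A \succeq 0} \; D_{\mathrm{ld}}(A, A_0)
\quad \text{s.t.} \quad
d_A(x_i, x_j) \le u \;\; \forall (i,j) \in \mathcal{S}, \qquad
d_A(x_i, x_j) \ge \ell \;\; \forall (i,j) \in \mathcal{D},
```

with $d_A(x_i, x_j) = (x_i - x_j)^\top A\,(x_i - x_j)$ and $D_{\mathrm{ld}}(A, A_0) = \operatorname{tr}(A A_0^{-1}) - \log\det(A A_0^{-1}) - d$. Each constraint admits a cheap Bregman-projection update, which is what makes the method practical.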
Learning to Hash with Binary Reconstructive Embeddings
TLDR
An algorithm for learning hash functions based on explicitly minimizing the reconstruction error between the original distances and the Hamming distances of the corresponding binary embeddings is developed.
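Concretely, the reconstruction objective (in the scaling I remember the paper using, with $b$ bits, hash code $h(\cdot)$, and Hamming distance $d_H$) is

```latex
\min_{W} \; \sum_{(i,j) \in \mathcal{N}}
\Big( \tfrac{1}{2}\,\|x_i - x_j\|^2 \;-\; \tfrac{1}{b}\, d_H\big(h(x_i), h(x_j)\big) \Big)^2,
```

summed over a set $\mathcal{N}$ of sampled training pairs, so the binary codes are optimized to reproduce the original metric rather than merely to preserve nearest neighbors.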
Kernelized locality-sensitive hashing for scalable image search
  • B. Kulis, K. Grauman
  • Mathematics, Computer Science
    IEEE 12th International Conference on Computer…
  • 1 December 2009
TLDR
It is shown how to generalize locality-sensitive hashing to accommodate arbitrary kernel functions, making it possible to preserve the algorithm's sub-linear time similarity search guarantees for a wide class of useful similarity functions.
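A condensed sketch of the construction, assuming a precomputed kernel matrix K over p sampled anchor points; the kernel centering the paper performs is omitted for brevity, and klsh_weights/klsh_hash are illustrative names:

```python
import numpy as np
from scipy.linalg import sqrtm

def klsh_weights(K, t, n_bits, rng):
    """Weights for KLSH-style hash bits from a p x p anchor kernel matrix.

    Each bit approximates a random Gaussian hyperplane in the
    kernel-induced feature space via w = K^{-1/2} e_S, with e_S
    indicating a random subset S of t anchors (the paper's
    central-limit argument).
    """
    p = K.shape[0]
    K_inv_sqrt = np.real(np.linalg.pinv(sqrtm(K)))
    W = np.zeros((n_bits, p))
    for b in range(n_bits):
        e_S = np.zeros(p)
        e_S[rng.choice(p, size=t, replace=False)] = 1.0
        W[b] = K_inv_sqrt @ e_S
    return W

def klsh_hash(k_x, W):
    """Hash one point from its kernel values k(x, anchor_i) against anchors."""
    return (W @ k_x > 0).astype(np.uint8)
```

Queries are hashed from kernel evaluations alone, so any positive-definite kernel (chi-squared, learned kernels, and so on) plugs in without an explicit feature map.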
Weighted Graph Cuts without Eigenvectors A Multilevel Approach
TLDR
This paper develops a fast high-quality multilevel algorithm that directly optimizes various weighted graph clustering objectives, such as the popular ratio cut, normalized cut, and ratio association criteria, and demonstrates that the algorithm is applicable to large-scale clustering tasks such as image segmentation, social network analysis, and gene network analysis.
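The multilevel pattern is: repeatedly coarsen the graph by merging strongly connected vertex pairs, cluster the smallest graph, then propagate the labels back up and refine with weighted kernel k-means. Below is a rough sketch of one coarsening pass using a greedy heaviest-edge matching heuristic; it is not the paper's exact implementation.

```python
import numpy as np

def coarsen_once(A):
    """One heavy-edge coarsening pass on a dense adjacency matrix A.

    Greedily matches each unmatched vertex with its heaviest unmatched
    neighbor and collapses each matched pair into one supernode.
    """
    n = A.shape[0]
    match = -np.ones(n, dtype=int)
    for u in np.argsort(-A.sum(axis=1)):        # visit heavy vertices first
        if match[u] != -1:
            continue
        nbrs = np.where((A[u] > 0) & (match == -1))[0]
        nbrs = nbrs[nbrs != u]
        v = nbrs[np.argmax(A[u, nbrs])] if len(nbrs) else u
        match[u], match[v] = v, u
    # Map each vertex to its supernode and build the coarsened graph.
    reps, label = {}, np.empty(n, dtype=int)
    for u in range(n):
        key = min(u, match[u])
        label[u] = reps.setdefault(key, len(reps))
    P = np.zeros((n, len(reps)))
    P[np.arange(n), label] = 1.0
    return P.T @ A @ P, label
```

Repeating coarsen_once until the graph is small, then refining assignments back up the hierarchy, gives the eigenvector-free pipeline the summary describes.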
Kernel k-means: spectral clustering and normalized cuts
TLDR
The generality of the weighted kernel k-means objective function is shown, and the spectral clustering objective of normalized cut is derived as a special case, leading to a novel weighted kernel k-means algorithm that monotonically decreases the normalized cut.
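The key substitution, as stated in this line of work (A the affinity matrix, D the diagonal degree matrix, σ a shift keeping the kernel positive definite), is

```latex
w_i = d_i, \qquad K = \sigma D^{-1} + D^{-1} A D^{-1},
```

so the weighted kernel k-means objective $\sum_c \sum_{i \in \pi_c} w_i \,\|\phi(x_i) - m_c\|^2$, with $m_c$ the weighted cluster mean, matches the normalized cut objective up to an additive constant; each k-means sweep therefore cannot increase the cut.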
What you saw is not what you get: Domain adaptation using asymmetric kernel transforms
TLDR
This paper introduces ARC-t, a flexible model for supervised learning of non-linear transformations between domains, based on a novel theoretical result demonstrating that such transformations can be learned in kernel space.
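In rough form (my paraphrase of the setup), ARC-t learns an asymmetric cross-domain similarity

```latex
\operatorname{sim}(x, y) = \varphi_s(x)^\top W\, \varphi_t(y), \qquad
\min_{W} \; r(W) + \lambda \sum_i c_i\!\big(\varphi_s(x_i)^\top W\, \varphi_t(y_i)\big),
```

where $r$ is a regularizer and the $c_i$ penalize violated cross-domain (dis)similarity constraints; the kernelization result guarantees that for suitable $r$ the optimal $W$ lies in the span of the training data, so the similarity is computable from kernel values alone. Asymmetry matters because the two domains need not share a feature space or even a dimensionality.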
Kernelized Locality-Sensitive Hashing
  • B. Kulis, K. Grauman
  • Mathematics, Medicine
    IEEE Transactions on Pattern Analysis and Machine…
  • 1 June 2012
TLDR
It is shown how to generalize locality-sensitive hashing to accommodate arbitrary kernel functions, making it possible to preserve the algorithm's sublinear time similarity search guarantees for a wide class of useful similarity functions.
Revisiting k-means: New Algorithms via Bayesian Nonparametrics
TLDR
This paper shows that a Gibbs sampling algorithm for the Dirichlet process mixture approaches a hard clustering algorithm in the limit, and further that the resulting algorithm monotonically minimizes an elegant underlying k-means-like clustering objective that includes a penalty for the number of clusters.
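A compact sketch of the resulting algorithm (often called DP-means), assuming Euclidean data and a new-cluster penalty lam; dp_means is an illustrative name:

```python
import numpy as np

def dp_means(X, lam, n_iters=50):
    """Small-variance-limit clustering: k-means plus a cluster penalty.

    Each point joins its nearest centroid unless the squared distance
    exceeds lam, in which case it seeds a new cluster. Iterations
    monotonically decrease  sum_i ||x_i - mu_{c_i}||^2 + lam * k.
    """
    centroids = [X.mean(axis=0)]              # start with one global cluster
    for _ in range(n_iters):
        labels = np.empty(len(X), dtype=int)
        for i, x in enumerate(X):
            d2 = np.sum((np.asarray(centroids) - x) ** 2, axis=1)
            j = int(d2.argmin())
            if d2[j] > lam:                   # too far from everything: new cluster
                centroids.append(x.copy())
                j = len(centroids) - 1
            labels[i] = j
        # Re-estimate centroids of the non-empty clusters.
        centroids = [X[labels == j].mean(axis=0)
                     for j in range(len(centroids)) if np.any(labels == j)]
    # Final assignment so labels and centroids are consistent.
    C = np.asarray(centroids)
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1), C
```

Larger lam yields fewer clusters, recovering the k-means trade-off without fixing k in advance.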
Semi-supervised graph clustering: a kernel approach
TLDR
The proposed objective function for semi-supervised clustering based on Hidden Markov Random Fields, with squared Euclidean distance and a certain class of constraint penalty functions, can be expressed as a special case of the weighted kernel k-means objective.
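In outline, the objective rewards satisfied must-links $\mathcal{M}$ and penalizes violated cannot-links $\mathcal{C}$ (my paraphrase of the construction):

```latex
J = \sum_c \sum_{i \in \pi_c} \|x_i - \mu_c\|^2
\;-\; \sum_{\substack{(i,j) \in \mathcal{M} \\ \ell_i = \ell_j}} w_{ij}
\;+\; \sum_{\substack{(i,j) \in \mathcal{C} \\ \ell_i = \ell_j}} \bar{w}_{ij},
```

and folding the $\pm$ constraint terms into the similarity matrix (with a diagonal shift $\sigma I$ to restore positive definiteness) turns the whole problem into a weighted kernel k-means objective that standard iterations can optimize.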
...