Raman Sankaran

Motivated by real-world problems such as object categorization, we study a particular mixed-norm regularization for Multiple Kernel Learning (MKL). It is assumed that the given set of kernels is grouped into distinct components, each of which is crucial for the learning task at hand. The formulation hence employs ℓ∞ regularization for promoting (More)
This paper presents novel algorithms and applications for a particular class of mixed-norm regularization based Multiple Kernel Learning (MKL) formulations. The formulations assume that the given kernels are grouped and employ ℓ1-norm regularization for promoting sparsity within the RKHS norms of each group and ℓs-norm regularization, s ≥ 2, for promoting (More)
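The grouped mixed norm the abstract describes can be sketched numerically. Below is a minimal, hypothetical illustration (not the paper's actual formulation or code): `group_norms` is assumed to hold the RKHS norms of the kernels in each group; kernels within a group are combined with an ℓ1 sum (promoting within-group sparsity) and groups are combined with an ℓs norm, s ≥ 2 (keeping all groups active).

```python
import numpy as np

def mixed_norm(group_norms, s=2.0):
    """l1-within / ls-across mixed norm over grouped kernel (RKHS) norms.

    group_norms: list of arrays; entry g holds the RKHS norms of the
    kernels in group g (hypothetical input layout for illustration).
    """
    assert s >= 2.0, "the formulation assumes s >= 2"
    # l1 sum within each group promotes sparsity among that group's kernels.
    per_group = np.array([np.sum(np.abs(np.asarray(g))) for g in group_norms])
    # ls combination across groups treats every group as essential.
    return float(np.sum(per_group ** s) ** (1.0 / s))

# Two groups of kernel norms; s = 2 gives an l1/l2 mixed norm:
# sqrt((1+2)^2 + 3^2) = sqrt(18).
print(mixed_norm([[1.0, 2.0], [3.0]], s=2.0))
```

As s grows, the ℓs combination approaches the ℓ∞ norm over group sums, which connects this family to the ℓ∞-regularized formulation studied in the related abstract above.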
Multiple Kernel Learning (MKL) on Support Vector Machines (SVMs) has been an active area of research in recent times due to its success in application problems such as object categorization. This success stems from MKL's ability to choose, from a variety of feature kernels, the optimal kernel combination. But the initial (More)
Recent literature [1] suggests that embedding a graph on a unit sphere leads to better generalization for graph transduction. However, the choice of an optimal embedding, and an efficient algorithm to compute it, remain open. In this paper, we show that orthonormal representations, a class of unit-sphere graph embeddings, are PAC learnable. Existing (More)
The failure of LASSO to identify groups of correlated predictors in linear regression has sparked significant research interest. Recently, various norms [1, 2] were proposed, which are best described as instances of ordered weighted ℓ1 norms (OWL) [3], as an alternative to the ℓ1 regularization used in LASSO. OWL can identify groups of correlated (More)
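For a concrete sense of the OWL family mentioned above, here is a minimal sketch (an illustration under standard definitions, not this paper's code): the OWL norm pairs a nonincreasing weight vector with the absolute entries of the coefficient vector sorted in decreasing order, and recovers both ℓ1 and ℓ∞ as special cases.

```python
import numpy as np

def owl_norm(w, lam):
    """Ordered weighted l1 (OWL) norm: dot product of nonincreasing
    weights lam with |w| sorted in decreasing order."""
    w = np.asarray(w, dtype=float)
    lam = np.asarray(lam, dtype=float)
    assert lam.shape == w.shape
    assert np.all(np.diff(lam) <= 0), "OWL weights must be nonincreasing"
    mags = np.sort(np.abs(w))[::-1]  # |w| sorted descending
    return float(lam @ mags)

w = [3.0, -1.0, 2.0]
# lam = (1, 1, 1) recovers the l1 norm; lam = (1, 0, 0) the l-infinity norm.
print(owl_norm(w, [1.0, 1.0, 1.0]))  # 6.0
print(owl_norm(w, [1.0, 0.0, 0.0]))  # 3.0
```

The strictly decreasing portion of lam is what lets OWL cluster the coefficients of correlated predictors toward equal magnitudes, the grouping behavior plain ℓ1 lacks.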