
- S. Sathiya Keerthi, Shirish K. Shevade, Chiranjib Bhattacharyya, K. R. K. Murthy
- Neural Computation
- 2001

This article points out an important source of inefficiency in Platt’s sequential minimal optimization (SMO) algorithm that is caused by the use of a single threshold value. Using clues from the KKT conditions for the dual problem, two threshold parameters are employed to derive modifications of SMO. These modified algorithms perform significantly faster…
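The two-threshold idea can be sketched as an optimality check. The index sets and the `b_up`/`b_low` definitions below follow the common statement of this test; the toy data and the dual iterate `alpha` are illustrative assumptions, not the paper's experiments:

```python
import numpy as np

def two_threshold_check(alpha, y, K, C, tau=1e-3):
    """Optimality test behind the modified SMO (a sketch of the idea,
    not the paper's full algorithm). Instead of one bias estimate, two
    thresholds b_up and b_low are tracked; the iterate is optimal once
    b_low <= b_up + 2*tau."""
    F = y - K @ (alpha * y)  # per-example error of the current dual iterate
    in_up = ((alpha < C) & (y > 0)) | ((alpha > 0) & (y < 0))
    in_low = ((alpha < C) & (y < 0)) | ((alpha > 0) & (y > 0))
    b_up, b_low = F[in_up].min(), F[in_low].max()
    return b_up, b_low, bool(b_low <= b_up + 2 * tau)

# Toy separable problem with a hypothetical feasible dual iterate.
X = np.array([[1.0], [-1.0]])
y = np.array([1.0, -1.0])
K = X @ X.T
alpha = np.array([0.5, 0.5])       # feasible: 0 <= alpha <= C, sum(alpha*y) = 0
b_up, b_low, optimal = two_threshold_check(alpha, y, K, C=1.0)
print(optimal)  # True: this iterate already satisfies the KKT conditions
```

Working with the pair (`b_up`, `b_low`) also gives SMO a natural violating pair to optimize next, which is where the reported speedup comes from.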

- Gert R. G. Lanckriet, Laurent El Ghaoui, Chiranjib Bhattacharyya, Michael I. Jordan
- Journal of Machine Learning Research
- 2002

When constructing a classifier, the probability of correct classification of future data points should be maximized. We consider a binary classification problem where the mean and covariance matrix of each class are assumed to be known. No further assumptions are made with respect to the class-conditional distributions. Misclassification probabilities are…
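The moment-based construction rests on a multivariate Chebyshev bound (the Marshall–Olkin inequality); a hedged sketch of that step, with $a$ the hyperplane normal and $b$ its offset:

```latex
% Worst case over all distributions with mean \bar{x} and covariance \Sigma:
\sup_{x \sim (\bar{x},\, \Sigma)} \Pr\{a^\top x \ge b\} \;=\; \frac{1}{1 + d^2},
\qquad
d^2 \;=\; \frac{\max\!\big(b - a^\top \bar{x},\, 0\big)^2}{a^\top \Sigma\, a}.
```

Maximizing worst-case accuracy then amounts to maximizing $d$ for both classes simultaneously, which yields a convex (second order cone) program.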

- S. Sathiya Keerthi, Shirish K. Shevade, Chiranjib Bhattacharyya, K. R. K. Murthy
- IEEE Trans. Neural Netw. Learning Syst.
- 2000

In this paper we give a new fast iterative algorithm for support vector machine (SVM) classifier design. The basic problem treated is one that does not allow classification violations. The problem is converted to a problem of computing the nearest point between two convex polytopes. The suitability of two classical nearest point algorithms, due to Gilbert,…
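Gilbert's algorithm, one of the classical nearest-point methods the abstract mentions, can be sketched as a support-point iteration. The two-polytope problem is assumed to have been reduced to finding the point of a single polytope's convex hull nearest the origin (via the difference of the two vertex sets); the polytope below is a toy example:

```python
import numpy as np

def gilbert_nearest_point(V, iters=100):
    """Gilbert-style iteration (sketch): nearest point to the origin in
    conv(V), where V holds the vertices row-wise. Each step moves toward
    the support vertex minimising <z, v>, with an exact line search on
    the segment [z, v]."""
    z = V[0].copy()
    for _ in range(iters):
        v = V[np.argmin(V @ z)]          # support point of conv(V) along -z
        d = v - z
        denom = d @ d
        if denom < 1e-12:
            break
        t = np.clip(-(z @ d) / denom, 0.0, 1.0)
        if t == 0.0:
            break                        # no descent possible: z is optimal
        z = z + t * d
    return z

# Toy polytope: conv{(1,0), (0,1), (1,1)}; its nearest point to the origin
# is the midpoint (0.5, 0.5) of the facing edge.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = gilbert_nearest_point(A)
print(z)  # nearest point of conv(A) to the origin
```

The margin of the hard-margin SVM corresponds to the distance this iteration computes, which is what makes nearest-point solvers a natural fit for the violation-free problem.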

- Shirish K. Shevade, S. Sathiya Keerthi, Chiranjib Bhattacharyya, K. R. K. Murthy
- IEEE Trans. Neural Netw. Learning Syst.
- 2000

This paper points out an important source of inefficiency in Smola and Schölkopf's sequential minimal optimization (SMO) algorithm for support vector machine (SVM) regression that is caused by the use of a single threshold value. Using clues from the KKT conditions for the dual problem, two threshold parameters are employed to derive modifications of SMO…

Learning to rank from relevance judgment is an active research area. Itemwise score regression, pairwise preference satisfaction, and listwise structured learning are the major techniques in use. Listwise structured learning has been applied recently to optimize important non-decomposable ranking criteria like AUC (area under ROC curve) and MAP (mean…
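AUC, one of the non-decomposable criteria named here, is pairwise by nature: it counts how many (positive, negative) pairs the scores order correctly, which is why it cannot be decomposed into per-item losses. A minimal sketch of that definition:

```python
import numpy as np

def pairwise_auc(scores_pos, scores_neg):
    """AUC as the fraction of (positive, negative) pairs the scorer
    orders correctly, with ties counted as half. This whole-list
    dependence is what listwise/structured learning targets directly."""
    sp = np.asarray(scores_pos)[:, None]   # positives down the rows
    sn = np.asarray(scores_neg)[None, :]   # negatives across the columns
    return ((sp > sn) + 0.5 * (sp == sn)).mean()

print(pairwise_auc([0.9, 0.8], [0.4, 0.95]))  # 0.5: two of four pairs ordered correctly
```

Note that changing a single score can flip many pair comparisons at once, so no sum of independent itemwise terms reproduces this objective.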

When constructing a classifier, the probability of correct classification of future data points should be maximized. In the current paper this desideratum is translated in a very direct way into an optimization problem, which is solved using methods from convex optimization. We also show how to exploit Mercer kernels in this setting to obtain nonlinear…

This paper addresses the problem of maximum margin classification given the moments of class conditional densities and the false positive and false negative error rates. Using Chebyshev inequalities, the problem can be posed as a second order cone programming problem. The dual of the formulation leads to a geometric optimization problem, that of computing…

- Pannagadatta K. Shivaswamy, Chiranjib Bhattacharyya, Alexander J. Smola
- Journal of Machine Learning Research
- 2006

We propose a novel second order cone programming formulation for designing robust classifiers which can handle uncertainty in observations. Similar formulations are also derived for designing regression functions which are robust to uncertainties in the regression setting. The proposed formulations are independent of the underlying distribution, requiring…
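The robust constraint in this line of work typically replaces the hard margin inequality with a cone constraint; a sketch, assuming each observation is known only through its mean $\bar{x}_i$ and covariance $\Sigma_i$ and a Chebyshev-style confidence level $\eta$:

```latex
y_i\big(w^\top \bar{x}_i + b\big) \;\ge\; 1 - \xi_i
  \;+\; \kappa \,\lVert \Sigma_i^{1/2} w \rVert_2,
\qquad
\kappa = \sqrt{\tfrac{\eta}{1-\eta}},
```

which is a second order cone constraint in $(w, b, \xi_i)$: the extra norm term inflates the required margin in exact proportion to how much the uncertainty ellipsoid stretches along $w$.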

Motivated from real world problems, like object categorization, we study a particular mixed-norm regularization for Multiple Kernel Learning (MKL). It is assumed that the given set of kernels are grouped into distinct components where each component is crucial for the learning task at hand. The formulation hence employs l∞ regularization for promoting…

- Chiranjib Bhattacharyya, L. R. Grate, Michael I. Jordan, Laurent El Ghaoui, I. Saira Mian
- Journal of Computational Biology
- 2004

Molecular profiling studies can generate abundance measurements for thousands of transcripts, proteins, metabolites, or other species in, for example, normal and tumor tissue samples. Treating such measurements as features and the samples as labeled data points, sparse hyperplanes provide a statistical methodology for classifying data points into one of two…
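A sparse hyperplane of this kind can be sketched with an l1-regularized least-squares fit; the ISTA solver and the synthetic "profiling" data below are illustrative assumptions, not the paper's exact formulation or data:

```python
import numpy as np

def sparse_hyperplane(X, y, lam=0.1, iters=500):
    """l1-regularised least-squares hyperplane fitted by ISTA (a sketch,
    not the paper's exact method). The l1 penalty drives most weights to
    exactly zero, so the surviving features form a small panel that
    carries the two-class decision."""
    n, d = X.shape
    step = n / np.linalg.norm(X, 2) ** 2   # 1/L for the 1/(2n)||Xw - y||^2 term
    w = np.zeros(d)
    for _ in range(iters):
        grad = X.T @ (X @ w - y) / n       # gradient of the smooth part
        w = w - step * grad
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # soft-threshold
    return w

# Synthetic data: 50 measured species, only the first two informative.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
y = np.sign(X[:, 0] - X[:, 1])
w = sparse_hyperplane(X, y)
print(np.count_nonzero(w), "of", X.shape[1], "features kept")
```

The zeroed coordinates are discarded outright, which is what makes the selected panel of transcripts or proteins interpretable in the profiling setting.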