Publications
Improvements to Platt's SMO Algorithm for SVM Classifier Design
TLDR
Two threshold parameters, suggested by the KKT conditions for the dual problem, are used to derive modifications of SMO that run significantly faster than the original SMO on all benchmark data sets tried.
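As a rough, illustrative sketch of the two-threshold idea (not the paper's pseudocode): the modified SMO maintains two thresholds, b_up and b_low, computed from the dual variables, and declares approximate optimality when b_low <= b_up + 2*tau. The index-set definitions and the names alpha, F, C, and tau below are assumptions made for the example.

    import numpy as np

    def smo_thresholds(alpha, y, F, C):
        """Two-threshold check used in Keerthi-style SMO variants (illustrative).

        alpha : dual variables, shape (n,)
        y     : labels in {-1, +1}, shape (n,)
        F     : F_i = sum_j alpha_j * y_j * K(x_i, x_j) - y_i, shape (n,)
        C     : box constraint on the dual variables
        """
        # b_up is the minimum of F over I_up, b_low the maximum over I_low;
        # at an (approximate) KKT point, b_low <= b_up up to a tolerance.
        in_up  = ((y == +1) & (alpha < C)) | ((y == -1) & (alpha > 0))
        in_low = ((y == +1) & (alpha > 0)) | ((y == -1) & (alpha < C))
        return F[in_up].min(), F[in_low].max()

    def reached_optimality(alpha, y, F, C, tau=1e-3):
        b_up, b_low = smo_thresholds(alpha, y, F, C)
        return b_low <= b_up + 2 * tau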
A Robust Minimax Approach to Classification
TLDR
This work considers a binary classification problem where the mean and covariance matrix of each class are assumed to be known, and addresses the issue of robustness with respect to estimation errors via a simple modification of the input data.
Improvements to the SMO algorithm for SVM regression
TLDR
Two threshold parameters, suggested by the KKT conditions for the dual problem, are used to derive modifications of SMO for regression that run significantly faster than the original SMO on the data sets tried.
A fast iterative nearest point algorithm for support vector machine classifier design
TLDR
Comparative computational evaluation of the new fast iterative algorithm against powerful SVM methods such as Platt's sequential minimal optimization shows that the algorithm is very competitive.
RESIDE: Improving Distantly-Supervised Neural Relation Extraction using Side Information
TLDR
RESIDE is a distantly-supervised neural relation extraction method that utilizes additional side information from knowledge bases (KBs) for improved relation extraction; it employs Graph Convolution Networks to encode syntactic information from text and improves performance even when only limited side information is available.
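For background on the Graph Convolution Network component mentioned above, the following is a minimal, generic GCN propagation step in the standard Kipf-and-Welling form, H' = ReLU(D^{-1/2}(A + I)D^{-1/2} H W); it is not RESIDE's exact architecture, and the adjacency matrix A is assumed here to come from a dependency parse of the sentence.

    import numpy as np

    def gcn_layer(A, H, W):
        """One graph-convolution step: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W).

        A : (n, n) adjacency matrix of the word graph (e.g. dependency edges)
        H : (n, d_in) node features, e.g. word embeddings
        W : (d_in, d_out) learned weight matrix
        """
        A_hat = A + np.eye(A.shape[0])                        # add self-loops
        d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
        H_next = d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W      # symmetric normalization
        return np.maximum(H_next, 0.0)                        # ReLU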
Minimax Probability Machine
TLDR
The goal of minimizing the worst-case probability of misclassifying future data is translated very directly into an optimization problem, which is solved using methods from convex optimization, and an explicit worst-case bound on the misclassification probability is obtained.
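For reference, the minimax probability machine is usually stated as the following convex program (my notation, assuming only the class means and covariances (\mu_\pm, \Sigma_\pm) are known):

\[
\min_{a \neq 0} \; \bigl\|\Sigma_{+}^{1/2} a\bigr\|_2 + \bigl\|\Sigma_{-}^{1/2} a\bigr\|_2
\quad \text{subject to} \quad a^{\top}(\mu_{+} - \mu_{-}) = 1 ,
\]

and the worst-case probability of misclassifying future data with the resulting hyperplane is bounded by \(1/(1 + \kappa_*^2)\), where \(\kappa_*\) is the reciprocal of the optimal objective value.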
Maximum Margin Classifiers with Specified False Positive and False Negative Error Rates
TLDR
Using Chebyshev inequalities, this paper addresses maximum margin classification when the moments of the class conditional densities and the desired false positive and false negative error rates are specified, and extends the formulation to non-linear classifiers via kernel methods.
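As a worked illustration of the Chebyshev step (in my notation, not quoted from the paper): if a class has mean \(\mu\) and covariance \(\Sigma\) and its error rate must be at most \(\eta\) for every distribution with these moments, the one-sided (Cantelli/Marshall-Olkin) inequality shows that the requirement

\[
\sup_{x \sim (\mu, \Sigma)} \Pr\bigl(w^{\top} x + b \ge 0\bigr) \le \eta
\]

is guaranteed by the second-order cone constraint

\[
-\bigl(w^{\top}\mu + b\bigr) \;\ge\; \kappa(\eta)\, \sqrt{w^{\top} \Sigma\, w},
\qquad \kappa(\eta) = \sqrt{\frac{1-\eta}{\eta}},
\]

which is why specified error rates lead to a tractable maximum margin program.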
Second Order Cone Programming Approaches for Handling Missing and Uncertain Data
TLDR
A novel second order cone programming formulation is proposed for designing robust classifiers that can handle uncertainty in observations and that outperform imputation when observations have missing values.
Incorporating Syntactic and Semantic Information in Word Embeddings using Graph Convolutional Networks
TLDR
Word embeddings learned by SynGCN outperform existing methods on various intrinsic and extrinsic tasks and provide an advantage when used with ELMo; an effective framework for incorporating diverse semantic knowledge to further enhance the learned word representations is also proposed.
Structured learning for non-smooth ranking losses
TLDR
This paper proposes new, almost-linear-time algorithms in the max-margin structured learning framework for optimizing two criteria widely used to evaluate search systems: MRR (mean reciprocal rank) and NDCG (normalized discounted cumulative gain).
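Since the entry names the two evaluation criteria, here is a small generic reference implementation of MRR and NDCG@k under their standard definitions (gain 2^rel - 1, log2 position discount); this is evaluation code only, not the paper's optimization algorithm.

    import numpy as np

    def mean_reciprocal_rank(rankings):
        """rankings: list of 0/1 relevance lists, each in the order the system ranked the results."""
        reciprocal_ranks = []
        for rels in rankings:
            hits = [pos for pos, rel in enumerate(rels, start=1) if rel > 0]
            reciprocal_ranks.append(1.0 / hits[0] if hits else 0.0)
        return float(np.mean(reciprocal_ranks))

    def dcg_at_k(rels, k):
        rels = np.asarray(rels, dtype=float)[:k]
        discounts = np.log2(np.arange(2, rels.size + 2))   # log2(i + 1) for positions i = 1..k
        return float(np.sum((2.0 ** rels - 1.0) / discounts))

    def ndcg_at_k(rels, k):
        """rels: graded relevance labels in ranked order; NDCG@k = DCG@k / ideal DCG@k."""
        ideal = dcg_at_k(sorted(rels, reverse=True), k)
        return dcg_at_k(rels, k) / ideal if ideal > 0 else 0.0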
...