
- Saurabh Paul, Christos Boutsidis, Malik Magdon-Ismail, Petros Drineas
- AISTATS
- 2013

Let X be a data matrix of rank ρ, representing n points in d-dimensional space. The linear support vector machine constructs a hyperplane separator that maximizes the 1-norm soft margin. We develop a new oblivious dimension reduction technique which is precomputed and can be applied to any input matrix X. We prove that, with high probability, the margin…

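The oblivious reduction described above can be illustrated with a generic Gaussian random projection: the sketching matrix is drawn without looking at the data, so it can be precomputed and applied to any input. This is only a sketch of the idea; the paper's specific construction and the choice of target dimension `k` may differ.

```python
import numpy as np

def random_project(X, k, seed=0):
    """Gaussian sketch: project the rows of X (n x d) down to k dimensions.

    The projection R is drawn independently of X, so it is oblivious:
    the same R can be precomputed and reused for any input matrix.
    """
    d = X.shape[1]
    rng = np.random.default_rng(seed)
    # Scaling by 1/sqrt(k) preserves row norms in expectation.
    R = rng.standard_normal((d, k)) / np.sqrt(k)
    return X @ R

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 500))   # 100 points in 500 dimensions
Xk = random_project(X, k=50)          # the same 100 points in 50 dimensions
```

An SVM trained on `Xk` then operates in the reduced space; the cited result concerns how well such a sketch preserves the soft margin.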

- Saurabh Paul, Malik Magdon-Ismail, Petros Drineas
- Pattern Recognition
- 2015

We give two provably accurate feature-selection techniques for the linear SVM, one deterministic and one randomized. Our algorithms can be used in an unsupervised or supervised setting. The supervised approach is based on sampling features from support vectors. We prove that the margin in the feature space is preserved to…

- Saurabh Paul, Petros Drineas
- ECML/PKDD
- 2014

We introduce a deterministic sampling-based feature selection technique for regularized least-squares classification. The method is unsupervised and gives worst-case guarantees of the generalization power of the classification function after feature selection with respect to the classification function obtained using all features. We perform experiments on…

- Saurabh Paul
- CIKM
- 2015

Canonical Correlation Analysis (CCA) is a technique that finds how "similar" the subspaces spanned by the columns of two different matrices A ∈ ℝ^(m×n) and B ∈ ℝ^(m×l) are. CCA measures similarity by means of the cosines of the so-called principal angles between…
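The cosines of the principal angles mentioned above have a standard computation, which the sketch below illustrates (this is the textbook recipe, not necessarily the algorithm studied in the paper): orthonormalize each matrix with a QR factorization, then take the singular values of the product of the two orthonormal bases.

```python
import numpy as np

def principal_angle_cosines(A, B):
    """Cosines of the principal angles between span(A) and span(B).

    Standard recipe: orthonormalize each matrix (reduced QR), then the
    cosines are the singular values of Q_A^T Q_B.
    """
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    return np.linalg.svd(Qa.T @ Qb, compute_uv=False)

# A shared direction gives cosine 1; an orthogonal one gives cosine 0.
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])  # span{e1, e2} in R^3
B = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])  # span{e1, e3} in R^3
cos = principal_angle_cosines(A, B)
```

Identical subspaces give all cosines equal to 1, and orthogonal subspaces give all zeros, matching the similarity interpretation in the abstract.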

- Saurabh Paul, Malik Magdon-Ismail, Petros Drineas
- NIPS
- 2015

Selecting a good column (or row) subset of massive data matrices has found many applications in data analysis and machine learning. We propose a new adaptive sampling algorithm that can be used to improve any relative-error column selection algorithm. Our algorithm delivers a tighter theoretical bound on the approximation error which we also demonstrate…
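A generic adaptive-sampling step of the kind the abstract refers to can be sketched as follows: given columns already selected by some base algorithm, sample additional columns with probability proportional to the squared norm of each column's residual after projecting out the span of the chosen columns. The paper's actual algorithm and its error bounds differ in detail; this only illustrates the adaptivity.

```python
import numpy as np

def adaptive_column_sample(A, chosen, s, seed=0):
    """Sample s extra column indices of A, adaptively.

    Probabilities are proportional to the squared residual norm of each
    column after projecting onto the span of the already-chosen columns,
    so columns poorly covered by the current selection are favored.
    """
    Q, _ = np.linalg.qr(A[:, chosen])
    R = A - Q @ (Q.T @ A)            # residual of every column
    p = np.sum(R**2, axis=0)
    p = p / p.sum()
    rng = np.random.default_rng(seed)
    return rng.choice(A.shape[1], size=s, replace=False, p=p)

A = np.eye(3)
extra = adaptive_column_sample(A, chosen=[0], s=1)  # never re-picks column 0
```

Because column 0 is already in the span, its residual (and hence its sampling probability) is zero, so the step can only pick a column that adds new information.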

- Saurabh Paul, Petros Drineas
- Neural Computation
- 2016

We introduce single-set spectral sparsification as a deterministic sampling-based feature selection technique for regularized least-squares classification, which is the classification analog to ridge regression. The method is unsupervised and gives worst-case guarantees of the generalization power of the classification function after feature selection with…
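Single-set spectral sparsification itself is an involved deterministic procedure; a much simpler deterministic stand-in, keeping the features with the largest leverage scores, illustrates the unsupervised, label-free flavor of such selection. This proxy is not the paper's method and lacks its guarantees.

```python
import numpy as np

def top_leverage_features(X, k):
    """Deterministic, unsupervised feature selection via leverage scores.

    A simplified stand-in for spectral sparsification: compute the SVD
    X = U S V^T and keep the k columns of X whose leverage scores
    (squared column norms of V^T) are largest.
    """
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    scores = np.sum(Vt**2, axis=0)    # one score per feature (column of X)
    return np.argsort(scores)[::-1][:k]

X = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])       # feature 2 carries no signal
keep = top_leverage_features(X, k=2)
```

No labels are used anywhere, which is what makes this style of selection applicable before any classifier is trained.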
