Publications
Training Invariant Support Vector Machines
TLDR
This work reports the lowest test error yet published on the well-known MNIST digit recognition benchmark, with SVM training times that are also significantly faster than those of previous SVM methods.
HD-CNN: Hierarchical Deep Convolutional Neural Networks for Large Scale Visual Recognition
TLDR
This paper introduces hierarchical deep CNNs (HD-CNNs), which embed deep CNNs into a two-level category hierarchy, and achieves state-of-the-art results on both the CIFAR-100 and the large-scale 1000-class ImageNet benchmark datasets.
Building Support Vector Machines with Reduced Classifier Complexity
TLDR
A primal method is proposed that decouples the idea of basis functions from the concept of support vectors and greedily finds a set of kernel basis functions, of a specified maximum size, that approximates the SVM primal cost function well.
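The greedy selection idea can be illustrated with a rough sketch. Note the simplifications: this uses a regularized least-squares surrogate rather than the paper's SVM primal loss, and the function name and refit strategy are hypothetical, not the paper's algorithm.

```python
import numpy as np

def greedy_basis_selection(K, y, lam, max_basis):
    """Greedily pick kernel basis functions (columns of the kernel
    matrix K) to fit a regularized least-squares surrogate objective.

    At each step, choose the column most correlated with the current
    residual, then refit coefficients on all chosen columns.
    """
    chosen = []
    residual = y.astype(float).copy()
    beta = np.zeros(0)
    for _ in range(max_basis):
        scores = np.abs(K.T @ residual)
        scores[chosen] = -np.inf          # never re-pick a basis
        chosen.append(int(np.argmax(scores)))
        Kj = K[:, chosen]
        # Refit: (Kj^T Kj + lam * K_cc) beta = Kj^T y, with the RKHS-style
        # regularizer built from the chosen-chosen kernel submatrix K_cc.
        beta = np.linalg.solve(Kj.T @ Kj + lam * K[np.ix_(chosen, chosen)],
                               Kj.T @ y)
        residual = y - Kj @ beta
    return chosen, beta
```

The key point the paper exploits is that the basis set is capped at a user-chosen size, so classifier complexity is controlled independently of how many support vectors the exact SVM solution would have.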
A Modified Finite Newton Method for Fast Solution of Large Scale Linear SVMs
TLDR
A fast method for solving linear SVMs with the L2 loss function, suited to large-scale data-mining tasks such as text classification, is developed by modifying the finite Newton method of Mangasarian in several ways.
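To make the idea concrete, here is a minimal sketch of a Newton-style iteration for the L2-loss linear SVM objective, 0.5·||w||² + C·Σᵢ max(0, 1 − yᵢ w·xᵢ)². This is a simplified illustration, not the paper's algorithm: it omits the line search and the other modifications the paper introduces, and the function name is hypothetical.

```python
import numpy as np

def l2svm_newton(X, y, C, iters=20):
    """Newton-style iteration for the L2-loss linear SVM:
        min_w  0.5*||w||^2 + C * sum_i max(0, 1 - y_i * w.x_i)^2

    On the current active set (margin-violating examples), the objective
    is quadratic, so each step solves its normal equations exactly.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        margins = y * (X @ w)
        sv = margins < 1                 # active (margin-violating) examples
        Xs, ys = X[sv], y[sv]
        # Minimizer of 0.5*||w||^2 + C*||ys - Xs w||^2 on the active set:
        #   (I + 2C * Xs^T Xs) w = 2C * Xs^T ys
        H = np.eye(d) + 2 * C * Xs.T @ Xs
        w_new = np.linalg.solve(H, 2 * C * Xs.T @ ys)
        if np.allclose(w_new, w):        # active set stabilized
            break
        w = w_new
    return w
```

Because the Hessian is d×d rather than n×n, each step is cheap when the feature dimension is modest, which is part of why such methods scale to large text collections.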
Collaborative prediction using ensembles of Maximum Margin Matrix Factorizations
TLDR
This paper investigates ways to further improve the performance of MMMF by casting it within an ensemble approach, and explores and evaluates a variety of alternative ways to define such ensembles.
Support Vector Machine Solvers
This chapter contains sections titled: Introduction, Support Vector Machines, Duality, Sparsity, Early SVM Algorithms, The Decomposition Method, A Case Study: LIBSVM, Conclusion and Outlook, Appendix
Approximation Methods for Gaussian Process Regression
TLDR
A wealth of computationally efficient approximation methods for Gaussian process regression has been proposed recently, and a unifying view of these approaches is given.
Data Parameters: A New Family of Parameters for Learning a Differentiable Curriculum
TLDR
This work introduces data parameters, learnable parameters associated with samples and classes that govern their importance in the learning process, and is the first curriculum learning method to show gains on large-scale image classification and detection tasks.
Compact Random Feature Maps
TLDR
Error bounds for CRAFT maps are proved, demonstrating their superior kernel-reconstruction performance compared to previous approximation schemes, and it is shown how structured random matrices can be used to generate CRAFT maps efficiently.
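For context, the primitive that CRAFT maps build on and compress is the random Fourier feature map of Rahimi and Recht, sketched below for the RBF kernel. This is the baseline scheme, not the CRAFT construction itself (which additionally up-projects and then compresses the features); the function name is hypothetical.

```python
import numpy as np

def random_fourier_features(X, D, gamma, rng):
    """Random Fourier feature map Z such that Z @ Z.T approximates the
    RBF kernel K(x, y) = exp(-gamma * ||x - y||^2).

    Frequencies are drawn from the kernel's spectral density,
    here N(0, 2*gamma*I), with random phase offsets b.
    """
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, D))
    b = rng.uniform(0.0, 2 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)
```

The approximation error of such maps shrinks roughly as 1/√D, so the practical question CRAFT maps address is how to get comparable accuracy from a much smaller feature dimension.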
Alpha seeding for support vector machines
A key practical obstacle in applying support vector machines to many large-scale data-mining tasks is that SVMs generally scale quadratically (or worse) in the number of examples or support vectors.
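The remedy the title refers to, alpha seeding, warm-starts an SVM solver from the dual variables of a previous, related training run instead of from zero. A minimal illustration is sketched below using a simple dual coordinate-descent solver for the linear hinge-loss SVM; this solver and the scale-by-C seeding rule are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def dual_cd_svm(X, y, C, alpha0=None, iters=50):
    """Dual coordinate descent for the linear hinge-loss SVM,
    optionally warm-started from a previous dual solution (alpha seeding).
    Returns the primal weight vector w and the dual variables alpha.
    """
    n, d = X.shape
    alpha = np.zeros(n) if alpha0 is None else np.clip(alpha0, 0.0, C)
    w = X.T @ (alpha * y)                       # keep w = sum_i alpha_i y_i x_i
    Q = np.einsum('ij,ij->i', X, X)             # diagonal of the Gram matrix
    for _ in range(iters):
        for i in range(n):
            g = y[i] * (X[i] @ w) - 1.0         # gradient of dual coordinate i
            a_new = min(max(alpha[i] - g / Q[i], 0.0), C)
            w += (a_new - alpha[i]) * y[i] * X[i]
            alpha[i] = a_new
    return w, alpha

# Seeding example: solve at a small C, then reuse the (rescaled) alphas
# as the starting point for a larger C, instead of restarting from zero.
```

When consecutive problems differ only slightly, as in cross-validation folds or a sweep over C, the seeded solver starts near the new optimum and needs far fewer iterations, which is the speedup alpha seeding targets.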