Corpus ID: 156548

Large Scale Transductive SVMs

@article{Collobert2006LargeST,
  title={Large Scale Transductive SVMs},
  author={Ronan Collobert and Fabian H. Sinz and Jason Weston and L{\'e}on Bottou},
  journal={J. Mach. Learn. Res.},
  year={2006},
  volume={7},
  pages={1687--1712}
}
We show how the concave-convex procedure can be applied to transductive SVMs, which traditionally require solving a combinatorial search problem. This provides for the first time a highly scalable algorithm in the nonlinear case. Detailed experiments verify the utility of our approach. Software is available at http://www.kyb.tuebingen.mpg.de/bs/people/fabee/transduction.html.
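For orientation, the construction the abstract alludes to can be sketched as follows. This is a hedged summary in assumed notation (the symbols $R_s$, $H_a$, $s$, and $\theta$ are ours, drawn from the companion "Trading convexity for scalability" line of work), not a verbatim excerpt from the paper. The ramp loss with cutoff $s < 1$ decomposes as

$$
R_s(z) \;=\; \min\bigl(1-s,\ \max(0,\ 1-z)\bigr) \;=\; H_1(z) - H_s(z),
\qquad H_a(z) = \max(0,\ a - z),
$$

so a ramp-loss TSVM objective splits into a convex part (built from $H_1$) plus a concave part (built from $-H_s$), and the concave-convex procedure iterates

$$
\theta^{t+1} \;=\; \arg\min_{\theta}\ \Bigl[\, J_{\mathrm{vex}}(\theta) \;+\; \nabla J_{\mathrm{cav}}(\theta^{t})^{\top} \theta \,\Bigr].
$$

Each iteration is a convex, SVM-like problem, which is what removes the combinatorial search over the labels of the unlabeled points.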
Citations

Trading Convexity for Scalability
Convex learning algorithms, such as Support Vector Machines (SVMs), are often seen as highly desirable because they offer strong practical properties and are amenable to theoretical analysis. …
It is shown how concave-convex programming can be applied to produce faster SVMs where training errors are no longer support vectors, and much faster Transductive SVMs.
Large margin vs. large volume in transductive learning
We consider a large volume principle for transductive learning that prioritizes the transductive equivalence classes according to the volume they occupy in hypothesis space. We approximate volume …
Robust transductive support vector machines
  • Hakan Cevikalp, Merve Elmas
  • 2016 24th Signal Processing and Communication Application Conference (SIU), 2016
A robust transductive support vector machine (RTSVM) classifier suitable for large-scale data is proposed; it uses the robust Ramp loss instead of the Hinge loss for labeled data samples.
Efficient Convex Relaxation for Transductive Support Vector Machine
This work proposes solving the Transductive SVM via a convex relaxation that converts the NP-hard problem to a semi-definite program, and shows the promising performance of the proposed algorithm in comparison with other state-of-the-art implementations of Transductive SVM.
Large-scale robust transductive support vector machines
A robust and fast transductive support vector machine (RTSVM) classifier that can be applied to large-scale data is proposed; it uses the robust Ramp loss instead of the Hinge loss for labeled data samples.
Large scale manifold transduction
This work shows how the regularizer of Transductive Support Vector Machines can be trained by stochastic gradient descent for linear models and multi-layer architectures, and proposes a natural generalization of the TSVM loss function that takes neighborhood and manifold information into account directly.
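To make the stochastic-gradient idea concrete, here is a minimal numpy sketch of SGD on a linear TSVM-style objective: hinge loss on labeled points, a symmetric hinge on unlabeled points. The function name, hyperparameters (lam, eta0), and the plain symmetric hinge (rather than the cited papers' ramp and balancing refinements) are illustrative assumptions, not the cited paper's exact recipe.

```python
import numpy as np

def sgd_linear_tsvm(X_lab, y_lab, X_unl, lam=1e-4,
                    epochs=10, eta0=0.1, seed=0):
    """SGD on a linear TSVM-style objective (illustrative sketch).

    Labeled points use the hinge loss max(0, 1 - y*w.x); unlabeled
    points use the symmetric hinge max(0, 1 - |w.x|), which pushes
    the decision boundary away from unlabeled data. Hyperparameter
    defaults are assumptions, not values from the cited paper.
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(X_lab.shape[1])
    examples = [(x, y) for x, y in zip(X_lab, y_lab)] \
             + [(x, None) for x in X_unl]
    t = 0
    for _ in range(epochs):
        rng.shuffle(examples)
        for x, y in examples:
            eta = eta0 / (1.0 + eta0 * lam * t)   # decaying step size
            t += 1
            g = lam * w                           # regularizer gradient
            z = w @ x
            if y is not None:
                if y * z < 1:                     # labeled hinge active
                    g = g - y * x
            elif abs(z) < 1:                      # unlabeled symmetric hinge
                g = g - np.sign(z) * x
            w = w - eta * g
    return w
```

Both cited TSVM lines additionally enforce a class-balancing constraint on the unlabeled predictions; it is omitted here for brevity.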
Newton Methods for Fast Solution of Semi-supervised Linear SVMs
In this chapter, we present a family of semi-supervised linear support vector classifiers that are designed to handle partially-labeled sparse datasets with a possibly very large number of examples and …
Large scale semi-supervised linear SVMs
An implementation of Transductive SVM (TSVM) is presented that is significantly more efficient and scalable than currently used dual techniques for linear classification problems involving large, sparse datasets, together with a variant of TSVM that involves multiple switching of labels.
A Multi-kernel Framework for Inductive Semi-supervised Learning
A multiple-kernel version of the Transductive SVM (a cluster-assumption-based approach) is proposed and solved via DC (Difference of Convex functions) programming.

References

Showing 1-10 of 47 references.
Trading convexity for scalability
It is shown how concave-convex programming can be applied to produce faster SVMs where training errors are no longer support vectors, and much faster Transductive SVMs.
Learning with Local and Global Consistency
A principled approach to semi-supervised learning is to design a classifying function which is sufficiently smooth with respect to the intrinsic structure collectively revealed by known labeled and unlabeled points.
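The cited method has a short closed form, sketched below for the binary case. This is a hedged reconstruction: the function name is ours, the constant factor (1 - alpha) in front of the solution is dropped since it only rescales the scores, and the affinity matrix W is assumed given.

```python
import numpy as np

def local_global_consistency(W, y, alpha=0.99):
    """Label spreading in closed form (illustrative sketch).

    W     : (n, n) symmetric affinity matrix with zero diagonal.
    y     : (n,) labels in {-1, 0, +1}, with 0 marking unlabeled points.
    alpha : trade-off between neighborhood smoothness and the initial
            labeling (the default here is an assumption).
    Returns soft scores; np.sign of the result gives predictions.
    """
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))   # guard isolated nodes
    S = d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]  # D^-1/2 W D^-1/2
    n = W.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * S, y.astype(float))
```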
Leveraging the margin more carefully
Two leveraging algorithms that build on boosting techniques and employ a bounded loss function of the margin are described; the approach decomposes the non-convex loss into a difference of two convex losses.
Semi-Supervised Classification by Low Density Separation
Three semi-supervised algorithms are proposed: deriving graph-based distances that emphasize low-density regions between clusters, followed by training a standard SVM, and optimizing the Transductive SVM objective function by gradient descent.
Convex Methods for Transduction
This paper presents a relaxation of the 2-class transduction problem based on semi-definite programming (SDP), resulting in a convex optimization problem that has polynomial complexity in the size of the data set.
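Schematically, this family of relaxations replaces the unknown label vector $y \in \{\pm 1\}^n$ by the matrix $\Gamma = y y^{\top}$ and then drops the rank-one constraint. The notation below (kernel matrix $K$, box constraint $C$) is assumed for illustration, not quoted from the cited paper:

$$
\min_{\Gamma}\ \max_{0 \le \alpha \le C}\ \mathbf{1}^{\top}\alpha - \tfrac{1}{2}\,\alpha^{\top}\!\left(K \circ \Gamma\right)\alpha
\quad\text{s.t.}\quad \Gamma \succeq 0,\ \ \operatorname{diag}(\Gamma) = \mathbf{1},\ \ \Gamma_{ij} = y_i y_j \ \text{for labeled pairs } (i, j),
$$

where $\circ$ is the elementwise product. For fixed $\alpha$ the inner objective is affine in $\Gamma$, so the outer problem is a pointwise maximum of affine functions and hence convex, solvable as an SDP in polynomial time.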
Making large-scale support vector machine learning practical
This chapter presents algorithmic and computational results developed for SVMlight V2.0, which make large-scale SVM training more practical and give guidelines for the application of SVMs to large domains.
Maximum Margin Clustering
A new method for clustering based on finding maximum margin hyperplanes through data that leads naturally to a semi-supervised training method for support vector machines by maximizing the margin simultaneously on labeled and unlabeled training data.
Fast Kernel Classifiers with Online and Active Learning
This contribution presents an online SVM algorithm based on the premise that active example selection can yield faster training, higher accuracies, and simpler models, using only a fraction of the training example labels.
A Modified Finite Newton Method for Fast Solution of Large Scale Linear SVMs
A fast method for solving linear SVMs with the L2 loss function, suited to large-scale data mining tasks such as text classification, is developed by modifying the finite Newton method of Mangasarian in several ways.
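As a rough illustration of the finite-Newton idea for the L2-loss linear SVM: on the current active set the objective is quadratic, so each iteration reduces to one linear solve. The function name and stopping rule below are assumptions, and the exact line search of the published method is simplified to a full Newton step:

```python
import numpy as np

def l2svm_finite_newton(X, y, C=1.0, max_iter=50, tol=1e-6):
    """Finite-Newton-style solver for the L2-loss linear SVM (sketch).

    Minimizes 0.5*||w||^2 + 0.5*C * sum_i max(0, 1 - y_i*w.x_i)^2.
    On the active set A = {i : y_i*w.x_i < 1} the objective is
    quadratic, so the Newton step is the solution of a linear system.
    The published method adds an exact line search toward that point;
    here we take the full step for brevity.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(max_iter):
        margins = y * (X @ w)
        A = margins < 1                        # active (violating) examples
        XA, yA = X[A], y[A]
        H = np.eye(d) + C * XA.T @ XA          # Hessian on the active set
        w_new = np.linalg.solve(H, C * XA.T @ yA)
        if np.linalg.norm(w_new - w) < tol * max(1.0, np.linalg.norm(w)):
            return w_new
        w = w_new
    return w
```

Because y_i^2 = 1, the loss on the active set equals 0.5*C*||X_A w - y_A||^2, which is why the Newton step is a single regularized least-squares solve.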
On ψ-Learning
The concept of large margins has been recognized as an important principle in analyzing learning methodologies, including boosting, neural networks, and support vector machines (SVMs). However, this …