• Corpus ID: 54448072

GADGET SVM: A Gossip-bAseD sub-GradiEnT Solver for Linear SVMs

Haimonti Dutta, Nitin Nataraj
In the era of big data, an important weapon in a machine learning researcher's arsenal is a scalable Support Vector Machine (SVM) algorithm. SVMs are extensively used for solving classification problems. Traditional algorithms for learning SVMs often scale super-linearly with training set size, which quickly becomes infeasible for large data sets. In recent years, scalable algorithms have been designed which study the primal or dual formulations of the problem. This often suggests a way to… 
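The primal formulation mentioned in the abstract is the regularized hinge-loss objective that sub-gradient solvers minimize. A minimal sketch of evaluating that objective and one valid sub-gradient is shown below; the helper name and parameters are illustrative, not from the paper:

```python
import numpy as np

def svm_objective_and_subgradient(w, X, y, lam):
    """Evaluate the linear SVM primal
        f(w) = (lam/2) * ||w||^2 + (1/n) * sum_i max(0, 1 - y_i <w, x_i>)
    and return (objective value, one valid sub-gradient).
    Labels y must be in {-1, +1}."""
    n = X.shape[0]
    margins = y * X.dot(w)
    hinge = np.maximum(0.0, 1.0 - margins)
    obj = 0.5 * lam * w.dot(w) + hinge.mean()
    # A sub-gradient of the hinge term is -y_i x_i wherever the margin
    # is below 1, and 0 elsewhere; the regularizer contributes lam * w.
    active = margins < 1.0
    g = lam * w - (y[active, None] * X[active]).sum(axis=0) / n
    return obj, g
```

At `w = 0` every example violates the margin, so the objective equals the mean hinge loss (1.0) and the sub-gradient is the negative mean of `y_i x_i` over the whole sample.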


Impact of Community Structure on Consensus Machine Learning
Random matrix theory is used to analyze the effects of communities on $\lambda_2$ and consensus, and a critical level of community structure is shown to exist at which $\tau_\epsilon$ reaches a lower bound and is no longer limited by the presence of communities.
Abstracts for NERCCS 2021: Fourth Northeast Regional Conference on Complex Systems
The NERCCS 2021 conference will provide a meta-modelling framework for future conferences to explore the role of knowledge representation in the design and implementation of complex systems.


A Parallel Decomposition Solver for SVM: Distributed dual ascend using Fenchel Duality
A distributed algorithm for solving large-scale support vector machine (SVM) problems, based on a parallel block-update scheme derived from the convex conjugate (Fenchel duality) form of the original SVM problem.
Large-Scale Support Vector Machines: Algorithms and Theory
This document surveys work on SVM training methods that target this large-scale learning regime, and discusses why SGD generalizes well even though it is poor at optimization, and describes algorithms such as Pegasos and FOLOS that extend basic SGD to quickly solve the SVM problem.
Parallel Support Vector Machines: The Cascade SVM
An algorithm for support vector machines (SVM) that can be parallelized efficiently and scales to very large problems with hundreds of thousands of training vectors, which can be spread over multiple processors with minimal communication overhead and requires far less memory.
Pegasos: primal estimated sub-gradient solver for SVM
A simple and effective stochastic sub-gradient descent algorithm for solving the optimization problem cast by Support Vector Machines, which is particularly well suited for large text classification problems, and demonstrates an order-of-magnitude speedup over previous SVM learning methods.
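The Pegasos update summarized above can be sketched in a few lines; this is a minimal illustration under assumed parameter choices (`lam`, `n_iters`, and the toy data are not from the paper), not the authors' implementation:

```python
import numpy as np

def pegasos(X, y, lam=0.01, n_iters=2000, seed=0):
    """Pegasos-style stochastic sub-gradient descent for the linear SVM
    primal. Labels y must be in {-1, +1}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, n_iters + 1):
        i = rng.integers(n)
        eta = 1.0 / (lam * t)          # decreasing step size 1/(lam * t)
        w *= 1.0 - eta * lam           # shrink: gradient of the regularizer
        if y[i] * X[i].dot(w) < 1.0:   # margin violated -> hinge sub-gradient
            w += eta * y[i] * X[i]
        # optional projection onto the ball of radius 1/sqrt(lam)
        norm = np.linalg.norm(w)
        radius = 1.0 / np.sqrt(lam)
        if norm > radius:
            w *= radius / norm
    return w

# Toy demo on linearly separable data (illustrative setup).
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
w = pegasos(X, y)
acc = np.mean(np.sign(X.dot(w)) == y)
```

Each iteration touches a single random example, which is why the method's cost per step is independent of the training set size.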
Modified Logistic Regression: An Approximation to SVM and Its Applications in Large-Scale Text Categorization
A modified version of LR is used to approximate the optimization of SVM by a sequence of unconstrained optimization problems, and it is proved that this approximation will converge to SVM, and an iterative algorithm called "MLR-CG" which uses Conjugate Gradient as its inner loop is proposed.
Optimized cutting plane algorithm for support vector machines
OCAS significantly outperforms state-of-the-art SVM solvers such as SVMlight, SVMperf, and BMRM, achieving speedups of over 1,000 over SVMlight and 20 over SVMperf on some datasets, while obtaining the same precise support vector solution.
Coordinate Descent Method for Large-scale L2-loss Linear Support Vector Machines
A novel coordinate descent algorithm for training linear SVMs with the L2-loss function that is more efficient and stable than state-of-the-art methods such as Pegasos and TRON.
Solving Large Scale Linear SVM with Distributed Block Minimization
This work presents a new algorithm for training linear Support Vector Machines over large datasets; it assumes the dataset is partitioned over several nodes of a cluster and performs distributed block minimization followed by a line search.
Consensus-Based Distributed Support Vector Machines
This paper develops algorithms to train support vector machines when training data are distributed across different nodes and their communication to a centralized processing unit is prohibited.
A Modified Finite Newton Method for Fast Solution of Large Scale Linear SVMs
A fast method for solving linear SVMs with the L2-loss function, suited to large-scale data mining tasks such as text classification, is developed by modifying the finite Newton method of Mangasarian in several ways.