# CUSVM: A CUDA IMPLEMENTATION OF SUPPORT VECTOR CLASSIFICATION AND REGRESSION

@inproceedings{Carpenter2009CUSVMAC, title={CUSVM: A CUDA IMPLEMENTATION OF SUPPORT VECTOR CLASSIFICATION AND REGRESSION}, author={Austin Carpenter}, year={2009} }

This paper presents cuSVM, a software package for high-speed Support Vector Machine (SVM) training and prediction that exploits the massively parallel processing power of Graphics Processors (GPUs). cuSVM is written in NVIDIA's CUDA C-language GPU programming environment, includes implementations of both classification and regression, and performs SVM training (prediction) at 13-73 (22-172) times the rate of state-of-the-art CPU software.

## 66 Citations

GPU acceleration for support vector machines

- Computer Science, WIAMIS 2011
- 2011

A GPU-assisted version of the LIBSVM library for Support Vector Machines is presented; it ports the computation of the kernel matrix elements to the GPU, significantly decreasing the processing time for SVM training without altering the classification results.
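The kernel-matrix evaluation that such ports offload is embarrassingly parallel: every matrix element is independent of the others. A minimal NumPy sketch of the RBF kernel matrix (function name illustrative, not LIBSVM's API; the real port runs this as a CUDA kernel):

```python
import numpy as np

def rbf_kernel_matrix(X, Z, gamma=0.5):
    """K[i, j] = exp(-gamma * ||x_i - z_j||^2).

    Each element depends only on one (x_i, z_j) pair, which is why this
    step maps cleanly onto thousands of GPU threads.
    """
    sq_dists = (np.sum(X**2, axis=1)[:, None]
                + np.sum(Z**2, axis=1)[None, :]
                - 2.0 * X @ Z.T)
    return np.exp(-gamma * np.maximum(sq_dists, 0.0))
```

Offloading only this step leaves the SMO iteration on the CPU, which is how such ports keep the classification results bit-for-bit comparable to the original solver.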

Parallel Computing of Support Vector Machines

- Computer Science, ACM Comput. Surv.
- 2019

This survey reviews the state-of-the-art implementations of SVMs, their pros and cons, and suggests possible avenues for future research.

Rgtsvm: Support Vector Machines on a GPU in R

- Computer Science, ArXiv
- 2017

Rgtsvm provides a fast and flexible support vector machine (SVM) implementation for the R language that enables large SVM models to be created by both experienced and novice practitioners.

Evaluating automatically parallelized versions of the support vector machine

- Computer Science, Concurr. Comput. Pract. Exp.
- 2016

This work develops a directive-based approach that converts a gradient-ascent based training algorithm for the CPU to an efficient graphics processing unit (GPU) implementation, and shows an important speed-up when compared to the CPU and OpenACC versions.

High Performance Implementation of Support Vector Machines Using OpenCL

- Computer Science
- 2014

The objective of this thesis is to accelerate an implementation of Support Vector Machines (SVM) on a heterogeneous computing system programmed in OpenCL with C/C++; the performance analysis indicates that the achievable acceleration is hampered by the portions of the SVM training algorithm that are sequential.

Fast Implementation of String-Kernel-Based Support Vector Classifiers by GPU Computing

- Computer Science, ICONIP
- 2010

A GPU based SVM solver for large scale text datasets using Platt's Sequential Minimal Optimization algorithm is proposed, achieving a speedup of 5-40 times over LibSVM running on a high-end traditional processor.
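A string kernel reduces each pair of texts to an inner product over substring counts; since every pair is independent, kernel evaluations batch naturally onto a GPU. A hedged Python sketch of one common choice, the p-spectrum (k-mer) kernel — illustrative only, not necessarily the kernel that solver uses:

```python
from collections import Counter

def spectrum_kernel(s, t, k=2):
    """p-spectrum string kernel: inner product of k-mer count vectors.

    Counts every length-k substring of each string, then sums the
    products of matching counts. Independent per (s, t) pair, which is
    what makes GPU batching of the kernel matrix pay off.
    """
    cs = Counter(s[i:i + k] for i in range(len(s) - k + 1))
    ct = Counter(t[i:i + k] for i in range(len(t) - k + 1))
    return sum(cs[m] * ct[m] for m in cs)
```

For example, "abab" contains the 2-mers {ab: 2, ba: 1}, so its kernel value with itself is 2*2 + 1*1 = 5.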

A survey of GPU accelerated SVM

- Computer Science, ACM Southeast Regional Conference
- 2014

This work surveys the mathematical optimization algorithms of SVM training process, as well as GPU accelerated implementations of SVC, which have achieved high performance and speedup.

Parallel Training of a Back-Propagation Neural Network Using CUDA

- Computer Science, 2010 Ninth International Conference on Machine Learning and Applications
- 2010

This work provides an implementation of the back-propagation algorithm on CUDA, a parallel computing architecture developed by NVIDIA; by using CUBLAS, a CUDA implementation of the Basic Linear Algebra Subprograms (BLAS) library, the process is simplified.
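The reason BLAS helps is that a back-propagation step decomposes almost entirely into matrix multiplies. A minimal NumPy sketch for a one-hidden-layer network (every `@` below is the GEMM call that CUBLAS would accelerate; the architecture and names are illustrative, not that paper's code):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def backprop_step(X, y, W1, W2, lr=0.1):
    """One gradient-descent step on mean-squared error.

    Forward pass, deltas, and weight updates are all matrix products
    plus cheap elementwise ops -- exactly the BLAS-friendly structure
    the CUDA/CUBLAS port exploits.
    """
    h = sigmoid(X @ W1)                     # hidden activations (GEMM)
    out = sigmoid(h @ W2)                   # network outputs (GEMM)
    d_out = (out - y) * out * (1 - out)     # output-layer delta
    d_h = (d_out @ W2.T) * h * (1 - h)      # hidden-layer delta (GEMM)
    W2 -= lr * h.T @ d_out                  # weight updates (GEMMs)
    W1 -= lr * X.T @ d_h
    return W1, W2, float(np.mean((out - y) ** 2))
```

Because each step is dominated by dense GEMMs, the GPU speedup comes essentially for free once the data resides in device memory.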

ACC-SVM : Accelerating SVM on GPUs using OpenACC

- Computer Science
- 2016

This paper uses the OpenACC programming model to parallelize SVM, producing ACC-SVM, and applies an auto-tuning framework to narrow the gap between the CUDA and OpenACC performance results.

A novel FPGA-based SVM classifier

- Computer Science, 2010 International Conference on Field-Programmable Technology
- 2010

This work proposes a scalable FPGA architecture for the acceleration of SVM classification, which exploits the device heterogeneity and the dynamic range diversities among the dataset attributes, and introduces the first FPGA-oriented cascade SVM classifier scheme, which intensifies the custom-arithmetic properties of the heterogeneous architecture and boosts the classification performance even more.

## References

Showing 1-10 of 18 references.

Fast support vector machine training and classification on graphics processors

- Computer Science, ICML '08
- 2008

A solver for Support Vector Machine training run on a GPU, using the Sequential Minimal Optimization algorithm and an adaptive first and second order working set selection heuristic, which achieves speedups of 9-35x over LIBSVM running on a traditional processor.

LIBSVM: A library for support vector machines

- Computer Science, TIST
- 2011

Issues such as solving SVM optimization problems, theoretical convergence, multiclass classification, probability estimates, and parameter selection are discussed in detail.

Parallel sequential minimal optimization for the training of support vector machines

- Computer Science, IEEE Trans. Neural Networks
- 2006

The parallel SMO is developed using the message passing interface (MPI) and shows great speedup on the Adult data set and the Modified National Institute of Standards and Technology (MNIST) data set when many processors are used.

Fast training of support vector machines using sequential minimal optimization, advances in kernel methods

- Computer Science
- 1999

SMO breaks the large quadratic programming (QP) problem of SVM training into a series of smallest possible QP subproblems, which avoids using a time-consuming numerical QP optimization as an inner loop; hence SMO is fastest for linear SVMs and sparse data sets.
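The "smallest possible" subproblem SMO solves is over just two Lagrange multipliers, and it has a closed-form solution. A hedged sketch of Platt's analytic two-variable update (bias update and error caching omitted; variable names are illustrative):

```python
def smo_pair_update(a1, a2, y1, y2, E1, E2, K11, K22, K12, C):
    """Analytic solution of SMO's two-variable QP subproblem.

    E_i are prediction errors f(x_i) - y_i, K_ij are kernel values,
    C is the box constraint. Returns the updated pair (alpha_1, alpha_2).
    """
    eta = K11 + K22 - 2.0 * K12            # curvature along the constraint line
    if eta <= 0:                           # degenerate pair: skip in this sketch
        return a1, a2
    a2_new = a2 + y2 * (E1 - E2) / eta     # unconstrained optimum for alpha_2
    # Clip to the box [L, H] implied by 0 <= alpha <= C and
    # the equality constraint sum_i alpha_i * y_i = const.
    if y1 == y2:
        L, H = max(0.0, a1 + a2 - C), min(C, a1 + a2)
    else:
        L, H = max(0.0, a2 - a1), min(C, C + a2 - a1)
    a2_new = min(max(a2_new, L), H)
    a1_new = a1 + y1 * y2 * (a2 - a2_new)  # keeps y1*a1 + y2*a2 unchanged
    return a1_new, a2_new
```

Because each update is this cheap, the cost of an SMO iteration is dominated by the kernel evaluations needed for the errors E_i, not by the optimization itself.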

Making large scale SVM learning practical

- Computer Science
- 1998

This chapter presents algorithmic and computational results developed for SVM light V 2.0, which make large-scale SVM training more practical and give guidelines for the application of SVMs to large domains.

An improved training algorithm for support vector machines

- Computer Science, Neural Networks for Signal Processing VII. Proceedings of the 1997 IEEE Signal Processing Society Workshop
- 1997

This paper presents a decomposition algorithm that is guaranteed to solve the QP problem and that does not make assumptions on the expected number of support vectors.

Support-Vector Networks

- Computer Science, Machine Learning
- 2004

High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated, and the performance of the support-vector network is compared to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.

Improvements to Platt's SMO Algorithm for SVM Classifier Design

- Computer Science, Neural Computation
- 2001

Using clues from the KKT conditions for the dual problem, two threshold parameters are employed to derive modifications of SMO that perform significantly faster than the original SMO on all benchmark data sets tried.
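The two thresholds in question are usually written b_up and b_low: the minimum and maximum of the errors F_i = f(x_i) - y_i over the index sets where the KKT conditions allow alpha_i to move up or down. A hedged NumPy sketch of Keerthi-style dual thresholds (index-set encoding is illustrative):

```python
import numpy as np

def dual_thresholds(F, y, alpha, C, eps=1e-12):
    """b_up / b_low from F_i = f(x_i) - y_i, per the KKT conditions.

    I_up  = {0 < a < C} U {y=+1, a=0} U {y=-1, a=C}
    I_low = {0 < a < C} U {y=+1, a=C} U {y=-1, a=0}
    At optimality b_low <= b_up (up to tolerance); the gap b_low - b_up
    both drives working-pair selection and serves as the stopping rule.
    """
    in_bounds = (alpha > eps) & (alpha < C - eps)
    i_up = in_bounds | ((y > 0) & (alpha <= eps)) | ((y < 0) & (alpha >= C - eps))
    i_low = in_bounds | ((y > 0) & (alpha >= C - eps)) | ((y < 0) & (alpha <= eps))
    return np.min(F[i_up]), np.max(F[i_low])

def is_optimal(F, y, alpha, C, tol=1e-3):
    b_up, b_low = dual_thresholds(F, y, alpha, C)
    return b_low <= b_up + 2.0 * tol
```

Tracking two thresholds instead of one bias estimate is what lets the modified SMO detect KKT violations precisely, which is the source of the speedup the paper reports.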

Gradient-based learning applied to document recognition

- Computer Science, Proc. IEEE
- 1998

This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task, and convolutional neural networks are shown to outperform all other techniques.

A Novel Model of Working Set Selection for SMO Decomposition Methods

- Computer Science, 19th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2007)
- 2007

A new model for working set selection in sequential minimal optimization (SMO) decomposition methods is proposed, which selects B as the working set without reselection.