Adaptive Kernel Value Caching for SVM Training

@article{Li2020AdaptiveKV,
  title={Adaptive Kernel Value Caching for SVM Training},
  author={Q. Li and Zeyi Wen and Bingsheng He},
  journal={IEEE Transactions on Neural Networks and Learning Systems},
  year={2020},
  volume={31},
  pages={2376--2386}
}
  • Published 8 November 2019
  • Computer Science
Support vector machines (SVMs) can solve structured multioutput learning problems such as multilabel classification, multiclass classification, and vector regression. SVM training is expensive, especially for large and high-dimensional data sets. The bottleneck of SVM training often lies in the kernel value computation. In many real-world problems, the same kernel values are used across many iterations of the training, which makes caching kernel values potentially useful. The…
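The caching idea in the abstract can be sketched with a simple LRU cache over rows of the kernel matrix. This is a minimal illustration, not code from the paper (class and parameter names here are invented for the example); the paper's contribution is choosing the replacement strategy adaptively, whereas this sketch uses plain LRU as the simplest baseline:

```python
from collections import OrderedDict
import numpy as np

class KernelRowCache:
    """LRU cache for rows of the RBF kernel matrix.

    During SVM training the same kernel rows are requested again and again
    across iterations, so caching a row avoids recomputing K(x_i, .)
    against all n training points on every request.
    """

    def __init__(self, X, capacity, gamma=0.5):
        self.X = X                  # training data, shape (n, d)
        self.capacity = capacity    # maximum number of cached rows
        self.gamma = gamma          # RBF kernel width
        self.cache = OrderedDict()  # row index -> kernel row

    def _compute_row(self, i):
        # RBF kernel row: K[i, j] = exp(-gamma * ||x_i - x_j||^2)
        diff = self.X - self.X[i]
        return np.exp(-self.gamma * np.einsum('ij,ij->i', diff, diff))

    def get_row(self, i):
        if i in self.cache:                 # cache hit: refresh recency
            self.cache.move_to_end(i)
            return self.cache[i]
        row = self._compute_row(i)          # cache miss: compute the row
        if len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        self.cache[i] = row
        return row
```

Each cached row costs O(n) memory, so `capacity` trades memory for recomputation; an adaptive scheme would additionally switch or tune the replacement policy based on the observed access pattern.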
Decision boundary clustering for efficient local SVM
Sequential Minimal Optimization for One-Class Slab Support Vector Machine
TLDR
This paper proposes a fast training method for One-Class Slab SVMs using an updated Sequential Minimal Optimization (SMO) that divides the multivariable optimization problem into smaller subproblems of size two that can then be solved analytically.
A Survey on Federated Learning Systems: Vision, Hype and Reality for Data Privacy and Protection
TLDR
A comprehensive review of federated learning systems is conducted, and a thorough categorization is provided according to six aspects: data distribution, machine learning model, privacy mechanism, communication architecture, scale of federation, and motivation of federation.
Development and Validation of a Prediction Model for Elevated Arterial Stiffness in Chinese Patients With Diabetes Using Machine Learning
TLDR
The gradient boosting-based prediction system achieves good classification performance for elevated arterial stiffness prediction and is easily accessible for further clinical studies and use.
A Support Vector Regression-Based Integrated Navigation Method for Underwater Vehicles
TLDR
In this work, the INS/DVL integrated navigation system model is established to deal with DVL malfunctions, and the support vector regression (SVR) algorithm is used to build the velocity regression prediction model of the DVL.
A fast learning algorithm for One-Class Slab Support Vector Machines

References

SHOWING 1-10 OF 24 REFERENCES
Accelerated Asynchronous Greedy Coordinate Descent Algorithm for SVMs
TLDR
An asynchronous accelerated greedy coordinate descent algorithm (AsyAGCD) for SVMs that can handle more SVM formulations (including binary classification and regression SVMs) than AsyGCD and is much faster than existing SVM solvers (including AsyGCD).
Learning with Idealized Kernels
TLDR
This paper formulates the problem of adapting the kernel so that it becomes more similar to the so-called ideal kernel as a distance metric learning problem, searching for a suitable linear transform (feature weighting) in the kernel-induced feature space.
Core Vector Machines: Fast SVM Training on Very Large Data Sets
TLDR
This paper shows that many kernel methods can be equivalently formulated as minimum enclosing ball (MEB) problems in computational geometry, obtains provably approximately optimal solutions with the idea of core sets, and proposes the Core Vector Machine (CVM) algorithm, which can be used with nonlinear kernels and has a time complexity that is linear in m.
MASCOT: Fast and Highly Scalable SVM Cross-Validation Using GPUs and SSDs
TLDR
This paper proposes a scheme to dramatically improve the scalability and efficiency of SVM cross-validation through the following key ideas: precomputing kernel values and reusing them, storing the precomputed kernel values in a high-speed storage framework, and designing a parallel kernel-value read algorithm.
ThunderSVM: A Fast SVM Library on GPUs and CPUs
TLDR
An efficient and open-source SVM software toolkit called ThunderSVM which exploits the high performance of Graphics Processing Units (GPUs) and multi-core CPUs, and designs a convex optimization solver in a general way such that SVC, SVR, and one-class SVMs share the same solver for ease of maintenance.
Making large scale SVM learning practical
TLDR
This chapter presents algorithmic and computational results developed for SVM light V 2.0, which make large-scale SVM training more practical and give guidelines for the application of SVMs to large domains.
Sequential Minimal Optimization: A Fast Algorithm for Training Support Vector Machines
This paper proposes a new algorithm for training support vector machines: Sequential Minimal Optimization, or SMO. Training a support vector machine requires the solution of a very large quadratic…
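The two-variable subproblem that SMO solves analytically can be sketched in a few lines. This is a minimal sketch of the standard update from Platt's formulation (function and variable names are illustrative, not taken from any of the listed papers): the pair (alpha_i, alpha_j) is optimized in closed form while keeping the equality constraint sum(y * alpha) fixed and each alpha within [0, C].

```python
def smo_pair_update(a_i, a_j, y_i, y_j, E_i, E_j, K_ii, K_jj, K_ij, C):
    """Analytic solution of the two-variable SMO subproblem.

    E_i and E_j are the prediction errors f(x_i) - y_i and f(x_j) - y_j;
    K_** are kernel matrix entries; C is the box constraint.
    """
    # Feasible interval for alpha_j implied by the equality constraint.
    if y_i != y_j:
        L, H = max(0.0, a_j - a_i), min(C, C + a_j - a_i)
    else:
        L, H = max(0.0, a_i + a_j - C), min(C, a_i + a_j)
    eta = K_ii + K_jj - 2.0 * K_ij   # curvature along the search line
    if eta <= 0 or L >= H:
        return a_i, a_j              # degenerate pair: make no progress
    a_j_new = a_j + y_j * (E_i - E_j) / eta
    a_j_new = min(H, max(L, a_j_new))              # clip into [L, H]
    a_i_new = a_i + y_i * y_j * (a_j - a_j_new)    # restore the constraint
    return a_i_new, a_j_new
```

A full SMO solver wraps this step in a loop that heuristically selects the pair (i, j) violating the KKT conditions most, which is where the per-iteration kernel-row lookups (and hence the value of caching) come from.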
Efficient Multi-Class Probabilistic SVMs on GPUs
TLDR
GMP-SVM is proposed to reduce high-latency memory accesses and memory consumption through batch processing and computation/data reuse and sharing. Experimental results show that the solution outperforms LibSVM by 100 times while retaining the same accuracy.
A GPU-tailored approach for training kernelized SVMs
TLDR
This work presents a method for efficiently training binary and multiclass kernelized SVMs on a Graphics Processing Unit (GPU) through the use of a novel clustering technique, which is orders of magnitude faster than existing CPU libraries and several times faster than prior GPU approaches.
Active Learning with Multi-Label SVM Classification
TLDR
This paper first proposes two novel multi-label active learning strategies, a max-margin prediction uncertainty strategy and a label cardinality inconsistency strategy, and then integrates them into an adaptive framework of multi-label active learning.