Scalable Kernel Methods via Doubly Stochastic Gradients

@inproceedings{Dai2014ScalableKM,
  title={Scalable Kernel Methods via Doubly Stochastic Gradients},
  author={Bo Dai and Bo Xie and Niao He and Yingyu Liang and Anant Raj and Maria-Florina Balcan and Le Song},
  booktitle={NIPS},
  year={2014}
}
The general perception is that kernel methods are not scalable, and that neural nets are the methods of choice for large-scale nonlinear learning problems. Or have we simply not tried hard enough for kernel methods? Here we propose an approach that scales up kernel methods using a novel concept called "doubly stochastic functional gradients". Our approach relies on the fact that many kernel methods can be expressed as convex optimization problems, which we solve by making two unbiased…
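The "doubly stochastic" part refers to two independent sources of randomness in each functional-gradient step: a randomly sampled training point and a randomly sampled feature (e.g., a random Fourier feature approximating the kernel). Below is a minimal sketch of that idea for kernel ridge regression with a Gaussian kernel; it is not the authors' implementation, and names such as sigma, lam, and step0 are illustrative assumptions.

# Sketch of doubly stochastic functional gradient descent (not the authors' code).
# Assumes a Gaussian RBF kernel approximated by random Fourier features and
# squared loss with L2 regularization; hyperparameter names are illustrative.
import numpy as np

def doubly_stochastic_sgd(X, y, n_iters=1000, sigma=1.0, lam=1e-4, step0=1.0, seed=0):
    rng = np.random.RandomState(seed)
    n, d = X.shape
    omegas = np.empty((n_iters, d))   # random Fourier frequencies, one per iteration
    phases = np.empty(n_iters)        # random phases
    alphas = np.zeros(n_iters)        # coefficients of the learned function

    def predict(x, t):
        # f_t(x) = sum_{i < t} alphas[i] * phi_i(x), phi_i the i-th random feature
        if t == 0:
            return 0.0
        feats = np.sqrt(2.0) * np.cos(omegas[:t] @ x + phases[:t])
        return float(alphas[:t] @ feats)

    for t in range(n_iters):
        i = rng.randint(n)                        # randomness 1: random data point
        omegas[t] = rng.randn(d) / sigma          # randomness 2: random Fourier feature
        phases[t] = rng.uniform(0.0, 2.0 * np.pi)
        step = step0 / (1.0 + t)                  # decaying step size

        residual = predict(X[i], t) - y[i]        # gradient of the squared loss
        phi_t = np.sqrt(2.0) * np.cos(omegas[t] @ X[i] + phases[t])
        alphas[:t] *= (1.0 - step * lam)          # shrinkage from the L2 regularizer
        alphas[t] = -step * residual * phi_t      # coefficient contributed by this step

    return omegas, phases, alphas

# Illustrative usage on toy data:
# X = np.random.randn(500, 5); y = np.sin(X[:, 0])
# omegas, phases, alphas = doubly_stochastic_sgd(X, y)

The learned function is stored only through the per-iteration coefficients and the random seeds of the features, so memory grows linearly with the number of iterations rather than with the full kernel matrix.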
Highly Influential
This paper has highly influenced 12 other papers.
Highly Cited
This paper has 141 citations.
Related Discussions
This paper has been referenced on Twitter 16 times.

Citations

Publications citing this paper: 99 extracted citations.

141 Citations

Citations per Year, 2014–2019 (chart).
Semantic Scholar estimates that this publication has 141 citations based on the available data.


References

Publications referenced by this paper.
Showing 4 of 34 references

Online learning with kernels. IEEE Transactions on Signal Processing, 2001. (Highly Influenced)

Stochastic Block Mirror Descent Methods for Nonsmooth and Stochastic Optimization. SIAM Journal on Optimization, 2015. (Highly Influenced)

Efficiency of Coordinate Descent Methods on Huge-Scale Optimization Problems. SIAM Journal on Optimization, 2012. (Highly Influenced)

Random Laplace Feature Maps for Semigroup Kernels on Histograms. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
