Petuum: A New Platform for Distributed Machine Learning on Big Data
  • E. Xing, Q. Ho, +7 authors Y. Yu
  • Computer Science
  • IEEE Transactions on Big Data
  • 30 December 2013
TLDR
We propose a general-purpose framework, Petuum, that systematically addresses data- and model-parallel challenges in large-scale ML, by observing that many ML programs are fundamentally optimization-centric and admit error-tolerant, iterative-convergent algorithmic solutions.
Accelerated Training for Matrix-norm Regularization: A Boosting Approach
TLDR
In this paper, we propose a boosting method for regularized learning that guarantees ε accuracy within O(1/ε) iterations.
Additive Approximations in High Dimensional Nonparametric Regression via the SALSA
TLDR
We propose SALSA, which bridges this gap by allowing interactions between variables, but controls model capacity by limiting the order of interactions.
Convex Multi-view Subspace Learning
TLDR
In this paper, we present a convex formulation of multi-view subspace learning that enforces conditional independence while reducing dimensionality.
On Decomposing the Proximal Map
  • Y. Yu
  • Mathematics, Computer Science
  • NIPS
  • 5 December 2013
TLDR
This paper initiates a systematic investigation of when the proximal map of a sum of functions decomposes into the composition of proximal maps of the individual summands.
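For context, the decomposition question above has a well-known positive instance: for two ℓ1 penalties, the proximal map of the sum equals the composition of the two individual soft-thresholding operators. A minimal NumPy check of that special case (this illustrates only the phenomenon, not the paper's general decomposition conditions):

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximal map of lam * |x|: elementwise soft-thresholding.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

x = np.linspace(-3.0, 3.0, 101)
lam1, lam2 = 0.5, 0.8

# Proximal map of the summed penalty (lam1 + lam2) * |x| ...
direct = soft_threshold(x, lam1 + lam2)
# ... versus the composition of the two individual proximal maps.
composed = soft_threshold(soft_threshold(x, lam2), lam1)

assert np.allclose(direct, composed)
```

In general such a decomposition fails; characterizing when it holds is exactly the paper's subject.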
Analysis of Kernel Mean Matching under Covariate Shift
TLDR
Focusing on a particular covariate shift problem, we derive high-probability confidence bounds for the kernel mean matching (KMM) estimator, whose convergence rate turns out to depend on some regularity measure of the regression function and on some capacity measure of the kernel.
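For context on the estimator being analyzed: KMM reweights source points so that their kernel mean matches the target sample's kernel mean, which reduces to a box-constrained quadratic program. A minimal sketch under assumed choices (RBF kernel, weight bound `B`, synthetic Gaussian data — illustrative, not the paper's setup):

```python
import numpy as np
from scipy.optimize import minimize

def rbf(a, b, gamma=0.5):
    # RBF kernel matrix between row-sample arrays a (n, d) and b (m, d).
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(40, 1))  # source sample
tgt = rng.normal(0.5, 1.0, size=(60, 1))  # mean-shifted target sample

n, m = len(src), len(tgt)
K = rbf(src, src)
kappa = (n / m) * rbf(src, tgt).sum(axis=1)

# KMM objective: minimize 0.5 b'Kb - kappa'b subject to 0 <= b <= B,
# i.e. match the weighted source kernel mean to the target kernel mean.
B = 10.0
res = minimize(lambda b: 0.5 * b @ K @ b - kappa @ b,
               np.ones(n), jac=lambda b: K @ b - kappa,
               method="L-BFGS-B", bounds=[(0.0, B)] * n)
beta = res.x  # importance weights on the source points
```

The paper's bounds concern how fast such `beta`-weighted estimates converge; a practical implementation would also constrain the weights' mean near 1.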
Better Approximation and Faster Algorithm Using the Proximal Average
  • Y. Yu
  • Mathematics, Computer Science
  • NIPS
  • 5 December 2013
TLDR
We re-examine this powerful methodology and point out a nonsmooth approximation which simply pretends the linearity of the proximal map.
Efficient Structured Matrix Rank Minimization
TLDR
We study the problem of finding structured low-rank matrices using nuclear norm regularization where the structure is encoded by a linear map.
Complex Event Detection using Semantic Saliency and Nearly-Isotonic SVM
TLDR
We propose a new prioritizing procedure based on the notion of semantic saliency that assesses the relevance of each shot to the event of interest.