Accelerated Training for Matrix-norm Regularization: A Boosting Approach
TLDR
In this paper, we propose a boosting method for regularized learning that guarantees ε accuracy within O(1/ε) iterations.
Convex Multi-view Subspace Learning
TLDR
In this paper, we present a convex formulation of multi-view subspace learning that enforces conditional independence while reducing dimensionality.
Tailoring density estimation via reproducing kernel moment matching
TLDR
Moment matching is a popular means of parametric density estimation.
Inter-Comparison of High-Resolution Satellite Precipitation Products over Central Asia
TLDR
This paper examines the spatial error structures of eight precipitation estimates derived from four different satellite retrieval algorithms: TRMM Multi-satellite Precipitation Analysis (TMPA), Climate Prediction Center morphing technique (CMORPH), Global Satellite Mapping of Precipitation (GSMaP), and Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN).
Hyperparameter Learning for Graph Based Semi-supervised Learning Algorithms
TLDR
We propose a graph learning method for harmonic energy minimization, obtained by minimizing the leave-one-out prediction error on the labeled data points.
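As a rough illustration of the harmonic energy minimization that the TLDR above refers to (a standard graph-based semi-supervised scheme, not this paper's hyperparameter-learning contribution), here is a minimal sketch; the function name `harmonic_labels` and the toy graph are assumptions for the example:

```python
import numpy as np

def harmonic_labels(W, y_labeled, labeled_idx):
    """Harmonic energy minimization on a similarity graph.

    W: symmetric affinity matrix (n x n)
    y_labeled: labels for the labeled nodes
    labeled_idx: indices of the labeled nodes
    Returns soft labels for the unlabeled nodes via the
    harmonic solution f_u = -L_uu^{-1} L_ul f_l.
    """
    n = W.shape[0]
    unlabeled_idx = np.setdiff1d(np.arange(n), labeled_idx)
    D = np.diag(W.sum(axis=1))
    L = D - W  # combinatorial graph Laplacian
    # Partition the Laplacian into unlabeled/labeled blocks.
    L_uu = L[np.ix_(unlabeled_idx, unlabeled_idx)]
    L_ul = L[np.ix_(unlabeled_idx, labeled_idx)]
    return np.linalg.solve(L_uu, -L_ul @ y_labeled)

# Toy chain graph 0-1-2 with endpoints labeled 0 and 1:
W = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
f_u = harmonic_labels(W, np.array([0., 1.]), np.array([0, 2]))
```

The middle node lands at the average of its neighbors' labels, which is exactly the harmonic property the energy minimization enforces.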
Smoothing multivariate performance measures
TLDR
We show that CPMs converge to an ε-accurate solution in O(1/λε) iterations, where λ is the trade-off parameter between the regularizer and the loss function.
Scalable and Sound Low-Rank Tensor Learning
TLDR
We propose directly optimizing the tensor trace norm by approximating its dual spectral norm, and we show that the approximation bounds can be efficiently converted to the original problem via the generalized conditional gradient algorithm.
Decomposition-Invariant Conditional Gradient for General Polytopes with Line Search
TLDR
We show that by employing an away-step update, similar rates can be generalized to arbitrary polytopes with strong empirical performance.
Generalized Conditional Gradient for Sparse Estimation
TLDR
We investigate the generalized conditional gradient (GCG) algorithm for solving structured sparse optimization problems, demonstrating that, with some enhancements, it can provide a more efficient alternative to current state-of-the-art approaches.
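To give a flavor of the conditional gradient (Frank-Wolfe) family that GCG generalizes, here is a minimal sketch on an L1-ball-constrained least-squares problem; this is the textbook algorithm, not the paper's enhanced variant, and the function name and step-size schedule are assumptions for the example:

```python
import numpy as np

def frank_wolfe_l1(A, b, tau, iters=200):
    """Conditional gradient for min ||Ax - b||^2 s.t. ||x||_1 <= tau.

    The linear minimization oracle over the L1 ball returns a signed,
    scaled coordinate vector, so each iterate is a convex combination
    of at most `iters` vertices -- which keeps iterates sparse.
    """
    n = A.shape[1]
    x = np.zeros(n)
    for t in range(iters):
        grad = 2 * A.T @ (A @ x - b)
        # LMO: L1-ball vertex most negatively correlated with the gradient.
        i = np.argmax(np.abs(grad))
        s = np.zeros(n)
        s[i] = -tau * np.sign(grad[i])
        gamma = 2.0 / (t + 2)  # standard open-loop step size
        x = (1 - gamma) * x + gamma * s
    return x
```

The per-iteration cost is one gradient and one coordinate selection, which is the efficiency argument typically made for conditional-gradient methods on structured sparse problems.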
Kernel Measures of Independence for non-iid Data
TLDR
We extend the Hilbert-Schmidt Independence Criterion from i.i.d. data to structured and interdependent data.
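For context, the i.i.d. Hilbert-Schmidt Independence Criterion that this paper extends has a simple empirical form; a minimal sketch of the (biased) estimator, with the function name `hsic` assumed for the example:

```python
import numpy as np

def hsic(K, L):
    """Biased empirical Hilbert-Schmidt Independence Criterion.

    K, L: kernel (Gram) matrices of the two variables (n x n).
    HSIC = trace(K H L H) / (n - 1)^2, where H = I - (1/n) 11^T
    centers the kernels in feature space.
    """
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```

A constant kernel on one variable yields HSIC of zero (centering annihilates it), while matching kernels on a non-constant variable give a strictly positive value, which is the dependence-detection behavior the criterion is built around.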